| paper_id | paper_title | paper_abstract | paper_acceptance | meta_review | label | review_ids | review_writers | review_contents | review_ratings | review_confidences | review_reply_tos |
|---|---|---|---|---|---|---|---|---|---|---|---|
| stringlengths 19-21 | stringlengths 8-170 | stringlengths 8-5.01k | stringclasses 18 values | stringlengths 29-10k | stringclasses 3 values | sequence | sequence | sequence | sequence | sequence | sequence |
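The schema above appears to follow the Hugging Face `datasets` convention: scalar string columns for the paper-level fields and six index-aligned `sequence` columns for the per-comment fields. As a minimal sketch of loading and inspecting a record (the Hub path `example-org/peer-reviews` is a placeholder, not this dataset's actual identifier):

```python
from datasets import load_dataset

# Placeholder Hub path; substitute the dataset's actual identifier.
ds = load_dataset("example-org/peer-reviews", split="train")

print(ds.features)                # column names and types, matching the table above
record = ds[0]
print(record["paper_id"])         # e.g. "nips_2022_fhO6vCGuuag"
print(len(record["review_ids"]))  # one entry per comment, across all six sequences
```

The rows below preview individual records.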
nips_2022_fhO6vCGuuag | On the inability of Gaussian process regression to optimally learn compositional functions | We rigorously prove that deep Gaussian process priors can outperform Gaussian process priors if the target function has a compositional structure. To this end, we study information-theoretic lower bounds for posterior contraction rates for Gaussian process regression in a continuous regression model. We show that if the true function is a generalized additive function, then the posterior based on any mean-zero Gaussian process can only recover the truth at a rate that is strictly slower than the minimax rate by a factor that is polynomially suboptimal in the sample size $n$. | Accept | The reviewers unanimously agree that the theory here exhibiting a particular case where Gaussian process priors are inferior to deep Gaussian processes is interesting, and furthermore that the proof techniques themselves are novel. Indeed, reviewers had minimal or no substantial concerns about the paper, and most of the questions asked by reviewers txpX and sPbe read as simple follow up questions that the authors may choose to include discussion on. | train | [
"518ec8iJz2O",
"MsxoNJqWU2Q",
"IzIWEARD0c3",
"t5RY0cjPPLi",
"l4v46ytwvtg",
"m2fdwXc9h7o"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the positive assessment of our work and kind words.",
" Thank you for the constructive suggestions and helpful comments. In reply to your comments:\n\n1. The symbol $L$ is overloaded to imply both Lipschitz constant and the $L^2$ space.\n\nThe Lipschitz constant has been changed to $\\Lambda.$\n\n2. In line 109, it seems to me that $f_i$ was overloaded to imply both the $i$-th additive component of $f_0$ and later on $f_0(x_i)$ with $x_i$ being the $i$-th observed input. I could be wrong, so I invite the authors to clarify this.\n\nCorrected, thanks.\n\n3. The subscript $i$ in $x_i$ is also used to denote both the $i$-th observed input and the $i$-th dimension of an arbitrary input.\n\nThe $x_i$ in the nonparametric regression has been renamed $X_i$ to clearly differentiate these cases.\n\n4. In line 113, when the minimax rate is introduced, I had to assume that this is the rate for deep GP prior based on the motivation of this work. Eq. (13) of [32] seems to suggest that the analysis is for general nonparametric regression with deep NN. Can you elaborate on the extension of this to GP with deep prior?\n\nWe now explicitly state and reference that this rate is attainable by suitably calibrated deep GPs.\n\n5. Also [32] seems to suggest that the deep architecture has to satisfy certain conditions to achieve such rate. Can you provide a brief summarization of what those conditions entail, and whether they are realistic in practice? For example, in practice we also have to train the weights of the network, and I suppose a large architecture would affect the convergence rate.\n\nIn [32] it is assumed that the depth of the network is of the order $O(\\log n)$ with $n$ the sample size. It is known that for ReLU networks, this is necessary to get the optimal convergence rates for smooth regression functions. Moreover, [32] assumes a minimal width of the hidden layers and more importantly, the network sparsity ($=$ number of non-zero network parameters) needs to be of the order $n \\times$minimax rate (up to $\\log n$ factors). \n\nWe prefer not to mention the details underlying the deep network architectures in [32], since we are focusing more on Bayesian approaches. We have instead added a sentence before Theorem 1 overviewing the hierarchical deep GP construction in [10], which attains the minimax rate. We note that the deep GP prior in [10] is selected for theoretical reasons and indeed may not reflect empirical practice.\n\n6. As a non-expert, I am quite curious if a contraction rate for deep GP has been lower bounded for general $f_0$? If so, how would the difference compare to the additive scenario?\n\nNo, we are not aware of any such results. Most proofs of lower bounds for GPs use specific properties of Gaussian measures (e.g. the form of the posterior mean) that do not extend straightforwardly (if at all) to deep GPs. Indeed, first upper bounds for contraction rates, which are generally much better understood, have only recently been obtained for deep GPs in [10]. \n \n7. How do we arrive at the expectation in line 18 of the appendix?\n\nIn the line before, $w_k\\sim^{iid} N(0,1)$ are the only random variables. We now use $E[w_k]=0$ and $E[w_k^2]=1$ together with the linearity of the expectation. This causes the terms $I$ and $III$ to disappear. ",
" Thank you for the constructive suggestions and helpful comments. In reply to your comments:\n\n1. Add an extra paragraph to elaborate on the notations and mathematical operators used throughout the paper.\n\nWe have added a notation paragraph in Section 2 (p2) explaining the main mathematical notations needed to understand the results of the paper, and provide more details when high-level mathematical objects are introduced. The proofs do require some additional notation, but we felt it best to define these locally since they are generally used in only a single place, and we wish to keep the main paper as unencumbered as possible.\n\n2. How would the range ($[0,1]$) and the distribution of design points $x$ affect the theorem results?\n\nWe believe that if the inputs are sufficiently evenly scattered on $[0,1]^d,$ the same results will hold. For instance, this should be true if the design point distribution $H$ (i.e $X_i\\sim^{iid} H$) has a density that is bounded away from zero and infinity. We have added a paragraph in the Discussion (Section 3) on this, where we also mention the related problem of heteroscedastic (non-uniform) noise level. On the other hand, if the input distribution $H$ is for example discrete, we expect that the results will change (e.g. we may have to change the loss to $L^2(H)$ to reflect this - i.e. weight the loss by the probability distribution $H$). The domain range $[0,1]^d$ can be extended to any $[a,b]^d$ without changing the rates in the lower bounds (the constants will, however, depend on $a,b$).\n\n3. How would the feature extraction methods (neural networks, projections, etc.) affect the correctness of Theorem 1? As lots of works with high-dimensional points or multi-modal data will first project the data to a low-dimensional feature space with better expressiveness. If the dimension is lower than 3, according to Theorem 1, then the contraction rate can be accepted.\n\nExcellent question! We believe that if one allows for general (arbitrarily non-linear) feature extraction methods, the optimal minimax rate should be attainable. The reason is that if one has a good (nonlinear) feature extraction scheme (e.g. the empirical basis coefficients with respect to a pre-selected basis) recovery of the full regression function $f$ can become rather straightforward (in the case of basis coefficients, one can just take the series estimator with respect to this basis). \n\nIn particular, if one can learn the nonlinearity of the underlying function (in our case the compositional part), then the underlying function might be a `nice' function of the learned features. In this case, placing a GP as a function of the learned inputs no longer deals with a compositional function, and thus can potentially attain the optimal rate. If one constrains the feature extraction, e.g. to certain projections, similar results to ours may be true, but we are unsure. We have added a paragraph on this in the Discussion (Section 3). \n\n4. This paper concentrates on the zero-mean Gaussian process with sample data generating process and the underlying function with a special case of the compositional functions, it would be better to extend it to general mean function.\n\nSince we are interested in lower bounds uniformly over a symmetric function class, centering the prior at a function $m\\neq 0$ will intuitively be detrimental to recovering the function $-m$. 
In other words, the prior mass is now further away from the target function $-m.$ Thus within our framework, a non-zero prior cannot lead to faster uniform lower bounds. \n\nWe agree that it would be interesting to work this out precisely, but it will make all our proofs significantly more technical and thus shift the focus away from the main proof ideas to more lengthy computations. For the sake of clarity, we therefore prefer to stick to centered GPs.",
" In this paper, the authors studied information-theoretic lower bounds of posterior contraction rate for Gaussian process regression with compositional assumptions. Specifically, if the true function is a generalized additive function, the posterior contraction rate of any zero-mean Gaussian process (irrespective of the choice of kernel) is strictly slower than the minimax rate. Even the posterior mean will be a suboptimal reconstruction and implies an uninformative uncertainty quantification. A sharper lower bound is also studied, showing that the performance under the Gaussian wavelet series priors suffers from the curse of dimensionality. \n\nOverall, the paper is well written and easy to follow. Various proofs and frequentist assessments are provided to measure the speed of posterior contraction around the true regression function, which is a solid work that helps people to understand the theory underlying deep Gaussian process and the curse of dimensionality. For varied research backgrounds, it is recommended to add an extra paragraph to elaborate on the notations and mathematical operators used throughout the paper. \n The following questions may be considered by the authors. \n\n1. How would the range ([0,1]) and the distribution of points x affect the theorem results? Since the points may not follow U [0,1] in real applications.\n\n2. How would the feature extraction methods (neural networks, projections, etc.) affect the correctness of Theorem 1? As lots of works with high-dimensional points or multi-modal data will first project the data to a low-dimensional feature space with better expressiveness. If the dimension is lower than 3, according to Theorem 1, then the contraction rate can be accepted.\n\n\n This paper concentrates on the zero-mean Gaussian process with sample data generating process and the underlying function with a special case of the compositional functions, it would be better to extend it to general mean function. ",
" This paper derives a novel lower bound for the GP posterior contraction rate when the true function has an additive structure. The result shows that for any GP in such scenario, the contraction rate is worse than the minimax estimation rate of deep GP. The authors further demonstrate that the contraction rate is even more suboptimal on a specific GP (e.g., Gaussian wavelet series). This is a purely theoretical work and there is no empirical study. This paper provides two original proving strategies to obtain the lower bounds of GP posterior and posterior mean contraction rates in the generalized additive function setting. The first strategy directly derives these lower bounds, whereas the second strategy reduces the regression to a one-sparse sequence model then lower bounds the minimax risk for its linear estimators. To the best of my ability, I have verified that both proofs are sound and original. Nonetheless, I acknowledge that I have taken some derivation steps for granted and my lingering questions are listed in the section below.\n\nThe paper is generally well presented and its high level ideas are quite easy to follow, although there are some small issues with the notations and technical clarity. For example:\n- The symbol $L$ is overloaded to imply both Lipschitz constant and the $L^2$ space. \n- In line 109, it seems to me that $f_i$ was overloaded to imply both the $i^{\\text{th}}$ additive component of $f_0$ and later on $f_0(x_i)$ with $x_i$ being the $i^{\\text{th}}$ observed input. I could be wrong, so I invite the authors to clarify this.\n- The subscript $i$ in $x_i$ is also used to denote both the $i^{\\text{th}}$ observed input and the $i^{\\text{th}}$ dimension of an arbitrary input.\n\nOverall, I believe the paper delivers a good theoretical contribution, and it is sufficient without empirical demonstration. However, it would be more compelling to see if the proposed theory aligns with practice, especially in settings that specifically construct the latent function $f_0$ to be of additive form, as assumed in this paper.\n - In line 113, when the minimax rate is introduced, I had to assume that this is the rate for deep GP prior based on the motivation of this work. Eq. (13) of [32] seems to suggest that the analysis is for general nonparametric regression with deep NN. Can you elaborate on the extension of this to GP with deep prior?\n- Also [32] seems to suggest that the deep architecture has to satisfy certain conditions to achieve such rate. Can you provide a brief summarization of what those conditions entail, and whether they are realistic in practice? For example, in practice we also have to train the weights of the network, and I suppose a large architecture would affect the convergence rate. \n- As a non-expert, I am quite curious if a contraction rate for deep GP has been lower bounded for general $f_0$ ? If so, how would the difference compare to the additive scenario? \n- How do we arrive at the expectation in line 18 of the appendix? The authors have not discussed negative societal impact of their work. However, since this is a purely theoretical work, I do not think there is any potential problem.",
" In the domain of frequentist guarantees of Bayesian nonparametric models, the paper exhibits a case where Gaussian processes learn slower than the minimax rate. This is of particular interest because for the same function class a recent paper showed that deep Gaussian do achieve the minimax rate. The sub-optimality of the GP is shown to be at least polynomial under very general assumptions. The function class in question is a type of generalized additive model. So together with the previous paper on deep GPs this is a formalization of the idea that deep methods do better when the \"true\" function is compositional in a certain particular sense. There is a relatively self contained exposition of the necessary background. The compositional function class considered does not easily fall into the domain of previous theoretical work lower bounding the rates of linear methods so providing a proof method is the main part of the technical novelty. Clarity: I found the exposition of the results very clearly written considering that this is a notoriously challenging area of the literature. Work like this would often appear in a statistics journal and it felt like a particular effort had been made to make the work accessible to the NeurIPS theory community. Each non-trivial statement was carefully referenced. I hope to learn from such exposition in my own work. Make though mistake though, the work is still technically quite challenging for an average non-theoretical NeurIPS reader and the proof section is very heavy going I suspect for the big majority of potential readers. So be it.\n\nOriginality: There is a very clearly defined research question for the paper and this topic is to the best of my knowledge novel. Answering the question requires highly non trivial proof methods.\n\nQuality: The expositional sections are of a high quality. I spent a significant amount of time on the paper and the proof method looks sensible to me but to carefully verify it would take more time than is available particularly given heavy reviewing loads this year. Given that similar work appears for instance in the Annals of Statistics where peer review takes significantly longer it is difficult to verify it to that level in this venue. It helps that there are two independent proof methods for Theorems 1 and 2 and that the result is not entirely unexpected.\n\nSignificance: The work is significant enough for publication in this venue in my opinion. At a broad level all work in this area is somewhat abstracted from practical reality. The work is in a sense \"doubly\" asymptotic since it requires both the reformulation in terms of a \"statistically equivalent\" SDE to be valid and then the number of data points to be sufficiently large on top of that. Of course chaining asymptotes is fine if this is your only goal but it does of course make it harder to understand the implications for numbers of data points and dimensionality encountered in practice. Similarly work in this area neglects computational constraints. After all both GPs and deep GP posteriors require approximation in practice. This is again fine as long as this is understood. More specifically to this paper the most general difference in rate (Theorems 1 and 2) is proven to be a polynomial and the level of difference is bounded as the dimension increases. It is tantalizing that the authors speculate that the more dramatic difference in rate proven under the special conditions of Theorem 3 may apply more generally. 
This obviously would increase the significance of the work but I appreciate how difficult such results are to obtain. I have no questions. As far as I can see there is no potential negative societal impact of this work unless one was against science and mathematics in general. The scientific limitations are very clearly communicated and discussed leaving ample opportunities for follow up work."
] | [
-1,
-1,
-1,
7,
7,
7
] | [
-1,
-1,
-1,
5,
4,
3
] | [
"m2fdwXc9h7o",
"l4v46ytwvtg",
"t5RY0cjPPLi",
"nips_2022_fhO6vCGuuag",
"nips_2022_fhO6vCGuuag",
"nips_2022_fhO6vCGuuag"
] |
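In each record the sequence columns are index-aligned: `review_reply_tos[i]` names the parent of comment `review_ids[i]`, either another comment id or the `paper_id` itself for top-level reviews. A minimal sketch (plain Python, using the ids from the record above) of recovering the discussion threads:

```python
from collections import defaultdict

def build_threads(record):
    """Map each parent id to the ids of the comments that reply to it."""
    children = defaultdict(list)
    for comment_id, parent_id in zip(record["review_ids"], record["review_reply_tos"]):
        children[parent_id].append(comment_id)
    return children

record = {
    "paper_id": "nips_2022_fhO6vCGuuag",
    "review_ids": ["518ec8iJz2O", "MsxoNJqWU2Q", "IzIWEARD0c3",
                   "t5RY0cjPPLi", "l4v46ytwvtg", "m2fdwXc9h7o"],
    "review_reply_tos": ["m2fdwXc9h7o", "l4v46ytwvtg", "t5RY0cjPPLi",
                         "nips_2022_fhO6vCGuuag", "nips_2022_fhO6vCGuuag",
                         "nips_2022_fhO6vCGuuag"],
}
threads = build_threads(record)
print(threads[record["paper_id"]])
# ['t5RY0cjPPLi', 'l4v46ytwvtg', 'm2fdwXc9h7o'] -- the three official reviews;
# each of the first three author comments replies to exactly one of them.
```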
nips_2022_jHIn0U9U6RO | Understanding the Eluder Dimension | We provide new insights on eluder dimension, a complexity measure that has been extensively used to bound the regret of algorithms for online bandits and reinforcement learning with function approximation. First, we study the relationship between the eluder dimension for a function class and a generalized notion of \emph{rank}, defined for any monotone ``activation'' $\sigma : \mathbb{R}\to \mathbb{R}$, which corresponds to the minimal dimension required to represent the class as a generalized linear model. It is known that when $\sigma$ has derivatives bounded away from $0$, $\sigma$-rank gives rise to an upper bound on eluder dimension for any function class; we show however that eluder dimension can be exponentially smaller than $\sigma$-rank. We also show that the condition on the derivative is necessary; namely, when $\sigma$ is the $\mathsf{relu}$ activation, the eluder dimension can be exponentially larger than $\sigma$-rank. For Boolean-valued function classes, we obtain a characterization of the eluder dimension in terms of star number and threshold dimension, quantities which are relevant in active learning and online learning respectively. | Accept | All reviewers and AC believe this paper is valuable contribution to the theoretical understanding of reinforcement learning. | train | [
"1Y73ARmCdji",
"_7SiqlpdJ07",
"W2MvIQHDJf5",
"5z9oNa2oGSV5",
"YMDxWPn6jG3",
"uDC8rPWmAO",
"uTnscUU_nxt",
"vX_lCYP21AC"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Makes sense! Thanks for your explanation.",
" Answering the reviewers questions:\n1. **Does comparing $\\sigma$-rank and eluder dimension help us understand when eluder dimension is bounded? What is the consequence of eluder being exponentially smaller than $\\sigma$-rank?** Yes, understanding the connection between $\\sigma$-rank and eluder dimension allows us to show the existence of function classes (e.g., parities) which “go beyond” generalized linear. The main consequence is that there are function classes which have eluder dimension $d$, but the best possible bound that Russo and Van Roy’s result can give is $\\exp(O(d))$. \nThe reason we carefully define the notion of $\\sigma$-rank is that it allows us to be rigorous about what we mean when we say “a function class is generalized linear”. A priori, it is not obvious that parities cannot be rewritten as a generalized linear model in $\\mathrm{poly}(d)$ dimensions. By defining the notion of $\\sigma$-rank, we can formally say that the best possible bound that Russo and Van Roy can provide is $\\exp(O(d))$. \nA larger, less immediate consequence is that the results in bandits/RL literature which prove bounds via eluder dimension apply to a richer set of function classes than generalized linear models.\n2. **Function classes that lie in RKHS.** Function classes that lie in an (infinite dimensional) RKHS do not have finite dimension, so it is not meaningful to discuss their generalized rank. The recent note [1] proves that for function classes which do lie in an RKHS, the eluder dimension is equivalent (up to log factors) to the notion of information gain. This can be viewed as an extension of the bounds for eluder dimension for linear function classes (we know here that eluder dimension and information gain are exactly $\\Theta(d \\log R/\\epsilon)$.\n\n[1] Huang, Kakade, Lee, Lei. “A Short Note on the Relationship of Information Gain and Eluder Dimension”.",
" We thank the reviewer for their comments and time. We make several important clarifications.\n\n**Regarding our Question 1.** We agree that the most important question to address is “when is eluder dimension bounded?” - this is exactly what this work seeks to answer. The reason why Question 1 is framed in terms of understanding the relationship between eluder dimension and generalized linear models is because this is a more manageable question to ask - the same way in which “when is VC dimension bounded?” is too broad compared to the question “what is the VC dimension of neural networks?”.\n\nThe function classes whose eluder dimension we could compute an *upper bound* on were linear models and generalized linear models (with strictly monotone link function). Even though many papers have been published on the eluder dimension in the past decade, none have shown examples beyond (generalized) linear models for which one could compute an *upper bound* on eluder dimension.\n\nSo, it is natural to wonder if this is truly an impossible task, to find an example of a function class which is not generalized linear for which we can upper bound the eluder dimension. Mathematically speaking, we ask if one can prove a *lower bound* on eluder dimension in terms of generalized linear rank. If this was true, then in conjunction with the upper bound of Russo and Van Roy, this would unequivocally show that *every function has bounded eluder dimension if and only if it is a generalized linear model (perhaps in disguise)*. This is the strongest possible answer one can get for the question of “when is eluder dimension bounded?”. It immediately implies that all the papers published in the past decade which bound regret in terms of eluder dimension can be rewritten as showing bounded regret assuming the function class is generalized linear.\n\nWe show that this hypothetical “strongest possible” answer is indeed a pipe dream. Namely, Theorem 6 proves that there exists a simple function class (parities) where the eluder dimension is small but the generalized linear rank is quite large. Theorem 10 provides a stronger separation by constructing function classes for which the eluder dimension (with respect to a single base function) is constant but generalized linear rank is infinite. Therefore, it is impossible to prove a lower bound on eluder dimension in terms of generalized linear rank. \n\nAnother restatement is that the set of function classes with bounded eluder dimension is strictly larger than the set of generalized linear function classes. Russo and Van Roy only show that the set of function classes with bounded eluder dimension contains the set of generalized linear function classes.\n\n*To conclude:* the answer of “no” to our Question 1, beyond the technical statement that “such a lower bound cannot hold”, provides a positive result for RL theory - it says that all these papers on eluder dimension actually apply to a *richer* set of functions classes than generalized linear models!\n\n**Regarding the connection to Russo and Van Roy when $\\sigma$ is not strictly monotone.** You are correct in saying that the Russo and Van Roy upper bound becomes vacuous for generalized linear functions if $\\sigma$ is not strictly monotone. Thus, for the relu function class the *upper bound* is infinite. Our contribution is to prove a *lower bound*: in Theorem 7, we show for the relu function class that the eluder dimension is at least exponential in the dimension. 
More generally, this shows that one cannot remove the dependence on $\\mu$ in the bound for the eluder dimension of generalized linear models.\n\nWe agree that this is not spelled out in the clearest way in the paper; our revision intends to make this discussion clearer.",
" We thank the reviewer for their comments and time. Answering the questions in the review:\n\n1. **Question on Eq (3)**. Here is a proof of Eq (3). \nLet $\\sigma:\\mathbb{R}\\to \\mathbb{R}$ be such that $\\sigma \\mathsf{-rk} (\\mathcal{F}, R) = \\mathcal{M}_\\mu^L (\\mathcal{F}, R)$. Then we define $\\tilde{\\sigma}(z) = \\sigma(z/L)$. \nWe can write every pair $(x,f)$ as $f(x) = \\sigma(\\langle w(f), \\phi(x) \\rangle) = \\tilde{\\sigma}(L \\cdot \\langle w(f), \\phi(x) \\rangle)$. \nWe can compute that this $\\tilde{\\sigma} $ satisfies $\\frac{ \\tilde{\\sigma} (z’) - \\tilde{\\sigma}(z) }{z'-z} = \\frac{ \\sigma(z’/L) - \\sigma(z/L) }{z’- z} \\in \\[\\frac{\\mu}{L}, 1 \\]$. \nThis shows one direction; the other direction can be shown with a similar argument.\n2. **On Theorem 7.** Yes, this is a typo. Thank you for catching it!\n3. **What is Ldim?** Yes, Ldim is the Littlestone dimension. We apologize for omitting the definition; we will include it in later revisions. A formal definition can be found in the book [1], Definition 21.5.\n4. **The poly() in L308-309.** We wanted to use this sentence to provide motivation for the next result. Theorem 8 proves an upper bound of $\\mathsf{Edim} \\le \\mathrm{exp} (\\max(\\mathsf{Sdim}, \\mathsf{Tdim}))$. However, this is just an inequality, and may not be tight. A priori, it could be possible using stronger techniques to prove an upper bound of $\\mathsf{Edim} \\le \\mathrm{poly}(\\mathsf{Sdim}, \\mathsf{Tdim})$, for example $\\mathsf{Edim} \\le \\mathsf{Sdim} \\cdot \\mathsf{Tdim}$ for any function class. However, Theorem 9 shows that this cannot be done, by exhibiting a specific function class where $\\mathsf{Edim} \\ge \\exp ( \\max (\\mathsf{Sdim}, \\mathsf{Tdim})).$\n\nWe will gladly include these points in our revision.\n\n[1]: Shalev-Shwartz and Ben-David. “Understanding Machine Learning: From Theory to Algorithms.”",
" We thank the reviewer for their comments and time and have no corrections or objections. \n\n**A small comment regarding new definitions for function approximation**: Indeed, we are quite excited about these other notions for function approximation, like Bellman rank and bilinear classes. They allow one to prove interesting and general results. One distinction between the eluder dimension and these other measures of complexity is that eluder dimension is purely a property of the function class (and can be studied as such), while the other measures depend heavily on the underlying MDP (since they are defined in terms of the expected Bellman error under some roll-in policy in the given MDP).",
" \nThis paper gives an in-depth investigation on the notion of eluder dimension. Eluder dimension is a widely accepted complexity measure of function classes in bandits and reinforcement learning. However, it is previously unknown whether there is a separation between (function classes with) finite eluder dimension and generalized linear models. This paper is the first to show the separation.\n\n To show this, they defined $\\Sigma$-rank, a new complexity measure that characterizes the generalized linear models. The paper first shows a chain inequality to link $\\Sigma$-rank classes for different $\\Sigma$. The paper then shows the place of Edim in the $\\Sigma$-rank chain. Next, the paper narrows down the focus from real-valued functions to binary functions. Then the paper introduces combinatorial Edim, Sdim, Tdim, and shows an important tight characterization of Edim via Sdim and Tdim, via a connection to the Ramsey theory. At last, the paper asked if the results for binary (combinatorial) Edim can be brought back to real-valued functions, and they leave it as future work.\n \nStrengths:\n\n1. This paper identifies an important problem: is Edim different from generalized lienar models? Indeed, this is an important question that are less studied in the literature. Most papers just take the notion of Edim as granted and prove results based on Edim. But when asked about examples of Edim, they are elusive or just give generalized linear models as example.\n2. The writing is good in this paper. \n\nWeakness:\n\n1. There are too many definitions of complexity measures in this paper. For better presentation, I would prefer them to be aggregated in a single section/appendix, which could help readers find and compare the definitions. It makes me hard to locate and figure out the definitions of different measures.\n2. The paper might have less impact because recently, RL theory community are turning their focus from Edim to new definitions like Bellman rank and bilinear classes. Once people are proving results based on these two new defintions, it might be less interested to show results on Edim.\n3. The proof are simple and combinatorial, which means it might not bring many new techniques or intuitions for future work.\n\n N/A N/A",
" The authors study eluder dimension, its relationship with other measures like sigma-rk, and their binary versions. Specifically, they compare eluder/star dimension with various sigma-rk's and report their findings on which one is larger in general and when one can be much bigger than the other. (Please provide a thorough assessment of the strengths and weaknesses of the paper, touching on each of the following dimensions: originality, quality, clarity and significance. You can incorporate Markdown and Latex into your review. See /faq.)\n\nOriginality: moderate.\n\nQuality: above bar\n\nClarity: above bar\n\nSignificance: moderate\n\nThe strength is the improved understand of various complexity measures. The weakness is that it is quite a theoretical work that does not have much implications about algorithms and the fundamental difficulty of learning problems. \n\n-----\nafter the rebuttal period, I feel I have a better understanding of the contribution. Indeed, showing that EDim can be a strictly better complexity measure than sigma rank is quit meaningful and what researchers have been missing. Thus, I have raised the score. I did not follow Eq (3). the condition implies that $\\mu/L \\le \\frac{ \\sigma(z')/L - \\sigma(z)/L}{z'-z} \\le 1$. The authors' claim makes sense if $\\sigma$ is identity. Otherwise, I don't see how it is true; the range R is applied to the input of $\\sigma$, but $L$ is on the gradient of $z$, so the the constant $R$ and $L$ are bounds for two different spaces.\n\nother comments\n\n* THeorem 7: should R here be $R^2$?\n* What is Ldim? (littlestone dimension? I was not able to find the definition.)\n* L308-309: why suddenly poly() here? The sentence here seems out of context.. None.",
" This paper provides new insights on the eluder dimension which is a complexity measure extensively used for bandits and reinforcement learning with function approximation. The paper studies the eluder dimension in relation to a new complexity measure proposed by the authors called $\\sigma$-rank which is the minimal dimension required to represent the class as a generalized linear model. The authors show that in some cases, the eluder dimension can be exponentially smaller than $\\sigma$-rank. In some other cases where $\\sigma$ is ReLu, the eluder dimension can be exponentially larger than $\\sigma$-rank. Further, for binary-valued function classes, the authors provide a characterization of the eluder dimension in terms of star number and threshold dimension.\n Strengths:\n- This paper addresses an important problem: understanding the insight of the eluder dimension in bandits and reinforcement learning with function approximation.\n- The generalized notion of rank seems novel. The results of the comparison of this notion and the eluder dimensions are interesting.\n- The obtained results of this paper are original.\n\nWeaknesses:\n- The organization and the writing are not very clear to follow.\n- Although the paper addresses an insight into the eluder dimension, however, the questions raised in the Introduction section toward this insight are not convinced. Question 1 seems strange to me. Whether all function classes with small eluder dimensions are essentially generalized linear models is an \"important\" question? In [37], in a generalized linear setting, Russo and Van Roy provided an upper bound on the eluder dimension. This upper bound may be small or even infinite depending on the $\\sigma$ function. Thus, we cannot require that the eluder dimension is small in generalized linear models. The answer \"No\" of Question 1 does not contrast with the result of Russo and Van Roy. I think that a more important question to answer is when the eluder dimension is bounded? \n- Obtained results are not significant enough for the understanding of the eluder dimensions in existing works. \n I have some questions for the authors:\n- Does the comparison between the $\\sigma$-rank and the eluder dimension help understanding when the eluder dimension is bounded?\n- In some cases, the eluder dimension can be exponentially smaller than $\\sigma$-rank and otherwise. The authors can explain what is the consequence of this fact?\n- When the function class lies in an RKHS, is there any equivalence between the eluder dimension and the generalized rank? Yes"
] | [
-1,
-1,
-1,
-1,
-1,
7,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"5z9oNa2oGSV5",
"W2MvIQHDJf5",
"vX_lCYP21AC",
"uTnscUU_nxt",
"uDC8rPWmAO",
"nips_2022_jHIn0U9U6RO",
"nips_2022_jHIn0U9U6RO",
"nips_2022_jHIn0U9U6RO"
] |
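The `-1` entries in `review_ratings` and `review_confidences` are sentinels for comments that carry no score: author responses, and reviewer follow-ups such as the first comment of this record. A sketch of extracting only the scored official reviews (field names as in the schema above):

```python
def reviewer_scores(record):
    """Collect (rating, confidence) pairs from scored official reviews,
    skipping author comments and -1 sentinels."""
    return [
        (rating, confidence)
        for writer, rating, confidence in zip(record["review_writers"],
                                              record["review_ratings"],
                                              record["review_confidences"])
        if writer == "official_reviewer" and rating != -1
    ]

# For the record above this yields [(7, 3), (7, 3), (5, 3)], i.e. a mean
# rating of about 6.33 for an accepted paper. Filtering on the writer alone
# is not enough: the first comment is an official_reviewer follow-up whose
# rating is the -1 sentinel.
```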
nips_2022_jcIIVkbCaHO | Pessimism for Offline Linear Contextual Bandits using $\ell_p$ Confidence Sets | We present a family $\{\widehat{\pi}_p\}_{p\ge 1}$ of pessimistic learning rules for offline learning of linear contextual bandits, relying on confidence sets with respect to different $\ell_p$ norms, where $\widehat{\pi}_2$ corresponds to Bellman-consistent pessimism (BCP), while $\widehat{\pi}_\infty$ is a novel generalization of lower confidence bound (LCB) to the linear setting. We show that the novel $\widehat{\pi}_\infty$ learning rule is, in a sense, adaptively optimal, as it achieves the minimax performance (up to log factors) against all $\ell_q$-constrained problems, and as such it strictly dominates all other predictors in the family, including $\widehat{\pi}_2$. | Accept | The reviewers are in agreement that this paper provides a minimax optimal solution to the problem of offline linear contextual bandits. This new family of learning rules beat state of the art approaches and provide a unified view on existing approaches, such as Lower Confidence Bound and Bellman-Consistent Pessimism. The theoretical results are backed by reasonable numerical simulations. Accept. | train | [
"BISWZ0opbjt",
"AGMInIajTZH",
"nMaRL4loyv",
"I2pKmfnpCgYh",
"M21UENsOQW",
"qauVJt3sU2s",
"6prQjRVcK7E",
"cG7mVIKAtzA",
"lCMhu4FgAAO",
"Qwvf2RbLiq_"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for the clarification and my question is well-addressed. I believe this is a good work and I'll thus keep my score.",
" I thank the authors for their response. They address my concern about the theoretical contribution of the work. But I still doubt its real-world applicability since real-world problems are rarely \"linearly realizable\" and might not belong to the benign class of instances analyzed in the paper. \n\nI would like to increase my evaluation to 6. ",
" We thank the reviewer for their comments and time. \n\n**Important corrections**: The main proposed learning rule has *not* been previously discussed in the literature. The main methodological contribution lies in the proposal of the $\\ell_\\infty$ learning rule in the general linear contextual bandit setting. This has not been previously proposed. The previously proposed rules for linear contextual bandits are BCP [4], PACLE [5], and PEVI [6], which we show are suboptimal and dominated by the proposed $\\ell_\\infty$ rule. \n\nIndeed, once we introduce the general framework for pessimism via confidence sets (which is a significant part of the novel contribution and what enabled the methodological advance), the relationship to prior work (LCB, BCP, PACLE, PEVI) becomes apparent, and one can see that the previously proposed LCB rule is a special case of our proposed novel rule. But LCB is only for the tabular setting, not for the general linear setting. And without our framework, seeing the more general rule in the more general linear contextual bandit setting is perhaps not so obvious (e.g., it wasn’t obvious to researchers who previously worked on offline linear contextual bandits/RL and suggested the suboptimal BCP, PACLE, and PEVI rules). By way of imperfect analogy, this is a bit like saying Exponentiated Gradient is not new because the Winnow algorithm (developed for learning a structureless finite class) can be seen as a special case of EG (when each hypothesis is encoded as an indicator, with a feature representation that encodes the hypothesis class).\n\n**Further responses:**\n1. **Comparison to prior work on offline contextual bandits.** We thank the reviewer for bringing to our attention these additional related works on offline contextual bandits, we will include them in our revision. The papers [1] and [3] study approaches which use importance reweighting to estimate the values of policies from offline data. An inherent feature of importance weighting methods is that the behavior policy which generates the offline data is either known or approximated. In contrast, our method does not require knowledge of the behavior policy, and instead relies on the principle of pessimism to do offline learning. Another salient difference between our work and the papers [1-3] is the study of statistical optimality of the proposed methods. Our work confirms that our $\\ell_\\infty$ learning rule is in one sense statistically optimal for offline linear contextual bandits, while these papers do not discuss the optimality of their methods for policy learning.\n2. **Using inf vs min.** You are correct, we should use inf, since in general, the set $\\Theta$ can be infinite.\n\n[4]: Xie, Cheng, Jiang, Mineiro, Agarwal. “Bellman-consistent Pessimism for Offline Reinforcement Learning.” \n[5]: Zanette, Wainwright, Brunskill. “Provable Benefits of Actor-Critic Methods for Offline Reinforcement Learning.\" \n[6]: Jin, Yang, Wang. “Is Pessimism Provably Efficient for Offline RL?”",
" We thank the reviewer for their comments and time. To answer the question asked by the reviewer:\n\n**The requirement for $\\Lambda$ in the lower bound.** Thanks for the interesting question. We conjecture that it is possible to improve the lower bound to hold for any $\\Lambda = \\Omega(1)$ for all $p \\in [2,\\infty)$. Due to technical challenges, we were only able to do so for $p= \\infty$. For $p=\\infty$, we use a different construction which uses sparsity of the test distribution $\\rho$, and it is unclear how to adapt it for $p < \\infty$. In Appendix D.1, we do have a more complicated lower bound construction which holds for any $\\Lambda = \\Omega(1)$ for $p \\in [2,\\infty)$, but unfortunately it incurs an undesirable stronger requirement on sample complexity $n$ in order for the lower bound to hold, so we did not include it in the main text.\n\nIn future revision, we will incorporate some discussion on this in the main text. As you point out, it does look a bit peculiar without any explanation. We can also include more details on the lower bound in the main text to provide intuition.",
" We thank the reviewer for their comments and time.\n\n**To justify the contribution for the linear contextual bandit community**: prior work on offline linear contextual bandits/RL suggested the BCP method [1], the PACLE method [2], and the PEVI method [3], which in this work we show is suboptimal, and instead our work suggests a different method that we argue strictly dominates these methods. Suggesting a method that strictly dominates the previously published methods seems to us a significant contribution.\n\nThe general framework of pessimism via confidence sets is a tool we develop in this paper. Indeed, once we present this general framework, the relationship to BCP, LCB, PACLE, and PEVI becomes immediate, it seems natural to use an $\\ell_\\infty$ confidence set also for general linear contextual bandits, and our method just pops out. We are happy this is all clear after reading the paper and consider this “obviousness-in-hindsight” a good thing. But it's important to keep in mind the unifying view we present is what enabled this, and this was not obvious, e.g., to researchers who worked on the problem before us and suggested BCP, PACLE, and PEVI. \n\n[1]: Xie, Cheng, Jiang, Mineiro, Agarwal. “Bellman-consistent Pessimism for Offline Reinforcement Learning.” \n[2]: Zanette, Wainwright, Brunskill. “Provable Benefits of Actor-Critic Methods for Offline Reinforcement Learning.\" \n[3]: Jin, Yang, Wang. “Is Pessimism Provably Efficient for Offline RL?”",
" We thank the reviewer for their comments and time. To answer the questions asked by the reviewer:\n\n1. **Extending to RL**. We believe that the learning rules can also be relevant to RL. In fact, the instantiation of the $\\ell_2$ rule (i.e., Bellman-consistent pessimism) has already been applied to RL [1,2]. We can modify the PACLE algorithm from [2] to solve for any $\\ell_p$ learning rule by suitably changing the convex program (10) from the paper [2]. However, analyzing the statistical performance in the RL setting is a new challenge, and this could indeed be an interesting direction.\n2. **Instance-dependent guarantees.** This is an excellent question, and thanks for pointing out two relevant RL papers with instance dependent guarantees. There are multiple desiderata in obtaining instance-dependence guarantees in offline RL. The first is the fine-grained dependence on the behavior policy and the optimal policy, which we have achieved in this work. Compared to the single policy concentrability coefficient, our bound is more fine-grained and problem dependent. The second is the dependence on the reward structure. It is relatively easy to develop instance-dependent bounds when the variance associated with each (s,a) pair is different (though one may need to change the confidence set using appropriate weights). However, the more interesting question is the dependence on the gap structure of rewards - this is the usual meaning of “instance-dependence” in the online setting. This question is of great interest and we leave this one to future work. \nWe will add further discussion on instance-dependent guarantees in the revised manuscript to incorporate these points.\n\n[1]: Xie, Cheng, Jiang, Mineiro, Agarwal. “Bellman-consistent Pessimism for Offline Reinforcement Learning.” \n[2]: Zanette, Wainwright, Brunskill. “Provable Benefits of Actor-Critic Methods for Offline Reinforcement Learning.\"\n",
" This paper focuses on offline learning for linear contextual bandits and provides a novel family of pessimistic learning rules that generalizes over the Bellman--consistent pessimism and lower confidence bound strategies. The statistical guarantees established here for this new family of learning rules are proven to be minimax optimal, as the authors also show a lower bound. Last is demonstrated the adaptive minimax optimality property of one of the new learning rules - the extension of the lower confidence bound strategy - with empirical experiments corroborating the theoretical findings. Strengths:\n- Presentation: the problem is well introduced and the main results are clearly presented\n- Impact: this paper provides a minimax optimal solution to the problem of offline linear contextual bandits. This new family of learning rules generalizes well-known approaches.\n- The paper is technically sound.\n- The experiments seem to nicely support the theoretical findings\n\nWeakness: No instance dependent results. It seems like the instance dependent literature for linear is growing (even for RL, see [1,2]) and it would have been to see a result of that form \n\n[1] Zanette, A., Kochenderfer, M. J., and Brunskill, E. Almost horizon-free structure-aware best policy identification with a generative model. \n[2] Wagenmaker, A., Simchowitz, M., and Jamieson, K. Beyond no regret: Instance-dependent pac reinforcement learning\n - Could this work be extended to RL?\n- The definition of the policies of interest are only explicitly made in Equation (9), while they should be explicitly defined before Theorem 1.\n- Could the confidence set in Equation (8) and Lemma 1 incorporate more instance-dependent terms in order to get tighter results?\n The theoretical limitations are adequately addressed. The authors state that the potential negative societal impacts of their work is N/A due to its theoretical nature. It might still be valuable to mention what could go wrong if the suggested algorithms were actually deployed. \n",
" This paper proposed a new confidence set estimation approach for linear contextual bandits based on the pessimistic principal. The optimality gap is derived using the dual norm techniques. The authors are able to show that the l_p confidence set attains the optimal rate when p=infinity. Furthermore, the paper showed that using p=infinity also achieves the adaptive minimax optimality. Experiments on synthetic dataset demonstrate the confidence set derived in this paper is valid and using l_infinity norm indeed outperforms the other candidates. Strengths:\n\n(1). This paper provided a suite of methods for deriving valid confidence set in the linear contextual bandit scenario. Furthermore, the paper suggested that using l_p norm with p=infinity gives the best suboptimality guarantee. The theoretical results are interesting, which I believe will facilitate future research in offline linear contextual bandits.\n\n(2). The pessimism principle is well-motivated in the beginning of the paper, and it is very clear why pessimism would work in the offline linear contextual bandit scenario through equation (7).\n\n(3). The related works are thoroughly discussed. Two important prior works, the LCB and BCP, are special instantiations of the general confidence set derived in this paper. Therefore, this paper can be viewed as an extension of prior research to the broadest setting.\n\nWeaknesses:\n\n(1). I am a bit skeptical of the contribution of this paper. In particular, the paper basically unified the prior works on LCB and BCP and provided a generic approach for confidence set estimation. The paper also suggested that when p=infinity, the confidence set estimation and the error rate attains the optimum. However, again as the paper pointed out, LCB is an instantiation of the generic approach when p=infinity, thus is already optimal. Therefore, the contribution of this paper is mostly about a general framework to unify previous methods. (1) How to justify that the paper has significant contributions to the linear contextual bandit community? Yes",
" This paper proposes a family of offline pessimistic learning algorithms for linear contextual bandit based on $\\ell_p (p\\geq 1)$ confidence sets (called $\\widehat{\\pi}_p$). Among them, the algorithm based on $\\ell_\\infty$ confidence set (called $\\widehat{\\pi}_\\infty$) has the smallest suboptimality gap. Furthermore, it also proves lower bounds for classes of linear contextual bandit problems indexed by $q\\in[1, \\infty]$ and shows that $\\widehat{\\pi}_\\infty$ is minimax optimal for all classes of problems. ### Strengths\nThe claim that $\\widehat{\\pi}_\\infty$ is adaptively minimax optimal is appreciated and considered to be highly novel. Meanwhile, this paper also provides thorough discussion and comparison between $\\widehat{\\pi}_\\infty$ and previous methods, showing non-trivial advantage of $\\widehat{\\pi}_\\infty$. Furthermore, to support its claim, this paper also proposes a novel lower bound and complexity measure for linear contextual bandit problems.\n\nThe writing of the paper is also in a good logic flow.\n\n### Weaknesses\n\nMinor issues:\n- It may be better to have a separate section for conclusion to summarize what has been discussed before.\n- It may be better to give a sketch or core idea of the hard instance construction for proving the lower bounds. - It looks peculiar that the requirement for $\\Lambda$ in lower bound becomes purely numeric when $p=\\infty$. Is there any intuition for this? Yes, the limitations are adequately discussed, especially the regime where the proposed lower bounds are applicable.",
" The paper studies batch (offline) learning in linear contextual bandits (with a realizability assumption that the expected reward of a context and an action is a linear function of their feature mapping given to the learner). \n\nThey analyze a class of pessimistic learning rules indexed by different l_p norms. For each learning rule, the learning rule is to (1) build a confidence set of the true parameter under l_p norm; (2) pessimistically select the policy that maximizes a lower confidence bound on the expected reward constructed using the confidence set. \n\nThey provide near-optimality analysis of the learning rules and show that the learning rule with l_{\\infty} norm enjoys the best performance guarantee under their analysis. \n\nThe show that each learning rule under a norm is minimax optimal under the specific norm, within a norm-constrained class of contextual bandit instances (instances that are \"easy\" to learn). \n\nThey show that the learning rule under l_{\\infty} nrom is minimax optimal under all norms, while the other learning rules are not. \n\n\n Strengths:\n\nThe paper is well-written and easy to follow. \n\nThey provide instance-dependent and minimax optimality analysis of learning rules under different norms. \n\nThe discussion about the connections to prior works is interesting. \n\nWeaknesses:\n\n1. The proposed learning rules seem to have already been discussed in many existing offline RL literature, as also mentioned by the authors. The methodological contribution seems limited. \n\n2. It would be great if the authors discuss and compare with prior works on offline learning in contextual bandit literature (e.g. [1,2,3] and their follow-ups). \n\n3. The paper lacks empirical evaluation. And I doubt its real-world applicability. In particular, the theoretical analysis of the proposed learning rule relies heavily on a realizability assumption, which might not hold in most of the real-world problems. Also, the minimax optimality analysis is within some benign class of instances, which might not hold in real-world applications. It would be great if the paper shows some empirical evidence that the proposed learning is better compared with other rules and existing works [1,2,3]. \n\n\nSome minor issues:\nwhy in eq (6) we use \\min (instead of \\inf) and in eq (7) we use \\sup? \n\n[1] Dudík, Miroslav, John Langford, and Lihong Li. \"Doubly robust policy evaluation and learning.\" ICML (2011)\n[2] Bottou, Léon, et al. \"Counterfactual Reasoning and Learning Systems: The Example of Computational Advertising.\" Journal of Machine Learning Research 14.11 (2013).\n[3] Swaminathan, Adith, and Thorsten Joachims. \"Counterfactual risk minimization: Learning from logged bandit feedback.\" International Conference on Machine Learning. PMLR, 2015.\n See weaknesses. I did not find limitations and potential negative societal impact. "
] | [
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
2,
3
] | [
"I2pKmfnpCgYh",
"nMaRL4loyv",
"Qwvf2RbLiq_",
"lCMhu4FgAAO",
"cG7mVIKAtzA",
"6prQjRVcK7E",
"nips_2022_jcIIVkbCaHO",
"nips_2022_jcIIVkbCaHO",
"nips_2022_jcIIVkbCaHO",
"nips_2022_jcIIVkbCaHO"
] |
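Because the per-paper sequences are index-aligned, it is often convenient to explode each record into one row per comment before analysis. A sketch with pandas, assuming records shaped like the rows shown here (`ds` as loaded in the first snippet):

```python
import pandas as pd

def explode_comments(record):
    """Build one DataFrame row per comment; the scalar paper_id is
    broadcast by pandas across the list-valued columns."""
    return pd.DataFrame({
        "paper_id": record["paper_id"],
        "review_id": record["review_ids"],
        "writer": record["review_writers"],
        "content": record["review_contents"],
        "rating": record["review_ratings"],
        "confidence": record["review_confidences"],
        "reply_to": record["review_reply_tos"],
    })

# flat = pd.concat((explode_comments(r) for r in ds), ignore_index=True)
```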
nips_2022_sc7bBHAmcN | Understanding and Extending Subgraph GNNs by Rethinking Their Symmetries | Subgraph GNNs are a recent class of expressive Graph Neural Networks (GNNs) which model graphs as collections of subgraphs. So far, the design space of possible Subgraph GNN architectures as well as their basic theoretical properties are still largely unexplored. In this paper, we study the most prominent form of subgraph methods, which employs node-based subgraph selection policies such as ego-networks or node marking and deletion. We address two central questions: (1) What is the upper-bound of the expressive power of these methods? and (2) What is the family of equivariant message passing layers on these sets of subgraphs?. Our first step in answering these questions is a novel symmetry analysis which shows that modelling the symmetries of node-based subgraph collections requires a significantly smaller symmetry group than the one adopted in previous works. This analysis is then used to establish a link between Subgraph GNNs and Invariant Graph Networks (IGNs). We answer the questions above by first bounding the expressive power of subgraph methods by 3-WL, and then proposing a general family of message-passing layers for subgraph methods that generalises all previous node-based Subgraph GNNs. Finally, we design a novel Subgraph GNN dubbed SUN, which theoretically unifies previous architectures while providing better empirical performance on multiple benchmarks. | Accept | This paper studies the recent hot topic in GNN, namely subgraph-based GNNs which apply GNN to each node-centered subgraph copy of the original graph instead of directly applying GNN to the full graph. These GNNs were shown to be more expressive than 1-WL but were unknown in terms of their upper bound of expressive power. This paper shows that all these subgraph-based GNNs, including Nested GNN, ID-GNN, reconstruction GNN, GNN-AK etc., can be implemented by 3-IGN which is upper bounded by 3-WL, thus giving an upper bound to subgraph-based GNNs' expressive power. The novel perspective that views subgraphs as an additional tensor dimension which is also equivariant to node permutation is very insightful, and is the key to the 3-IGN implementations. Overall, I believe this paper is of great theoretical contribution to the GNN community and opens up some new design space. | train | [
"Mx071i5o4lS",
"uC7UdHJd_3T",
"YYoaflKIENe",
"F7ymMFHJtZn",
"pLygWmGGkY",
"nF95jDan9Q7",
"9-zmAvBbu47",
"NPzrQvkmQv",
"WI3vsIIQPsI",
"sznMeQVAz5Z",
"vOBDwssPglD"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We kindly bring the attention of the Reviewers to a new manuscript revision we have just uploaded. The revision implements the additions discussed in the previous general comment and in specific responses to Reviewers.\n\nChanges are visually signalled in _blue_; they include:\n- A more thorough and detailed introduction of Invariant Graph Networks as recommended by Reviewers **F5Bg** and **qAH9** – pages 2-3;\n- A more explicit reference to the 3-IGN construction and orbits as recommended by Reviewer **qAH9** – page 3 and 5, along with the new Figure 4 at page 16 (Supplementary Materials);\n- An analysis of the computational complexity of Subgraph GNNs with reference to 3-IGNs, as recommended by Reviewers **SuEM** and **F5Bg** – page 15 (Supplementary Materials);\n- An explicit mention and consideration of relevant directions for future developments of the work: the study of the gap between Subgraph GNNs and 3-WL and of the intrinsic expressive power of node-based policies – page 40 (Supplementary Materials);\n- An ablation study on the terms in the SUN layer equations, as recommended by Reviewer **qAH9** – pages 43 and 45 (Supplementary Materials);\n- Minor rephrasing of periods in the main corpus of the manuscript to accommodate the changes whilst respecting the given space limitations.\n",
" We are deeply grateful to all reviewers for their feedback and constructive comments, while glad to notice that all Reviewers have positively welcomed our work.\n\nThe Reviewers have recognised the validity and importance of our theoretical contribution, considered _“solid”_ (**SuEM**), _“novel”_ (**F5Bg**), _“useful”_ (**qAH9**), and able to provide a _deeper understanding_ of the novel class of Subgraph GNN models (**GpAd**, **SuEM**). The reviewers have stressed the _significance_ of the research topic of Subgraph GNNs (**GpAd**, **F5Bg**), finding that our proposed _“unified analysis of subgraph GNNs is very valuable”_ (**qAH9**). As an important part of our contribution we have proposed a design framework for new Subgraph GNNs (ReIGN); this aspect has been positively appreciated as well. For example, Reviewer **qAH9** has reported this framework _“offers interesting avenues to extend subgraph GNNs”_ while featuring a very good _coverage of existing models_. Last, we are delighted to notice the Reviewers have found our presentation _“clear and easy to follow”_ (**GpAd**) and the paper to be _“well-written”_ (**F5Bg**). \n\nIn the next revision of our manuscript we will take into consideration the actionable feedback provided by the Reviewers to enhance the readability and overall quality of the manuscript. As suggested by Reviewers **F5Bg** and **qAH9**, we will make sure to devote a paragraph to better introduce Invariant Graph Networks, the structure of their linear equivariant layers and of the objects they process. Reviewers **SuEM** and **F5Bg** also pointed out how discussing the computational complexity of IGNs and Subgraph GNNs would be an important addition to the paper. We will add this analysis in the new revision of the manuscript, and refer to the relevant bibliographic sources which have previously studied this aspect.\n\nHere below we provide responses to each Reviewer in a way to address their comments in more detail and answer their specific questions.\n",
" We are delighted to see the Reviewer recognised the value in our theoretical contribution as well as in the framework we proposed for the design of novel Subgraph GNN layers. At the same time, the Reviewer raised important points we address in the following.\n\n_**“The experimental section is rather limited, as the paper makes no ablation analyses” / “SUN could have been better analyzed in terms of the limitations stemming from its model choices”**_\n\nWe agree with the Reviewer that a principled ablation analysis could be beneficial in understanding the impact of each term in the update equations of the SUN layer: we have proceeded by performing the study we detail in the following.\n\nWe considered the ZINC molecular dataset, using GIN as base graph encoder, and we made sequential changes to the SUN layer equations (Equations 5 and 6) until recovering an architecture similar to NGNN, DS. We performed hyperparameter tuning for every change, while maintaining the 500K parameter budget. The table below reports the test performance for the EGO policy.\n\nAs it can be seen, each ablation generally produces some performance degradation, with the removal of $\\sum_{j} x^{k,(t)}_{j}$ having no significant impact.\n\n| Method | ZINC (MAE $\\downarrow$) |\n|----------------------------------------------------------------------|:----------------------------------:|\n| SUN | 0.083 ± 0.003 |\n| w/o $x_{i}^{i,(t)}, x_{k}^{k,(t)}$ | 0.089 ± 0.004 |\n| $\\theta_1 = \\theta_2$ | 0.093 ± 0.003 |\n| w/o $\\sum_{j} x^{k,(t)}_{j}$ | 0.093 ± 0.004 |\n| w/o $\\sum_{h} x_{i}^{h,(t)}, \\sum_{j \\sim i} \\sum_{h} x_{j}^{h,(t)}$ | 0.111 ± 0.005 |\n\n\nThe next table reports the test results obtained for the EGO+ policy. Interestingly, although root nodes are explicitly marked, the architecture seems to still benefit from not sharing parameters between root and non-root updates: imposing the weight sharing $\\theta_1 = \\theta_2$, deteriorates the overall performance, which gets similar to the one obtained for the EGO policy. In this case, term $\\sum_{j} x^{k,(t)}_{j}$ is even proved to be detrimental when the other changes are made.\n\n| Method | ZINC (MAE $\\downarrow$) |\n|----------------------------------------------------------------------|:----------------------------------:|\n| SUN | 0.084 ± 0.002 |\n| w/o $x_{i}^{i,(t)}, x_{k}^{k,(t)}$ | 0.089 ± 0.002 |\n| $\\theta_1 = \\theta_2$ | 0.093 ± 0.004 |\n| w/o $\\sum_{j} x^{k,(t)}_{j}$ | 0.090 ± 0.004 |\n| w/o $\\sum_{h} x_{i}^{h,(t)}, \\sum_{j \\sim i} \\sum_{h} x_{j}^{h,(t)}$ | 0.101 ± 0.007 |\n\nOverall, this ablation analysis indicates that, in the SUN layer, most of the terms concur to the strong empirical performance of the architecture, including the choice of not sharing parameters between root and non-root updates. We will make sure to include this analysis in the next paper revision.\n\n_**“I strongly recommend a slightly more detailed coverage of the 3-IGN construction and of orbits”**_\n\nWe agree with the recommendation of the Reviewer. In a way compatible with space limitations, in the next revision of our manuscript we will make our best to introduce more thoroughly those aspects underpinning our theoretical results and proofs. We will try to better introduce 3-IGNs by describing their equivariant layers, how orbits partition the cubed tensor they are applied onto and their semantics when the tensor is interpreted as a node-based bag of subgraphs. 
Finally, we will properly refer readers to [Morris et al., 2021], which includes an in-depth review of Invariant Graph Networks.\n\n_**“I strongly suggest you revisit your source file [...] clearly is to the disadvantage of the authors”**_\n\nAfter a careful check we found out that, likely due to a human error, we imported an unwanted package in our latex source code. As the Reviewer hypothesised, because of this, the current font is actually slightly more space-greedy, and fixing this problem will indeed free up some useful space which we will employ to address the comment above. Thanks!\n\n_**References**_\n\n[Morris et al., 2021] “Weisfeiler and Leman go Machine Learning: The Story so far”",
" _**“How large is the expressivity gap between IGN(3) and ReIGN(2)?”**_\n\nThis is a very interesting aspect and we thank the reviewer for bringing it up. On one hand, we do not have a definite answer to this precise question; on the other hand, we believe it would be an important direction for future work to understand whether ReIGN(2), or any of the node-based Subgraph GNNs discussed in the paper, are already 3-WL expressive. Given that it intrinsically operates on a second-order object, it is our intuition that ReIGN(2) may not be able to _implement_ 3-IGNs. However, this does not necessarily imply the former model is less expressive than the latter when considering graph separation: ReIGN(2) may still be able to distinguish between the same pairs of graphs distinguished by 3-IGNs, hence attaining 3-WL expressive power. Either way, we believe this aspect would require a more focussed effort we are eager to make in a follow-up work.\n\n\n_**“Time complexity is not discussed” / “Time complexity of IGN(3)?”**_\n\nWe will discuss these aspects more thoroughly in the next version of our manuscript. The complexity of Subgraph GNNs has been studied in previous works [Bevilacqua et al., 2022; Zhao et al. 2022; Zhang et al., 2021]. For generic node-based subgraph selection policies it amounts to $T(n) = O(n^2 d)$, where $n$ is the number of nodes of an input graph, and $d$ is the maximum node degree. For subgraphs significantly smaller than the original graph (as it may be the case for shallow egonets) it can better be estimated with $T(n) = O(n c d)$ with $c$ the maximum subgraph size.\n\nAs reported in [Bevilacqua et al., 2022], 3-IGNs have a time complexity which is cubic in the number of nodes in the input graph. Essentially, their equivariant layers update each element in a third-order tensor by means of shared pooling-broadcasting operations whose complexity is upper bounded by $O(n^3)$. We will stress this aspect in our next manuscript revision.\n\n_**References**_\n\n[Morris et al., 2021] “Weisfeiler and Leman go Machine Learning: The Story so far”\n\n[Bevilacqua et al., 2022] “Equivariant Subgraph Aggregation Networks”\n\n[Papp and Wattenhofer., 2022] “A Theoretical Comparison of Graph Neural Network Extensions”\n\n[Zhang et al., 2021] “Nested Graph Neural Networks”\n\n[Zhao et al., 2022] “From Stars to Subgraphs: Uplifting Any GNN with Local Structure Awareness”",
" We gladly notice that, in their feedback, the Reviewer highlighted the significance of the research topic and the novelty of our approach. They also made some relevant comments and raised a few questions. We address these below.\n\n_**“The discussion is heavily based on IGN. However, this paper does not introduce it.”**_\n\nWe thank the reviewer for bringing this point to our attention. We have already expanded on these models in Appendix B, but we agree that a more thorough introduction of Invariant Graph Networks in the main paper would improve the quality of our manuscript.\n\nIn the next revision of our manuscript we will make all possible efforts to better describe these models in a way that is compatible with space limitations, giving more emphasis to those aspects with a pivotal role in our results and their proofs, i.e. the structure of IGN equivariant layers and of the tensorial object they process. Additionally, we will make sure to properly refer readers to the recent, comprehensive review of Invariant Graph Networks included in [Morris et al., 2021].\n\n\n_**“The design space of subgraph GNN is based on extended IGN(2) and has little connection to the theoretical analysis based on IGN(3).”**_\n\nIn fact, it is exactly our theoretical analysis based on 3-IGNs that sparked the intuition on employing an extended 2-IGN model for a reduced, yet expressive, design space.\n\nIn general, 3-IGN layers update all elements in their input third-order tensor. When interpreting these tensors as node-based bags of subgraphs, this corresponds to updating the representations of nodes, edges as well as non-edge node pairs across subgraphs. Subgraph GNNs, on the other hand, typically update only node representations. In the aforementioned third-order tensor, these correspond to only those elements in its main diagonal plane, and it was our intuition that a sensible approach to reduce the layer design space would be to restrict to operations updating these entries only (captured by those orbits which in Appendix B we refer to as $o_{iii}, o_{ijj}$). At the same time, we importantly noticed that this diagonal plane corresponded to a second-order object with symmetries described by the diagonal action of the group $S_n$ over $\\mathbb{R}^{n^2}$, the same to which 2-IGNs are equivariant to. Given this context, we deemed it natural to consider 2-IGNs as the main computational framework in this reduced design space, to be further extended to support sparse message passing operations.\n\nEquations 2 and 3 describe the action of the symmetry groups separately over node connectivity ($\\mathcal{A}$) and node representations ($\\mathcal{X}$), even if, in fact, the two are both embedded in the same third-order tensor 3-IGNs operate on, as described above. This presentation choice has been made as it is customary in the Graph Neural Network community to distinguish these two entities. However, within the scope of the observation made by the Reviewer, we appreciate it may be at the cost of being somewhat deceptive. We will improve this presentation aspect in the next revision of our manuscript, and include a figure to better support the comprehension of the rationale described above. \n\n\n_**“Discussing how subgraph selection rules affect expressivity will also be very interesting.”**_\n\nThe Reviewer is raising a very interesting point. 
However, we believe that properly enquiring into this aspect falls outside the scope of the present work.\n\nNonetheless, it may be interesting to report that some recent works have already marked some initial steps in this direction, that is, in trying to characterise the impact of subgraph selection policies on the expressive power of a Subgraph GNN. For example, in [Bevilacqua et al., 2022], the authors have shown how edge-deletion may lead to superior expressive power compared to node-deletion or ego-network policies: contrary to the latter ones, edge-deletion makes it possible to disambiguate pairs of Strongly Regular graphs in the same family. Notably, edge-deletion is not a node-based policy since subgraphs are not in a bijection with nodes in the original graph. Lastly, we note that the architecture proposed in [Papp and Wattenhofer, 2022] operates by node marking, and the authors showed this approach is strictly stronger than the popular node-deletion policy.\n",
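A small numpy illustration of the rebuttal's reading of the third-order tensor and its main diagonal plane may help; this is our sketch of the interpretation described above, not the paper's exact construction:

```python
import numpy as np

n, d = 5, 3
T = np.zeros((n, n, n, d))   # the feature-augmented cube a 3-IGN processes
# Node-based bag reading of the cube: slice T[k] hosts subgraph k, with edge
# entries off the diagonal and the representation of node i in subgraph k
# stored at T[k, i, i].
idx = np.arange(n)
node_plane = T[:, idx, idx]  # (n, n, d): the diagonal plane of node entries
# Subgraph GNNs typically update only node_plane, a second-order object on
# which S_n acts diagonally over both axes -- the symmetry of 2-IGN inputs.
```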
" We are thankful to the Reviewer for their constructive comments. We are pleased to notice they appreciated the solidity of our theoretical contribution as it offers a deeper understanding of Subgraph GNN models. We reply to more specific comments and questions here below.\n\n_**“Scalability or computation/storage cost is not discussed” / “How is the complexity of subgraph GNN?”**_\n\nWe thank the reviewer for bringing this point to our attention. We acknowledge the fact that discussing the computational complexity of Subgraph GNNs would improve the quality of our manuscript and we will make sure to include this aspect in the next revision of our paper.\n\nThe complexity of Subgraph GNNs has been discussed in previous papers, see e.g. [Bevilacqua et al., 2022; Zhang et al., 2021; Zhao et al., 2022]. In [Bevilacqua et al., 2022], the authors analyse the space and time complexity of a model performing traditional message passing on each of the subgraphs obtained by a generic subgraph selection policy. Let $n, d$ refer to, respectively, the number of nodes and maximum node degree of an input graph. If $b$ is the size of the subgraph bag, the asymptotic time complexity amounts to $O(b n d)$, while the memory complexity to $O(b (n + n d))$. For a node-based selection policy, these become, respectively, $O(n^2 d)$ and $O(n (n + n d))$.\n\nOther than local message passing operations, more sophisticated Subgraph GNNs may include “global” pooling terms in their layer equation: see, e.g. the “subgraph” and “context” econdings in GNN-AK+ [Zhao et al., 2022] or the aggregation over node representations across subgraphs that is operated by DSS-GNN [Bevilacqua et al., 2022]. In principle, these operations require a squared asymptotic computational complexity ($O(n^2)$). However, these terms are shared in the update equations of nodes / subgraphs: in practice, it is only sufficient to perform the computation once. As $T(n) = O(n^2 d + n^2)$ implies $T(n) = O(n^2 d)$, these Subgraph GNNs retain the same asymptotic complexity described above.\n\nSUN layers involve the same “local” message passing and “global” pooling operations. The above considerations are, thus, directly applicable yielding the same asymptotic bounds.\n\nIt is worth noting that these bounds can be tightened in the case of ego-network policies. Let $c$ be the maximum ego-network size. The time complexity of a Subgraph GNN equipped with an ego-network policy becomes: $O(n c d)$. As observed in [Zhang et al., 2021], when ego-networks are of limited depth, the size of the subgraphs may be significantly smaller than that of the input graph; in other words $c \\ll n$, reducing the overall message passing complexity.\n\n_**“The empirical improvement is not impressive”**_\n\nWe respectfully believe that the performance attained by SUN on both synthetic and real-world benchmarks are, in fact, particularly solid. Generally, SUN tends to outperform all previous Subgraph GNNs, and on the ZINC and ogbg-molhiv datasets it approaches or outruns architectures which, contrary to SUN, explicitly model domain-relevant graph substructures (such as rings). This is extremely relevant for example on ZINC: here targets (penalised constrained solubility) linearly depend, amongst other terms, on the number of cycles whose length is at least 6 (see https://arxiv.org/pdf/2003.00982.pdf). 
SUN is a domain agnostic model, and we believe it is remarkable that, as such, it is able to attain such performance on these competitive and well-studied molecular benchmarks, where improvements are typically marginal.\n\n_**References**_\n\n[Bevilacqua et al., 2022] “Equivariant Subgraph Aggregation Networks”\n\n[Zhang et al., 2021] “Nested Graph Neural Networks”\n\n[Zhao et al., 2022] “From Stars to Subgraphs: Uplifting Any GNN with Local Structure Awareness”",
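As a minimal illustration of the tightened $O(ncd)$ bound for shallow ego-networks, the following networkx sketch (the graph family and radius are arbitrary choices of ours) shows that the maximum subgraph size $c$ can be far smaller than $n$:

```python
import networkx as nx

def ego_bag(G, radius=2):
    # Ego-network policy: one rooted subgraph per node. For shallow radii
    # the copies have at most c nodes, so message passing over the whole
    # bag costs O(n * c * d) rather than O(n^2 * d).
    return [nx.ego_graph(G, v, radius=radius) for v in G.nodes]

G = nx.random_regular_graph(3, 100, seed=0)   # 3-regular graph on 100 nodes
bag = ego_bag(G)
print(max(len(H) for H in bag), "<<", G.number_of_nodes())  # e.g. 10 << 100
```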
" We thank the Reviewer for their feedback. We are glad they found the paper well-presented, while appreciating the provided theoretical and experimental analyses. We proceed by answering the questions raised by the Reviewer in the following.\n\n\n_**“I think the experiment table 1 is missing an entry”**_\n\nWe believe the Reviewer refers to the triple question mark symbol (“???”) reported in place of the standard deviation for the GNN-AK+ model [Zhao et al., 2022]. As explained in Appendix G.2.2, the performance of this method has been reported directly from the authors’ rebuttal comment on Open Review, as the one compliant with the 500k parameter budget (see https://openreview.net/forum?id=Mspk_WYKoEH¬eId=2oeomvjT4eg). As the comment does not report the standard deviation, we used the question mark symbol (“???”). In our next revision we will include this note directly in the main corpus of the manuscript.\n\n\n_**“Compared with traditional GNNs that achieves better performance in benchmark datasets, what's the potential reason? I understand it is not a disadvantage that SUN can not beat GIN/CIN on existing benchmarks?”**_\n\nWe would like to bring to the attention of the Reviewer that SUN demonstrated to significantly outperform the “traditional” GIN model on the ZINC and ogbg-molhiv molecular benchmarks as well as the synthetic subgraph counting ones (see Tables 1 and 2).\n\nGSN [Bouritsas et al., 2022] and CIN [Bodnar et al., 2021] are the only provably expressive, non-traditional MPNN models with clearly superior performance on the ogbg-molhiv benchmark. On ZINC, SUN is outperformed by CIN, but, interestingly, not by GSN. Yet, in these cases, SUN is the best amongst all other baseline models. We believe these are particularly solid results, especially when put into context. While SUN is a domain agnostic model, CIN and GSN _explicitly model ring substructures_, which are known to play a pivotal role in molecular modelling. Because of this reason, we believe these results are particularly promising.\n\n\n_**“Do you require pre-defined motifs in node-based selection policies?”**_\n\nNo, we do not. While being provably expressive, Subgraph GNNs can attain empirically competitive performance by employing generic, domain agnostic policies such as ego-networks or node-marking. This flexibility represents one of the main advantages of Subgraph GNNs in general and SUN in particular.\n\n_**References**_\n\n[Zhao et al., 2022] “From Stars to Subgraphs: Uplifting Any GNN with Local Structure Awareness”\n\n[Bouritsas et al., 2022] “Improving Graph Neural Network Expressivity with Subgraph Isomorphism Counting”\n\n[Bodnar et al., 2021] “Weisfeiler and Lehman Go Cellular: CW Networks”\n",
" This paper focus on understanding the expressiveness of subgraph graph neural networks. Under its proposed node-based subgraph selection policy, the author demonstrates the representation power as strong as 3-WL test via bridging it with 3-IGNs. In the mean time, the algorithm is realized as Re-IGN and reports superior performance than other existing subgraph GNNs. **Strengths**\n1. The presentation is clear and easy to follow.\n2. Subgraph GNN is a uprising research topic in graph neural networks, theoretical understanding and its symmetries to existing invariant or equivariant GNNs help explain the design choices and applications.\n3. The extensive experiments in main paper and appendix not only shows its superior performance in family of subgraph GNNs but also on par performance with established traditional graph neural networks.\n\n\n**Weaknesses**\nNot I can think of. I think the experiment table 1 is missing an entry. 1. Compared with traditional GNNs that achieves better performance in benchmark datasets, what's the potential reason? I understand it is not a disadvantage that SUN can not beat GIN/CIN on existing benchmarks? Some insights that why SUN with 1 MLP operators cannot beat existing methods with geometric group symmetries, might be helpful for the community. \n\n2. Do you require pre-defined motifs in node-based selection policies? N.A.",
" This paper extends and analyzes the class of Subgraph GNNs. Importantly, the authors proved that the expressive power of these Subgraph GNNs is bounded by 3-WL. Then, a new family of layers for the class of Subgraph GNNs is proposed, with better generalization abilities. Overall, this is a solid theory paper, and provides a deeper understanding for subgraph GNNs. Strength: 1. Provide a deeper understanding of subgraph GNN\n2. Design a novel Subgraph GNN, which unifies previous architectures and provides better empirical performance.\nWeakness: 1. Scalability or computation/storage cost is not discussed. \n2. The empirical improvement is not impressive How is the complexity of subgraph GNN? The authors have adequately addressed the limitations and potential negative societal impact of their work.",
" This work provides a theoretical framework for subgraph GNNs. It first points out that the theoretical analysis of subgraph GNN with node-based subgraph policy can be simplified. Then, with the existing invariant graph networks (IGN) model, the author embeds node-based subgraph GNN into the 3-IGN and thus bounds the expressive power of node-based subgraph GNNs by 3-WL. Besides expressivity analysis, this work also provides a detailed illustration of the design space of subgraph GNN and a new subgraph GNN model, SUN. SUN exhibits good generalization ability in experiments. Strength:\n1. The paper is clear and well-written.\n2. The connection between subgraph GNN and 3-WL is novel. \n3. The topic is significant as using subgraph is an important method to boost expressivity.\n\nWeakness:\n1. The discussion is heavily based on IGN. However, this paper does not introduce it.\n2. The design space of subgraph GNN is based on extended IGN(2) and has little connection to the theoretical analysis based on IGN(3).\n3. Time complexity is not discussed.\n4. Discussing how subgraph selection rules affect expressivity will also be very interesting. 1. How large is the expressivity gap between IGN(3) and ReIGN(2)?\n\n2. Time complexity of IGN(3)? The societal impact is adequately addressed.",
" This paper proposes a unified analysis of subgraph GNNs based on node selection (e.g., ego-networks, node deletion, etc), and shows that these GNNs map directly to 3-IGNs (invariant graph networks) by representing their different components (subgraph selection, layers, pooling, MLP) as 3-IGN layers. Based on this correspondence, the paper then shows that any subgraph GNN based on node selection (where every subgraph is computed through a bijection over the selected node and the original input graph) can be implemented through a 3-IGN. This result is shown first by proving that all known graph selection policies can be emulated by a 3-IGN (Lemma 4) and then showing how the following components of subgraph GNNs can also be captured by this same model (Lemma 5). Based on this result, the paper proves that all subgraph GNNs based on node selection have expressive power upper-bounded by 3-WL, the upper bound for 3-IGNs. \n\nBuilding on this insight, the paper then considers potentially novel designs for invariant/equivariant operations over nodes and subgraphs, and draws on the potential operations of 3-IGN and 2-IGN. In particular, it looks for equivariant functions with an at-most quadratic memory footprint (like 2-IGN) and then extends 2-IGNs with local node neighborhood aggregation (as standard in MPNNs) so as to naturally emulate subgraph GNNs. This extension, called ReIGN, is sufficient to capture all known subgraph GNNs. Finally, the paper proposes subgraph union networks (SUNs), which build on the ReIGN framework, and evaluate this model empirically on subgraph counting benchmarks, demonstrating strong performance. Strengths: \n- The unified analysis of subgraph GNNs is very valuable, and the upper-bound is quite useful to understand the limitations of current models. I have briefly checked the 3-IGN construction in the appendix, and all proofs appear sound.\n- The proposed ReIGN framework offers interesting avenues to extend subgraph GNNs, and the coverage of existing models is very good.\n\nWeaknesses: \n- The experimental section is rather limited, as the paper makes no ablation analyses of the SUN model, nor does it conduct case studies to further validate its hypotheses. I believe an interesting question is how SUN performance changes relative to the different components in its equations (i.e., how much gain comes from, e.g., using different update functions v_theta1, v_theta2, using all the different components fed into the update equation).\n- The intuition provided in the paper is not sufficient to fully understand the result. In particular, I had to consult the appendix on multiple occasions to understand orbits and the constructions for the proof. Therefore, I strongly recommend a slightly more detailed coverage of the 3-IGN construction and of orbits to facilitate the explanation\n- It seems that the font used in the submission differs from what is expected from the NeurIPS template: To demonstrate, the \"Do not distribute\" footer stretches into a second line, which isn't the case in the normal template. This could well be a bug, and clearly is to the disadvantage of the authors (i.e., the font is less generous in terms of space than it should be). Hence, I strongly suggest you revisit your source file. The added space could help you more clearly explain your proofs as suggested above. None at the moment Limitations: The model properties are thoroughly discussed in this paper, and the corresponding limitations of ReIGN are clear. 
However, SUN could have been better analyzed in terms of the limitations stemming from its model choices (see the ''Weaknesses'' section for more details).\n\nSocietal Impact: Not applicable"
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
4,
4
] | [
"nips_2022_sc7bBHAmcN",
"nips_2022_sc7bBHAmcN",
"vOBDwssPglD",
"pLygWmGGkY",
"sznMeQVAz5Z",
"WI3vsIIQPsI",
"NPzrQvkmQv",
"nips_2022_sc7bBHAmcN",
"nips_2022_sc7bBHAmcN",
"nips_2022_sc7bBHAmcN",
"nips_2022_sc7bBHAmcN"
] |
nips_2022_e3qH65r_eZS | Semi-supervised Semantic Segmentation with Prototype-based Consistency Regularization | Semi-supervised semantic segmentation requires the model to effectively propagate the label information from limited annotated images to unlabeled ones. A challenge for such a per-pixel prediction task is the large intra-class variation, i.e., regions belonging to the same class may exhibit a very different appearance even in the same picture. This diversity will make label propagation from pixel to pixel hard. To address this problem, we propose a novel approach to regularize the distribution of within-class features to ease label propagation difficulty. Specifically, our approach encourages the consistency between the prediction from a linear predictor and the output from a prototype-based predictor, which implicitly encourages features from the same pseudo-class to be close to at least one within-class prototype while staying far from the other between-class prototypes. By further incorporating CutMix operations and a carefully-designed prototype maintenance strategy, we create a semi-supervised semantic segmentation algorithm that demonstrates superior performance over the state-of-the-art methods in extensive experimental evaluation on both Pascal VOC and Cityscapes benchmarks. | Accept | This paper proposes a teacher-student scheme for semi-supervised semantic segmentation. A consistency regularization is set up between a prototypical classifier and a linear classifier, and different augmentation degrees (weak vs. strong) are applied to the teacher and student networks. On the positive side, the reviewers have found the ideas in this paper simple and strong in practice, and they have indicated that the proposed setting is interesting. While the novelty of this paper may seem incremental, since consistency regularization in general is heavily explored in semi-supervised training, the proposed setting is new for the semantic segmentation problem. One of the main criticisms of this submission is that it consists of many moving parts that are not well motivated, and that a description of how they are orchestrated during training is missing from the original submission. After careful discussion, I believe that the merits of this submission outweigh the issues, and I am happy to recommend this paper for acceptance.
Last but not least, I strongly recommend the authors bring the algorithms to the main paper (if possible), provide additional implementation details, and make their code publicly available. | test | [
"O_dJihF5AF-",
"GZo_FzF6ZNa",
"QyyJDE5S613",
"NpQz5rlNWvq",
"TGM5e45yytT",
"WnSiDnsCW3R",
"o8632XKKleq",
"xZkgjt-8iDL",
"DrZWi_TXtlQ",
"H6gEh6mToeB",
"CtJgMj2aW2-",
"2jf6M6YbL0O",
"lxX42O3_RvD",
"KXM6dSoNRTq",
"dK-Td-AfT4-",
"uCkPpy5gtjd"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I have read the author responses to questions raised by other reviewers, and decided to further increase the rating. I am highly impressed by the simplicity of this approach (à la x-Match style works in SSL), and very surprised that this simple method can achieve the results it does. This line of thought deserves deeper scrutiny from the community, and hence I think this should be accepted.",
" We sincerely thank Reviewer sBQB for your prompt response.",
" Dear Reviewer sZ7j,\n\nThanks again for your insightful suggestions and comments. As the deadline for reviewer-author discussion is approaching. We are glad to provide any additional clarifications that you may need.\n\nWe have carefully studied your comments and added additional clarifications and experiments in our previous responses to address your concerns. We genuinely hope you could kindly check our response.\n\nWe hope that our previous responses have convinced you the merits of our work. Please do not hesitate to contact us if there are other clarifications or experiments we can offer.\n\nThank you for your time again.\n\nBest wishes,\n\nAuthors",
" Dear Reviewer LwA4,\n\nThanks again for your insightful suggestions and comments. As the deadline for reviewer-author discussion is approaching. We are glad to provide any additional clarifications that you may need.\n\nWe have carefully studied your comments and added additional clarifications and experiments in our previous responses to address your concerns. We genuinely hope you could kindly check our response.\n\nWe hope that our previous responses have convinced you the merits of our work. Please do not hesitate to contact us if there are other clarifications or experiments we can offer.\n\nThank you for your time again.\n\nBest wishes,\n\nAuthors",
" Dear Reviewer P8JB,\n\nThanks again for your insightful suggestions and comments. As the deadline for reviewer-author discussion is approaching, we are glad to provide any additional clarifications that you may need.\n\nWe have carefully studied your comments and added additional clarifications and analysises in our previous responses to address your concerns. We genuinely hope you could kindly check our response.\n\nWe hope that our previous responses have convinced you the merits of our work. Please do not hesitate to contact us if there are other clarifications or experiments we can offer.\n\nThank you for your time again.\n\nBest wishes,\n\nAuthors",
" I thank the authors for responding to each of my queries. Most of my questions have been addressed, and I am updating my rating based on the responses.",
" We thank the reviewers for providing valuable and thoughtful comments on our paper. Based on the reviews, our paper has been mainly revised from the following perspective views:\n\n- **Prototype visualization.** We revise the t-SNE figure of our method through visualization prototypes in the same space of pixel representations.\n- **Citation details.** We add the detail publication information of the references cited in our paper.\n- **Algorithm tables.** We add two algorithm tables for the prototype initialization and global view of our approach respectively for better undertanding.\n- **Quantitative metric for intra-/inter-class discrimination.** We borrow the principle of linear discriminant analysis(LDA) and calculate the intra-/inter-class variance of the feature representations for each comparing methods. \n- **Ablation studies.** We add two ablation studies to further inspect the effectiveness of our approach, i.e., various strong data augmentations and the confidence threshold.\n\nDue to the space constraint of the main paper, we add the first two revisions to the main paper and the latter three revisions to the supplementary material. \n\nWe hope that our responses have fully addressed all of the reviewers' concerns and remain committed to clarifying any further questions that may arise during the discussion period.",
" > Question3: How is the teacher model initialized and updated?\n\nThe initialization and updating of the teacher model in our work is identical to the ways used in the popular semi-supervised learning method Mean-Teacher approach[6]. Specifically, at the beginning of model training, parameters of the teacher model are initialized with parameters of the student model. During the model training, parameters of the teacher model are the exponential moving average of the parameters of the up-to-date student model.\n\n[6] Antti Tarvainen, Harri Valpola. ''Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results''. NeurIPS2017\n\n> Question4: From fig2, any reason why 3-4 prototypes should work better instead of just one? Wouldn't the intra-class separation be least when all features are clustered around one center rather than distributed around 3-4 centers?\n\nAs the description at Line28-31 of our paper, per-pixel dense prediction task, e.g., the semi-supervised semantic segmentation task, may suffer from the large intra-class variation problem. We choose to use multiple prototypes per class to capture this intra-class variation rather than forcefully eliminate this variation by only using a single prototype. In the literature, it has been observed that eliminating intra-class variation at the training stage may lead to a poorly generalized prediction model, which is known as neural collapse[7].\n\n[7] Papyan, V., Han, X. Y., and Donoho, D. L. ''Prevalence of neural collapse during the terminal phase of deep learning training''. Proceedings of the National Academy of Sciences, 117(40):24652–24663, 2020.\n\n> Question5: Are the prototypes computed in the feature space? If so, I believe that the feature space for segmentation models is of lesser resolution - how do you get pixel level labels at this reduced resolution to compute class wise prototypes?\n\nYes and Line239-240 presents how prototypes are generated from, i.e., the prototypes are computed based on the feature representations before feeding into the classifier module of DeepLabv3+. The resolution of feature representation is a quarter of the resolution of the input image. In order to assign pixel-level labels to the feature representations, we conduct an interpolation on the feature representation to transform to the input image size.\n\n> Question6: It would also be interesting to visualize the prototypes themselves along with the features in the same space, and know what is actually being learnt by the prototypes and the classifier. I am more curious to know how they are different. Assume a case when all prototypes belonging to a class match the classifier weight vector for that class. All the conditions would be satisfied, but the consistency criterion would be trivial in that case.\n\nThanks for your suggestions and we have updated Figure2 ( c ) of our manuscript. As the Figure2 ( c ) shown, the learned micro-prototypes spread among the distribution of each semantic class which demonstrates that these micro-prototypes have already captured diversified representations for the same semantic class due to the large intra-class variation caused by the appearance changing.\nRegarding the trivial case proposed by the Reviewer sBQB, the consistency criterion will never become trivial under the student-teacher weak-strong data augmentation framework. 
In such a framework, a strongly augmented input (not the one used for generating the pseudo-label) will be fed into the student network and thus the output logits cannot be guaranteed to be identical to the teacher model. The same situation is true for our proposed consistency loss.\n\n> Limitations: [...] the benefits from your method seem limited if more unlabeled data is available. Do you have an intuition why?\n\nIn our paper, we mainly report experimental results of varying numbers of labeled samples (e.g., 1/16, 1/8, 1/4 and 1/2 of the training set are selected to construct the labeled set, respectively) but not the unlabeled ones. According to the results, performance improvement of our approach with more labeled samples is not as big as with fewer labeled data. We think the potential reason is that more finely annotated labeled samples will alleviate the challenge of large intra-class variation problem and ease the label-propagation from labeled pixels to unlabeled ones, and thus the benefits of the proposed approach become less prominent.",
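The EMA update described in the answer to Question 3 can be written in a few lines; a minimal PyTorch sketch, where the momentum value and the toy module are illustrative choices:

```python
import copy
import torch

@torch.no_grad()
def ema_update(teacher, student, momentum=0.999):
    # Teacher weights track an exponential moving average of the student
    # weights, as in the Mean Teacher approach [6] referenced above.
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(momentum).add_(p_s, alpha=1.0 - momentum)

student = torch.nn.Conv2d(3, 8, kernel_size=3)
teacher = copy.deepcopy(student)   # initialised from the student
ema_update(teacher, student)       # called after every optimisation step
```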
" Thank you for your positive review and useful remarks! Below please find our responses\n\n> Weakness1: [...] is there a concrete metric that helps us understand that existing methods really perform poorly on intra-class differences, while your method does better? [...]\n\nThank you for your suggestions and we can borrow the principle of linear discriminant analysis(LDA) and calculate the intra-/inter- class variance of the feature representations for each comparing method. The results shown in the following table reveal that our method has not only improved the intra-class variance but also the inter-class variance, and thus the overall discrimination.\n\n| Classic VOC(1/16 setting) | $\\frac{tr(\\text{inter-class var})}{tr(intra-class var)} \\uparrow$ | $tr(\\text{inter-class var})\\uparrow$ | $tr(\\text{intra-class var})\\downarrow$ |\n| :--- | :----: | :----: | :----: |\n| U2PL | 0.48 | 80.78 | 168.30 | \n| Ours w/o prototype-based classifier |0.45 |76.01| 168.92 |\n| Ours | 2.22 | 283.43 | 127.63 |\n\n\n> Weakness2: There are really two parts to this framework - one is FixMatch style teacher-student model with weak-strong data augmentation, while another is the proposed consistency loss. Each of these have to evaluated separately to better understand the benefits. Currently, ablations are only present on the consistency part.\n\nOur method is based upon the student-teacher weak-strong augmentation framework and our main contribution is the new consistency regularization loss. That’s why we focus more on the consistency part. To evaluate the impact of the first part, we conducted an experiment by varying the data augmentation approaches while keeping the consistency part unchanged. The results shown in the following table present that our method can still achieve overall best segmentation results with different strong data augmentations.\n\n| Classic VOC (1/16 setting) | Cutout[1] | ClassMix[2]|\n| :--- | :----: | ---: |\n| U2PL | 66.82 | 67.77 |\n| Ours w/o prototype-based classifier | 66.86 | 66.93 |\n| Ours | 69.24 | 69.36 |\n\n[1] Terrance DeVries and Graham W Taylor. ''Improved regularization of convolutional neural networks with cutout.'' CoRR, abs/1708.04552, 2017. \n\n[2] Viktor Olsson, Wilhelm Tranheden, Juliano Pinto, Lennart Svensson. ''ClassMix: Segmentation-Based Data Augmentation for Semi-Supervised Learning'', WACV2021\n\n> Weakness3: Authors are encouraged to use the published versions in citations instead of the arxiv-only versions in an updated version.\n\nThanks for your suggestions and we have updated the references in our paper.\n\n> Question1: [...] why this method has to work. What part of your pipeline is the most effective and makes the most difference compared to prior work? [...]\n\nThe newly proposed prototype-based consistency regularization is the main contribution in our paper and it makes the most difference compared to prior works (Line96-101 of our paper presents the difference between our proposed approach and existing methods). The performance presented in Table.1, 2 and 3 of our paper already demonstrate the superiority of our work compared to previous state-of-the-art methods. 
The ablation studies in Table 4 of our paper further verify the effectiveness of the proposed prototype-based consistency regularization module in our approach (comparing ① and ④, the prototype-based consistency regularization module brings 2.1% and 3.7% performance improvement on the classic PASCAL VOC 2012 1/16 and 1/8 setting, respectively).\n\n> Question2: What is done at test-time? If I understand correctly, you use the student model along with the classifier based predictions at test-time? How does the performance vary if we use the prototype based prediction instead?\n\nWe use the teacher model with the linear classifier based predictions at test-time and the prototype classifier will be discarded after training. This is consistent with most of the comparing methods, e.g., U2PL[3], AEL[4] and CPS[5], which also use the teacher model for prediction at test-time and thus the comparison is fair. The prototype-based performance has been presented in the following table and It is obvious that the prototype-based classifier can also achieve comparable performance (but slightly worse) to the linear based classifier.\n\n| Classic VOC | 1/16(92) | 1/8(183) | 1/4(366) | 1/2(732) | full(1464) | \n| :--- | :----: | :----: | :----: |:----: |:----: |\n| Linear classifier | 70.06 | 74.71 | 77.16 | 78.49 | 80.65 \n| Prototype-based classifier | 69.89| 74.31| 77.01 |77.94 | 79.82\n\n[3] Yuchao Wang, et al. ‘‘Semi-supervised semantic segmentation using unreliable pseudo-labels’’. CVPR2022.\n\n[4] Hanzhe Hu, et al. ‘‘Semi-supervised semantic segmentation via adaptive equalization learning’’. NeurIPS 2021.\n\n[5] Xiaokang Chen, et al. ''Semi-supervised semantic segmentation with cross pseudo supervision''. CVPR2021.",
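For reference, the LDA-style metric quoted in the tables above can be sketched as follows; since the rebuttal does not spell out the exact normalisation, the per-class averaging here is our assumption:

```python
import numpy as np

def scatter_traces(feats, labels):
    # tr(intra-class var) and tr(inter-class var) in the LDA sense: the trace
    # of within-class covariance, and the trace of the scatter of class means
    # around the global mean, both averaged over classes (our assumption).
    mu = feats.mean(axis=0)
    classes = np.unique(labels)
    intra = np.mean([feats[labels == c].var(axis=0).sum() for c in classes])
    inter = np.mean([((feats[labels == c].mean(axis=0) - mu) ** 2).sum()
                     for c in classes])
    return intra, inter

feats = np.random.randn(1000, 16)              # (N, d) pixel features
labels = np.random.randint(0, 21, size=1000)   # (N,) class ids
intra, inter = scatter_traces(feats, labels)
print(inter / intra)   # higher ratio = better class separation
```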
" Thank you for your review and useful remarks! Below please find our responses\n\n> Weakness1: Limited novelty. From my point of view, the paper only proposes a consistency regularization method on the prediction head. The contribution is quite limited.\n\nComparing with the baseline model, our method only introduces a simple auxiliary prototype predictor (will be removed after training) and an additional consistency regularization loss. With them, we show that we can achieve significant performance improvement. We believe that this is an interesting discovery and could be valuable for advancing semi-supervised segmentation research. Moreover, we want to highlight that the contribution of many recent semi-supervised learning methods is “only proposes a consistency regularization”, e.g., see examples in the following papers. In our view, a simple-but-effective consistency regularization should be considered as a merit rather than a disadvantage.\n\n[1]Temporal ensembling for semi-supervised learning, ICLR2017\n\n[2] Virtual Adversarial Training:a Regularization Method for Supervised and Semi-supervised Learning, TPAMI2017\n\n[3]Mean teachers are better role models: Weight-averaged **consistency** targets improve semi-supervised deep learning results. NeurIPS2017\n\n[4] Interpolation **Consistency** Training for Semi-Supervised Learning, IJCAI2019\n\n[5] WCP: Worst-Case Perturbations for Semi-Supervised Deep Learning, CVPR2020\n\n[6] FeatMatch: Feature-Based Augmentation for Semi-Supervised Learning, ECCV2020\n\n[7] Unsupervised Data Augmentation for **Consistency** Training, NeurIPS2020\n\n[8] FixMatch- Simplifying Semi-Supervised Learning with **Consistency** and Confidence, NeurIPS2020\n\n[9] Time-**Consistent** Self-Supervision for Semi-Supervised Learning, ICML2020\n\n[10] Adaptive **Consistency** Regularization for Semi-Supervised Transfer Learning, CVPR2021\n\n> Weakness2: Prototype generation. How to deal with false prediction on unlabeled images?\n\nPrototypes in our approach are initialized by clustering pixel representations from labeled samples (see Line 237-247) and they will be dynamically updated based on pseudo-labels of unlabeled samples (see Line 194-208). Same as most of the state-of-the-art semi-supervised learning/segmentation approaches, pseudo-labels are inevitably noisy. But the existing teacher-student semi-supervised learning framework has shown being robust towards noisy pseudo-labels. Moreover, our prototype is calculated by averaging features from a cluster. This average operation could also resist the distraction from wrongly-labeled features.\n\n> Weakness3: Experimental results. The results of previous methods on Cityscapes dataset seem strange to me. They are different from the original results of the papers.\n\nSince U2PL[11] is a recently proposed state-of-the-art semi-supervised semantic segmentation method, we directly use the same codebase of U2PL in our paper for a fair comparison.\n\nTo make a fair comparison, the original U2PL paper reimplemented existing methods with an unified setting. We directly quote the evaluation results reported in U2PL for our comparing methods. In this way, we can ensure all the methods are under the exact same setting. 
Note that due to implementation discrepancy, e.g., the authors of AEL method[12] use random label splits in their experiments, while the authors of U2PL use a fixed labeled set for the convenience of comparison, the results reported in the original paper and in U2PL (and thus our paper) could be different.\n\n[11] Yuchao Wang, et al. ''Semi-supervised semantic segmentation using unreliable pseudo-labels''. CVPR2022\n\n[12] Hanzhe Hu, et al. ''Semi-supervised semantic segmentation via adaptive equalization learning''. NeurIPS 2021. \n\n> Weakness4: Feature visualization. Although intra-class compactness is improved, inter-class discrimination is weakened. Will this have bad influence on segmentation?\n\nActually, the inter-class discrimination of our approach has also been improved comparing with other methods. In order to quantitatively measure the intra-/inter- class discrimination, we can borrow the principle of linear discriminant analysis(LDA) and calculate the intra-/inter- class variance of the feature representations for each comparing methods. As seen, our approach produces higher inter-class variance and thus be more discriminative.\n\n| Classic VOC(1/16 setting) | $\\frac{tr(\\text{inter-class var})}{tr(intra-class var)} \\uparrow$ | $tr(\\text{inter-class var})\\uparrow$ | $tr(\\text{intra-class var})\\downarrow$ |\n| :--- | :----: | :----: | :----: |\n| U2PL | 0.48 | 80.78 | 168.30 | \n| Ours w/o prototype-based classifier |0.45 |76.01| 168.92 |\n| Ours | 2.22 | 283.43 | 127.63 |",
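A minimal sketch of the noise-resistant prototype maintenance described in the answer to Weakness 2: confidence filtering plus momentum averaging. It is shown with a single prototype per class for brevity (the method keeps several micro-prototypes per class), and the threshold and momentum values are illustrative:

```python
import torch

def update_prototypes(protos, feats, pseudo, conf, thresh=0.8, m=0.99):
    # Momentum update of class prototypes from confident pseudo-labels only;
    # averaging over many pixels dilutes the few wrongly-labelled ones.
    keep = conf > thresh                      # drop unreliable pseudo-labels
    for c in pseudo[keep].unique():
        mask = keep & (pseudo == c)
        protos[c] = m * protos[c] + (1 - m) * feats[mask].mean(dim=0)
    return protos

protos = torch.randn(21, 256)                 # C classes, d-dim prototypes
feats = torch.randn(4096, 256)                # sampled pixel features
conf, pseudo = torch.rand(4096), torch.randint(0, 21, (4096,))
protos = update_prototypes(protos, feats, pseudo, conf)
```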
" Thank you for your positive review and useful remarks! Below please find our responses\n\n> Weakness1: Ablation study of the choice of confidence threshold is absent.\n\nThanks for your suggestion. We have added such an ablation study to the newly updated supplementary material. From the result, we find that our approach can achieve good performance when the confidence threshold falls into a reasonable range, e.g., [0.75, 0.95]. \n\n| Classic VOC(1/16 setting) | 0.95 | 0.90 | 0.85 | 0.80 | 0.75 | 0.70 |\n| :--- | :----: | :----: | :----: | :----: | :----: |:----: |\n| Linear classifier | 71.01 | 70.97 | 70.30 | 70.06 | 69.43 |64.89 \n| Prototype-based classifier|70.72|70.74| 70.10 | 69.89 | 68.92 |64.68\n\n> Weakness2: Some details are missing. For example, L189 'in a fully-supervised way for several epochs', what the exactly hyper parameters are used?\n\nSorry for the confusion. L189 presents how prototypes are initialized in our approach. We train the segmentation network on given labeled samples with the same training protocols as the Supervised Only baseline, i.e., train DeepLabv3+ with `batchsize=16`, `initial learning rate=1.0*10^-3`, `weight decay=1.0*10^-4` and `80 training epochs`. The corresponding training details have been presented in Line237-247.\n\n> Questions1: I thinks an Algorithm describing the global process is necessary. Otherwise it is difficult to reproduce the experiment results.\n\nThanks for your suggestions. We have added the following algorithm table to the supplementary material due to the space constraint of the main paper.\n\n```\nAlgorithm procedure of our approach\nInput: labeled images $D^l$, unlabeled images D^u\nOutput: teacher semantic segmentation network with linear predictor only\nProcess:\n1. Prototype initialization, please see Section 3.4 for details\n2. For step in range(epoch):\n3. \tStudent semantic segmentation network update:\n4. \t\tSample a batch of labeled samples and unlabeled samples;\n5. \t\tFor labeled data, the student model is updated based on the given ground truth, please refer to Eq.(3)-(6) of main paper;\n6. \t\tFor unlabeled data, weakly augmented version is fed into the teacher model to generate pseudo-labels and the student model is updated with the strongly augmented unlabeled sample based on the pseudo-labels. Please refer to Eq. (8)-(10) of main paper;\n7. \t\tUpdate prototypes based on the ground truth of labeled samples and the pseudo-labels of unlabeled samples, please refer to Eq. (11) of main paper;\n8. \tTeacher semantic segmentation network update: exponential moving average (EMA) of the parameters of the student model.\n```\n\n> Question2: Please explain “EMA update” in Figure 1.\n\nThe “EMA update” operation in our paper follows the approach proposed in Mean Teacher method[1] which is a popular semi-supervised algorithm, i.e., the parameters of the teacher network are updated through the exponential moving average of the parameters of the student network at each optimization step.\n\n[1] Antti Tarvainen and Harri Valpola. ''Mean teachers are better role models: Weight-averaged con- sistency targets improve semi-supervised deep learning results''. NeurIPS2017.\n\n> Question3: Is Prototype similar to assign more than one template in classifier layers? For example, if there are 21 category, the classifier layer is set to 512x48.\n\nWe have conducted the experiment suggested by the Reviewer LwA4 and the results are shown in the following table. 
As seen, simply increasing the number of classifiers per class does not bring any improvement over the baseline. This suggests that using the prototype-based classifier and our cross-predictor consistency regularization loss is the key to success rather than using multiple classifiers.\n\n| Classic VOC | 1/16(92) | 1/8(183) | 1/4(366) |\n| :--- | :----: | :----: | :----: |\n| single classifier layer per class | 67.95 | 70.99 | 75.43 |\n| 4 classifier layers per class | 67.76 | 70.89 | 75.41 |",
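To make the contrast with a stack of linear classifiers concrete, here is a sketch of how a prototype-based predictor scores pixels, together with one cross-predictor consistency term. The scoring follows the cosine-similarity description in the rebuttals; the loss is one plausible instantiation, not the exact Eq. (8)-(10) of the paper, and all names and shapes are ours:

```python
import torch
import torch.nn.functional as F

def prototype_logits(feats, protos):
    # Prototype-based predictions: cosine similarity to K micro-prototypes
    # per class, with the best within-class match as the class score.
    f = F.normalize(feats, dim=-1)             # (N, d) pixel features
    p = F.normalize(protos, dim=-1)            # (C, K, d) prototypes
    sim = torch.einsum("nd,ckd->nck", f, p)    # cosine similarities
    return sim.max(dim=-1).values              # (N, C) class logits

def cross_predictor_consistency(lin_logits, proto_logits, thresh=0.8):
    # Confident pseudo-labels from the linear head supervise the prototype
    # head (a temperature on the cosine logits is omitted for brevity).
    conf, pseudo = lin_logits.softmax(-1).max(-1)
    mask = (conf > thresh).float()
    loss = F.cross_entropy(proto_logits, pseudo, reduction="none")
    return (loss * mask).sum() / mask.sum().clamp(min=1.0)
```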
" Thanks for your positive review and valuable feedback! Below, we address your points individually.\n\n> Weakness1.1: The motivation and the principle are not clear. \n\nMotivation has been presented in Line1-7 and Line28-31 of our paper. Specifically, we target the problem of large intra-class variation and challenges of propagating pseudo labels in semi-supervised semantic segmentation. Our key idea is to regularize the feature representation to improve the label propagation process.\n\n> Weakness1.2: Why do you use the prototype-learning predictor? What are the inherent differences between the linear predictor and the prototype-based predictor? Why does this work? Line 124-130 only demonstrates the differences on the practice. It is better to analyze or provide more reasons behind the regularization.\n\n''Inherent difference and why it works'' have been presented in Line 34-40, Line 124-130 and Line 170-183 of our paper. Moreover, the linear classifier has learnable parameters and can adapt to imperfect features to fit pseudo-labels. The prototype-based classifier does not involve learnable parameters and thus to match similar predictions of the linear classifier, the prototype-based classifier calls for better feature representation (see the example introduced in Line 170-183). Thus our cross-predictor consistency loss encourages better feature learning for semi-supervised learning.\n\n> Weakness2.1: It is better to provide more details about the prototype initialization and updating, which are confused now. \n\nSorry for the confusion. Line 188-193 of our paper gives the details for prototype initialization, which consists of the following steps.\n1. Train the semantic segmentation network on the given limited fully-labeled samples.\n2. Use the trained segmentation network to extract feature representations of these labeled samples (i.e. the feature representation before feed into the classifier of DeepLabv3+ and perform interpolation on the feature representation to match the input image size). We then sample a certain amount of pixels with their representations for each category.\n3. Perform k-means clustering (other clustering methods are also possible) on sampled pixel representations from each category. This step creates K sub-classes for each category. We use the feature average of samples in each subclass to obtain the initial prototypes of each category.\n\nMeanwhile, we dynamically udpate the prototypes during the model optimization and Line194-205 presents the details of how prototypes are updated.\n\n > Weakness2.2: What does the learned prototype represent? \n \n Intuitively, the learned prototypes represent the centers of subclasses in each category. Please see the Figure 2 \\(c\\) of our newly updated paper.\n \n > Weakness2.3: How to perform clustering on them to find out internal sub-classes?\n\nThe sub-classes are identified by performing K-means clustering on pixel representations of each category from the labeled images only. More details are given in Line 188-193 and our response to weakness 2.1 of Reviewer sZ7j.\n \n > Weakness2.4: What is the definition of the distance between the pixel and the prototypes?\n\nWe use the cosine similarity between the feature representation of pixels and the prototypes in our paper. 
Please find the description in Line150 of our paper.\n\n> Weakness3: Why are the improvements on the blender setting of the PASCAL VOC 2012 incremental?\n\nBlender setting of the PASCAL VOC 2012 is constructed with a finely annotated dataset and an augmented coarsely annotated dataset. Given the labeled set is noisy, the semi-supervised semantic segmentation algorithms may be mis-guided and the performance will be compromised.\n\n> Weakness4: It is better to perform the ablation studies about the CutMix. What is the performance about using other strong augmentations?\n\n\n| Classic VOC (1/16 setting) | Cutout[1] | ClassMix[2]|\n| :--- | :----: | ---: |\n| U2PL | 66.82 | 67.77 |\n| Ours w/o prototype-based classifier | 66.86 | 66.93 |\n| Ours | 69.24 | 69.36 |\n\nWe perform ablation studies about using other strong data augmentation in our approach by incorporating another two popular data augmentations: Cutout[1] and ClassMix[2]. According to the results shown in the above table, our method can still outperform other comparing methods. It verifies the effectiveness of our proposed prototype-based consistency regularization and indicates that it is not just for CutMix but should apply to any strong data augmentations.\n\n[1] Terrance DeVries and Graham W Taylor. ''Improved regularization of convolutional neural networks with cutout.'' CoRR, abs/1708.04552, 2017. \n\n[2] Viktor Olsson, Wilhelm Tranheden, Juliano Pinto, Lennart Svensson. ''ClassMix: Segmentation-Based Data Augmentation for Semi-Supervised Learning'', WACV2021",
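The three initialization steps listed in the answer to Weakness 2.1 can be sketched as follows; K, the sampling size and the seed are illustrative choices rather than the paper's exact values:

```python
import numpy as np
from sklearn.cluster import KMeans

def init_prototypes(feats_per_class, K=4, n_sample=10000, seed=0):
    # K-means initialisation of K sub-class prototypes per category from
    # features extracted by the supervised-only model (steps 1-3 above).
    # feats_per_class: dict mapping class id -> (N_c, d) pixel features.
    rng = np.random.default_rng(seed)
    protos = {}
    for c, X in feats_per_class.items():
        idx = rng.choice(len(X), size=min(n_sample, len(X)), replace=False)
        km = KMeans(n_clusters=K, n_init=10, random_state=seed).fit(X[idx])
        protos[c] = km.cluster_centers_        # (K, d) sub-class centers
    return protos
```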
" This work focuses on the semi-supervised semantic segmentation. To solve the large intra-class variation problem, this work attempts to regularize the distribution of within-class features. The proposed methods is mainly based on the student-teacher framework with a well-designed prototype-based predictor and a widely-used linear predictor. It encourages the consistency between the linear predictor and prototype-based predictor to regularize the distribution of within-class features. Extensive experiments have been conducted to validate the effectiveness of this method. The proposed method achieves promising performance on different datasets and settings. Strengths:\n1. The idea is simple and clear.\n2. Comprehensive experiments have conducted to validate the improvements of the proposed methods.\n\n\nWeakness:\n1. The motivation and the principle are not clear. Why do you use the prototype-learning predictor? What are the inherent differences between the linear predictor and the prototype-based predictor? Why does this work? Line 124-130 only demonstrates the differences on the practice. It is better to analyze or provide more reasons behind the regularization.\n2. It is better to provide more details about the prototype initialization and updating, which are confused now. What does the learned prototype represent? How to perform clustering on them to find out internal sub-classes? What is the definition of the distance between the pixel and the prototypes?\n3. Why are the improvements on the blender setting of the PASCAL VOC 2012 incremental?\n4. It is better to perform the ablation studies about the CutMix. What is the performance about using other strong augmentations? 1. Please demonstrate more details about how the regularization between prototype-based predictor and linear predictor works.\n2. Please provide more details about the prototype initialization and updating.\n3. Please conduct ablative experiments about the CutMix. This work discussed the limitation on the model architectures. \nMaybe it is necessary to consider the potential data distribution unbalance problem.",
" This paper is about semi-supervised semantic image segmentation. A moment distillation method is used and the authors proposed to use pixels only with confidence larger than 0.8 as supervision.\nExperiments are done on Pascal VOC12 and Cityscapes.\n [Strengths]\n\n1. The experimental results seems pretty good from Table 1 and Table 3. \n2. The combination of cutmix and distillation is interesting.\n\n\n[Weakness]\n\n1. Ablation study of the choice of confidence threshold is absent.\n2. Some details are missing. For example, L189 'in a fully-supervised way for several epochs', what the exactly hyper parameters are used?\n3. Eevey 1. I thinks an Algorithm describing the global process is necessary. Otherwise it is difficult to reproduce the experiment results.\n\n2. Please explain “EMA update” in Figure 1.\n\n3. Is Prototype similar to assign more than one template in classifier layers? For example, if there are 21 category, the classifier layer is set to 512x48. Yes",
" The paper proposes a novel approach to regularize the distribution of intra-class features to ease label propagation difficulty. In particular, the proposed method adopts a standard linear predictor and a prototype-based predictor and encourages the consistency between predictions from two predictors. Experimental results validate the effectiveness of the proposed method. Strength:\n\nThe paper is easy to follow. The proposed method is effective. Experimental results are promising.\n\nWeakness:\n\n(1) Limited novelty. From my point of view, the paper only proposes a consistency regularization method on the prediction head. The contribution is quite limited.\n\n(2) Prototype generation. How to deal with false prediction on unlabeled images?\n\n(3) Experimental results. The results of previous methods on Cityscapes dataset seem strange to me. They are different from the original results of the papers. \n\n(4) Feature visualization. Although intra-class compactness is improved, inter-class discrimination is weakened. Will this have bad influence on segmentation? Please see weakness above. Yes, the authors have addressed the limitations and potential negative social impact.",
" This paper proposes a consistency based scheme for semi-supervised semantic segmentation, where they employ a student-teacher framework and enforce consistency between predictions made by a prototypical classifier and a linear classifier. By selecting multiple prototypes for each class, it is hypothesized that the limitation of intra-class distances can be addressed. -- Strengths\n\n- The idea of using complementary training signals from linear classifier and prototypical classifier to enforce consistency is interesting.\n- Apart from a few required clarifications (see below), the paper is well written and easy to follow, and the intuitions and concepts have been explained well. \n- The performance on reported datasets is extremely strong. I am indeed surprised that a method as simple as this results in such strong performance and large improvements. \n\n-- Weaknesses\n\n- While the abstract and the intuition is built upon the fact that this method reduces \"intra-class\" discrepancies, it has not been explained or demonstrated that this is indeed a limitation in prior methods. It has also not been shown that this method better alleviates this issue compared to prior methods. For example, is there a concrete metric that helps us understand that existing methods really perform poorly on intra-class differences, while your method does better? The tSNE plots are inconclusive (the highlighted cluster already looks compact in first two plots as well, and the cluster for label \"4\" still forms two distinct clusters using your method, same as \"supervised-only\"). \n\n- There are really two parts to this framework - one is FixMatch style teacher-student model with weak-strong data augmentation, while another is the proposed consistency loss. Each of these have to evaluated separately to better understand the benefits. Currently, ablations are only present on the consistency part. \n\n-- Minor\n- Authors are encouraged to use the published versions in citations instead of the arxiv-only versions in an updated version.\n\n - Overall, while I appreciate the simplicity and effectiveness of the method, it is not clear why this method has to work. What part of your pipeline is the most effective and makes the most difference compared to prior work? This analysis is not present in the paper. I am willing to raise my rating if this is cleared.\n\n- What is done at test-time? If I understand correctly, you use the student model along with the classifier based predictions at test-time? How does the performance vary if we use the prototype based prediction instead?\n\n- How is the teacher model initialized and updated?\n\n- From fig2, any reason why 3-4 prototypes should work better instead of just one? Wouldn't the intra-class separation be least when all features are clustered around one center rather than distributed around 3-4 centers?\n\n- Are the prototypes computed in the feature space? If so, I believe that the feature space for segmentation models is of lesser resolution - how do you get pixel level labels at this reduced resolution to compute class wise prototypes?\n\n- It would also be interesting to visualize the prototypes themselves along with the features in the same space, and know what is actually being learnt by the prototypes and the classifier. I am more curious to know how they are different. Assume a case when all prototypes belonging to a class match the classifier weight vector for that class. 
All the conditions would be satisfied, but the consistency criterion would be trivial in that case. The societal impact has been addressed. The limitations section could be better. For example, the benefits from your method seem limited if more unlabeled data is available. Do you have an intuition why?"
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
5,
4
] | [
"GZo_FzF6ZNa",
"WnSiDnsCW3R",
"lxX42O3_RvD",
"KXM6dSoNRTq",
"dK-Td-AfT4-",
"xZkgjt-8iDL",
"nips_2022_e3qH65r_eZS",
"uCkPpy5gtjd",
"uCkPpy5gtjd",
"dK-Td-AfT4-",
"KXM6dSoNRTq",
"lxX42O3_RvD",
"nips_2022_e3qH65r_eZS",
"nips_2022_e3qH65r_eZS",
"nips_2022_e3qH65r_eZS",
"nips_2022_e3qH65r_eZS"
] |
nips_2022_5L-wxm0YLcZ | CoupAlign: Coupling Word-Pixel with Sentence-Mask Alignments for Referring Image Segmentation | Referring image segmentation aims at localizing all pixels of the visual objects described by a natural language sentence. Previous works learn to straightforwardly align the sentence embedding and pixel-level embedding for highlighting the referred objects, but ignore the semantic consistency of pixels within the same object, leading to incomplete masks and localization errors in predictions. To tackle this problem, we propose CoupAlign, a simple yet effective multi-level visual-semantic alignment method, to couple sentence-mask alignment with word-pixel alignment to enforce object mask constraint for achieving more accurate localization and segmentation. Specifically, the Word-Pixel Alignment (WPA) module performs early fusion of linguistic and pixel-level features in intermediate layers of the vision and language encoders. Based on the word-pixel aligned embedding, a set of mask proposals are generated to hypothesize possible objects. Then in the Sentence-Mask Alignment (SMA) module, the masks are weighted by the sentence embedding to localize the referred object, and finally projected back to aggregate the pixels for the target. To further enhance the learning of the two alignment modules, an auxiliary loss is designed to contrast the foreground and background pixels. By hierarchically aligning pixels and masks with linguistic features, our CoupAlign captures the pixel coherence at both visual and semantic levels, thus generating more accurate predictions. Extensive experiments on popular datasets (e.g., RefCOCO and G-Ref) show that our method achieves consistent improvements over state-of-the-art methods, e.g., about 2% oIoU increase on the validation and testing set of RefCOCO. Especially, CoupAlign has remarkable ability in distinguishing the target from multiple objects of the same class. Code will be available at https://gitee.com/mindspore/models/tree/master/research/cv/CoupAlign. | Accept | The paper was reviewed by four reviewers and received all positive scores at the end: 2 x Borderline Accepts and 2 x Weak Accepts. Most initial concerns with the paper were with exposition and experimental validation. These concerns, however, were addressed convincingly during the rebuttal period with additional experiments and ablations, as well as direct edits to the manuscript itself. In the current form the paper would be a valuable contribution to NeurIPS program. | train | [
"Rg8LYtmTkIC",
"SOaYDpPKvL",
"zT0RE0-WyH",
"Jzcf1vWZT_h",
"avTqqobEXZv",
"UMuFrU0gPv5",
"yB6W1dO7OiM",
"4micdEBIcDm",
"-wLYglob9v3",
"SbHbNz-BrcS",
"tHmkXJ1TFfF",
"Kxu5HXiEMr-",
"eTZ3yEH8DS",
"voy10ZiwcrY",
"stUL42Asjk-",
"tDT8k4vmxaV",
"PoGEGBimkU",
"FmnonUmP3hh",
"Qmz6SSE-XkK",
"SufrqIMEU-"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer,\n\nThank you very much for your support!\n",
" Thanks for solving my concerns with new experiments and detailed explanation. And I am glad to raise my rating up for accepting the paper.",
" Dear reviewer,\n\nThank you very much for your support!",
" Thanks for the new experiments on comparing WPA and CrossAttn, it's assuring to see that the specific instantiation of the attention module does not matter much. \n\nOverall, I appreciate the author's efforts and I'm satisfied with the author's response to my questions and concerns. I'm happy to raise my rating up to accept the paper. ",
" Dear reviewer, thank you very much for your reply!\n\n### **ReferIt and RefCOCO+**\n\nGood suggestion! We have added the experimental results of ReferIt and RefCOCO+ in the revised paper. \n\n### **Difference between WPA and CrossAttn**\n\nWe further conduct two experiments and show the results in the table below.\n\n1) When we replace CrossAttn with WPA the performance slightly drops (74.70% vs. 74.17% oIoU, 75.49% vs. 75.02% mIoU). The reason is that the CrossAttn is multi-head attention while WPA is single-head. In general, multi-head attention is more effective than single-head attention since it can jointly attend multiple positions.\n\n2) When we replace WPA with CrossAttn in our model, the performance is sligntly lower than CoupAlign (74.70% vs. 74.32% oIoU, 75.49% vs. 74.92% mIoU). The single-head WPA modules are changed into CrossAttn (which is multi-head) but does not cause performance improvement, because CoupAlign encoder uses multiple WPA modules such that the multi-layer single-head attention can also attend multiple positions. Moreover, increasing the attention heads of WPA can also increase the model complexity, which may harm the model robustness and cause slight damage to the performance.\n\n| Experimental Settings | oIoU | mIoU |\n| ------------------- | ----- | ----- |\n| CoupAlign (Ours) | **74.70** | **75.49** |\n| Replace CrossAttn with WPA | 74.17 | 75.02 |\n| Replace WPA with CrossAttn | 74.32 | 74.92 |",
" Dear reviewer, thank you very much for your reply! \n\n### **Difference between WPA and CrossAttn**\n\nAs you suggested, we further conduct two experiments and show the results in the table below.\n\n1) When we replace CrossAttn with WPA the performance slightly drops (74.70% vs. 74.17% oIoU, 75.49% vs. 75.02% mIoU). The reason is that the CrossAttn is multi-head attention while WPA is single-head. In general, multi-head attention is more effective than single-head attention since it can jointly attend multiple positions.\n\n2) When we replace WPA with CrossAttn in our model, the performance is sligntly lower than CoupAlign (74.70% vs. 74.32% oIoU, 75.49% vs. 74.92% mIoU). The single-head WPA modules are changed into CrossAttn (which is multi-head) but does not cause performance improvement, because CoupAlign encoder uses multiple WPA modules such that the multi-layer single-head attention can also attend multiple positions. Moreover, increasing the attention heads of WPA can also increase the model complexity, which may harm the model robustness and cause slight damage to the performance.\n\n| Experimental Settings | oIoU | mIoU |\n| ------------------- | ----- | ----- |\n| CoupAlign (Ours) | **74.70** | **75.49** |\n| Replace CrossAttn with WPA | 74.17 | 75.02 |\n| Replace WPA with CrossAttn | 74.32 | 74.92 |\n\n\n### **About Fig.3**\n\nWe agree with you that showing too many such examples may mislead the readers. \nIn Fig.3 of the revised paper, we have deleted some similar examples, and no longer compare specific cases with SOTA to show the superiority of our method. The new figure aims to illustrate the remarkable capability of our method in accurately segmenting the target object from crowds. What's more, we show failure cases of CoupAlign to analyze the limitation. \n\n### **Ablation study of WPA module**\n\nThanks for your recognization! We have added the experimental results in the revised paper.",
" ### Thanks for the authors' response.\n* It is better to integrate the additional experiments of ReferIt and RefCOCO+ in Table 1 (main paper).\n* Function descriptions are more precise than the previous version.\n* It is better to clarify the difference between WPA and CrossAttn, as recommended by the reviewer Vuxv.",
" Dear reviewer, \n\nWe have tried to address your concerns and all your questions in our earlier responses. If you have any additional questions or suggestions, we are very happy to discuss with you.\n",
" I thank the authors for their responses. \n\n- Difference between WPA and CrossAttn\nThe response regarding this point is still not fully satisfactory, since I was asking about the justification of using two different types of attentions modules. It would be nice to have an experiment ablating on this by applying either WPA and CrossAttn throughout. \n\n- About Fig.3\nI understand conceptually why the proposed method might produce more accurate delineation of object boundaries. However, as I expressed in my original question, I wasn't sure this translates to qualitative difference like in the case of \"man far right second step\" or \"right first bottom fridge\", or \"whole bear next to empty spot on center seat\". Therefore, I want to know whether this is a widespread qualitative difference that the authors observe, or these are more of a cherry picked results (which is fine, but you might not want to include too many examples like this, as that leads people to think it's a general phenomenon) \n\n- Ablation study of WPA module\nThanks for the explanation, this is helpful for better understanding the approach. ",
" \n### **Ablation study of WPA module**\n\nGreat suggestion! First of all, we insert a WPA module at different encoding stages. As shown in the table below, the WPA module at the 4-th stage is more effective than those inserted at other stages. In our experiment, we use four WPA modules, two of which are in the early encoding stage and the other two are in the late encoding stage. We also conduct two baseline models that alternatively remove two WPA modules at early or late encoding stages. As shown in the following table, when we remove the last two WPA modules the performance drops about 2\\% (74.70\\% vs. 72.74\\% oIoU, 75.49\\% vs. 73.87\\% mIoU), and when we remove the first two WPA modules the performance drops about 0.8\\% (74.70\\% vs. 73.61\\% oIoU, 75.49\\% vs. 74.68\\% mIoU). These results validate the effectiveness of WPA modules at both early and late stages and indicate that the latter WPA modules play a more important role in our model.\n\n| WPA's number | WPA's position | oIoU | mIoU |\n| ------------ | -------------- | ----- | ----- |\n| 4 | stage 1,2,3,4 | **74.70** | **75.49** |\n| 2 | stage 1,2 | 72.74 | 73.87 |\n| 2 | stage 3,4 | 73.93 | 74.88 |\n| 1 | stage 4 | 73.61 | 74.68 |\n| 1 | stage 3 | 72.53 | 73.47 |\n| 1 | stage 2 | 72.63 | 73.48 |\n| 1 | stage 1 | 72.59 | 73.97 |",
" We thank all the reviewers for their time, insightful suggestions, and valuable comments. \n\nWe respond to each reviewer's comments in detail below. We have also revised the manuscript according to reviewer's suggestions, and we believe this makes our paper much stronger. The main changes we made include:\n\n* In Section 3 of the revised paper, we correct typos and add notation clarifications. \n* In Table 4 of the revised paper, we correct the values in the 4-th row.\n* In Table 1 of the revised paper, we add the experimental results on two more datasets, i.e., ReferIt and RefCOCO+. \n* In Table 5 of the revised paper, we add the ablation study on the number and position of WPA modules.\n\nIn the revised manuscripts, we have marked the revisions in blue.",
" We list all your questions and respond to them point-to-point:\n\n* **Results in the 3-rd and 4-th row in Tab.4:** \"The precision@0.5/0.7/0.9 values of the 3-rd and 4-th row are the same.\"\n* **Attention weights of different words:** \"The attention weights on different words are not analyzed in the WPA module. It is better to show if the model correctly attends to the referring motion or appearance words.\"\n* **Remove both SMA and Aux Loss:** \"In table 4, I think there is one experiment missing, i.e., w/ Bi-WPA, w/o Uni-WPA, w/o SMA, w/o Aux Loss.\"\n* **SMA self-weighting baseline:** \"From Figure 2 and Table 4, I think the baseline without SMA is that directly summarizes all mask query embeddings without weights generated from the sentence features.\"\n* **Diversity statistics of generated masks:** \"Could you please show the diversity of masks generate from mask queries by statistics?\"\n* **Computational cost:** \"The computational cost or runtime should be discussed.\"\n* **Experiments on RefCOCO+:** \"Experiments on RefCOCO+ are missing. Why?\"\n* **Results on G-Ref:** \"Does it mean the proposed methods overfit the train and Val splits?\"\n* **Ablation study of WPA module:** \"The improvements of WPA when inserting it into different stages of the encoders should be discussed.\"\n\n\n### **Results in the 3-rd and 4-th row in Tab.4**\n\nWe apologize for the typos of the prec@0.5/0.7/0.9 values in the 4-th row of the table. The correct values are 85.32\\%, 75.31\\%, and 30.14\\%, respectively. We have corrected them in the revised paper. \n\n### **Attention weights of different words**\n\nHere we show some examples of word attention weights. They correspond to the visualization examples in Fig.4 in our paper. The word-level attention weights for \"child in blue shirt\" are (0.5, 0.0, 0.4, 0.1), the weights for \"man on right\" are (0.6, 0.1, 0.3), and the weights for \"lady in white dress\" are (0.5, 0.0, 0.1, 0.4).\n\n### **Remove both SMA and Aux Loss**\n\nIn Tab.4 of our paper, we have conducted ablation studies to remove each one of the three components (WPA, SMA, Aux Loss) to validate their effectiveness. Here we remove both SMA and Aux Loss as you suggested, obtaining 72.84\\% oIoU and 73.69\\% mIoU. \n\n### **SMA self-weighting baseline**\n\nThanks for your suggestion. We use an MLP layer to generate weights for the mask embeddings, and summarize the mask embeddings according to the weights. The performance of this self-weighting baseline drops (74.70\\% vs. 73.95\\% oIoU, 75.49\\% vs. 74.78\\% mIoU), which demonstrates the effectiveness of the cross-modal weighting of our SMA. \n\n\n### **Diversity statistics of generated masks**\n\nThanks for your suggestion. For each image, we calculate the IoU score between the generated masks and the ground-truth mask, and then count the number of masks according to the IoU scores to see the mask diversity. In the following table, we summarize the mask numbers at each IoU ranges on the test set of RefCOCO. We can observe that the distribution of the numbers of masks is approximately uniform over different IoU levels, which demonstrates the diversity of the generated masks. \n\n| IoU range | 0-0.2 | 0.2-0.5 | 0.5-0.7 | 0.7-1.0 |\n| --------------- | --------------- | ----- | --------------- |------|\n| percentage (\\%) | 25 | 10 | 27 | 38 |\n\n### **Computational cost**\n\nWe test the inference time of our CoupAlign and the most recent SOTA method LAVT on a NVIDIA V100 GPU. 
CoupAlign costs 38ms per image on average and LAVT costs 41ms, which demonstrates the computational efficiency of CoupAlign.\n\n### **Experiments on RefCOCO+**\n\nAs shown in the following table, the performance of CoupAlign on RefCOCO+ is comparable to or better than LAVT which is the most recent state-of-the-art method (published in CVPR2022). The improvements on RefCOCO+ are less significant than those on G-Ref and RefCOCO. This is because the language captions in RefCOCO+ hardly contain the descriptions of the relative or absolute spatial locations (e.g., \"closest\" or \"right bottom\") of the target object in images, and the effectiveness of our method mainly lies in the ability of localizing objects in such challenging scenarios, such as \"right first bottom fridge\" and \"middle row second kid from right\" in Fig.3 of our paper. The new experimental results have been added to the supplementary materials.\n\n| Method | val | testA | testB |\n| --------------- | --------------- | ----- | --------------- |\n| ReSTR[4] | 55.78 | 60.44 | 48.27 |\n| CRIS[6] | 62.27 | 68.08 | 53.68 |\n| LAVT[7] | 62.14 | **68.38** | 55.10 |\n| CoupAlign(Ours) | **62.92** | 68.34 | **56.69** |\n\n\n### **Results on G-Ref**\n\nIn our experiment, the role of validation set is the same as that of the test set. The performance difference may come from their different data distributions.\n",
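The diversity statistic tabulated in the response above is straightforward to reproduce. Below is a small sketch (array shapes and function names are our assumptions) that bins generated mask proposals by their IoU with the ground-truth mask.

```python
import numpy as np

def iou(mask, gt):
    inter = np.logical_and(mask, gt).sum()
    union = np.logical_or(mask, gt).sum()
    return inter / max(union, 1)  # guard against an empty union

def diversity_histogram(masks, gt, bins=(0.0, 0.2, 0.5, 0.7, 1.0)):
    """masks: (N, H, W) boolean mask proposals; gt: (H, W) boolean ground truth.
    Returns the percentage of proposals falling into each IoU range."""
    scores = np.array([iou(m, gt) for m in masks])
    counts, _ = np.histogram(scores, bins=bins)
    return 100.0 * counts / len(masks)
```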
" Thank you for your detailed comments. We will explain the novelty and differences from other works you mentioned in details below. We sincerely hope you can recognize the significance of our work. \n\n\n### **Novelty**\n\nRegarding the implementation of CoupAlign, the design of the core components is inspired by some existing modules in other computer vision tasks such as vision-language grounding [1], semantic segmentation [2] and object detection [3] as you mentioned in [Weaknesses]. However, directly combining these modules **CANNOT** achieve **fine-detailed word-pixel alignment** and **accurate sentence-mask alignment** (as shown in Fig.4) which are the main challenges of referring image segmentation. Here we compare our method with the works you mentioned one by one:\n\n1) Compare with GLIP [1]: Our WPA module is different from the cross-modal feature fusion in GLIP [1] in three aspects. First, the fusion module in [1] only achieves word-region alignment, while our WPA performs word-pixel alignmemt which is more fine-grained. Second, the fusion in [1] can only align object names to visual features, while WPA can align not only object names but also attribute and position words (e.g., blue, right, second, etc.). Last, our WPA contains two Gate units (see Eq.2 and Eq.6 in our paper) to regulate the information flow between language and visual features, but in [1] the features are directly fused without any regulating and filtering. \n\n2) Compare with loss in [2]: The main difference between our Aux Loss and the contrastive loss in [2] is that the loss in [2] enforces cross-image pixel embeddings belonging to the same semantic class to be more similar than embeddings from different classes, while our Aux Loss divides the pixels of the referred instance from others within an image no matter whether they belong to the same class or not.\n\n3) Compare with DETR [3]: The DETR decoder enumerates all possible box proposals in an image without any guidance, while SMA generates possible mask proposals guided by the language caption. To achieve this, we further employ the language embedding as an external query for weighting the masks and summarize their locations to find the target. \n\nIn summary, our modules are both well-motivated and elaborately designed for accurately segmenting object instances described by arbitrary lanaguage input. \n\nIn the following we conclude **our main contributions** from four perspectives: \n\n1) Well-designed framework components: The WPA module achieves fine-grained word-pixel alignment and can balance fusion and computation. The SMA module that achieves sentence-mask alignment is coupled together with the word-pixel alignment to form a new cross-modal alignment from local to global. \n\n2) A novel referring segmentation framework: Existing referring segmentation methods typically ignore the semantic consistency of pixels within the same object. To address this, our CoupAlign proposes to introduce mask constraints for enhancing the cross-modal alignment and generate more accurate segmentation results. CoupAlign is general and can be easily extended to weakly supervised data like image-text pairs.\n\n3) Strong performance: The quantitative results show that CoupAlign consistently outperforms existing SOTA methods on G-Ref, RefCOCO, RefCOCO+ and ReferIt, and the qualitative results show the remarkable ability of CoupAlign in localizing objects from crowds (see Fig.4). 
\n\n4) Code release: Since there are few available code bases to reproduce existing referring segmentation methods, we will release all our source code and models to promote future research.\n\n\n[1] Grounded Language-Image Pre-training, CVPR, 2022.\n\n[2] Exploring Cross-Image Pixel Contrast for Semantic Segmentation, ICCV, 2021.\n\n[3] End-to-End Object Detection with Transformers, ECCV, 2020.",
" Thanks for your detailed and constructive comments. We are glad to see that you appreciated our research on the interesting problem of segmenting images with arbitrary language input, our illustration and visualization, and the strong performance. \nWe believe that your main concern and rating are from some confusing notations in our manuscript. In the following we will address these issues one-by-one and we have clarified your concerns in the revised paper. We sincerely hope you can recognize the significance of our work. First of all, we list your questions:\n\n* **Notation clarification:** \"Notations are confusing.\"\n* **Difference between WPA and CrossAttn:** \"Since both WPA and Cross Attention are fusing the visual and language information, why using different attention computation for these two?\"\n* **About Fig.3:** \"Fig 3, are these examples representative or through cherry picking?\"\n\n* **Ablation study of WPA module:** \"it would be better to have more detailed ablations on questions like how many layers of fusion is needed? Or which fusion layer contributes the most?\"\n* **Questions about equations:** \"Eq.5\", \"Eq.14\", and \"L287\".\n\n### **Notation clarification**\n\nWe apologize for the confusing notations. \n1) Sorry for the typo. $d_k$ is equal to $d$. In our experiment, $d_s$ is set to 256 and $d_q$ is set to 512. They are not equal.\n2) In Eq.11, $Q_o$ is first projected via a fully-connected layer from $N \\times d_q$ to $N \\times D$, and then multiplied with $L_g$.\n3) In Eq.12, similar to $Q_o$, $Y_1$ is projected from $\\frac{H}{4} \\times \\frac{W}{4} \\times d_s$ to $\\frac{H}{4} \\times \\frac{W}{4} \\times D$ and then multiplied with $Q_o$.\n\n### **Difference between WPA and CrossAttn**\n\nThe WPA module is implemented based on bidirectional cross-attention to integrate both language information into visual embeddings and visual information into language embeddings. In contrast, CrossAttn in Eq.8 is a kind of unidirectional attention which only integrates language information into visual embeddings.\n\n### **About Fig.3**\n\nWe visualize the representative examples where the target objects are correctly localized by CoupAlign but typically missed by LAVT. The localization ability of CoupAlign is superior over LAVT, because 1) the word-pixel alignment exchanges information between visual and language encoders to generate rich and compact pixel-level embeddings and better understands the language captions, 2) the sentence-mask alignment built upon the well aligned pixel-level embeddings introduces mask constraints for the cross-modal alignment. In Tab.2, we can see that in different IoU ranges, the number of failure cases of CoupAlign is less than that of LAVT, which indicates that the masks predicted by CoupAlign have greater overlap with the ground-truth, and are more accurate than those of LAVT. \n\n\n### **Ablation study of WPA module**\n\nThanks for your valuable suggestion. In our experiment, we use four WPA modules, two of which are in the early encoding stage and the other two are in the late encoding stage. To study the effects of the numbers of WPA modules, we first conduct two baseline models that alternatively remove two WPA modules at early or late encoding stages. As shown in the following table, when we remove the last two WPA modules the performance drops about 2\\% (74.70\\% vs. 72.74\\% oIoU, 75.49\\% vs. 73.87\\% mIoU), and when we remove the first two WPA modules the performance drops about 0.8\\% (74.70\\% vs. 
73.61\\% oIoU, 75.49\\% vs. 74.68\\% mIoU). These results validate the effectiveness of WPA modules at both early and late stages and indicate that the latter WPA modules play a more important role in our model. Then, we only use one WPA module, which is inserted at different encoding stages. As shown in the table below, the WPA module at the 4-th stage is more effective than those inserted at other stages. \n\n| WPA's number | WPA's position | oIoU | mIoU |\n| ------------ | -------------- | ----- | ----- |\n| 4 | stage 1,2,3,4 | **74.70** | **75.49** |\n| 2 | stage 1,2 | 72.74 | 73.87 |\n| 2 | stage 3,4 | 73.93 | 74.88 |\n| 1 | stage 4 | 73.61 | 74.68 |\n| 1 | stage 3 | 72.53 | 73.47 |\n| 1 | stage 2 | 72.63 | 73.48 |\n| 1 | stage 1 | 72.59 | 73.97 |\n\n### **Questions about equations**\n\n1) Eq.5: $V'_i$ and $L'_i$ in Eq.5 should be swapped. \n\n2) Eq.14: The negative samples $y_k^-$ are sampled from the same image.\n\n3) L287: Because the calculation of SMA depends on the mask generator's output, when SMA is removed, the mask generator is also removed, and an MLP layer is directly used to the output of SegHead $Y_1$ to obtain the final prediction.",
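Since the reviewer's confusion concerned tensor shapes, a short shape walkthrough may help. The sketch below mirrors the clarified Eq.11-12 (project $Q_o$ and $Y_1$ to a common width $D$ before the multiplications); the value of $D$, the random tensors, and the final weighted aggregation step are illustrative assumptions of ours.

```python
import torch
import torch.nn as nn

N, d_q, d_s, D, H4, W4 = 20, 512, 256, 256, 56, 56  # illustrative sizes

proj_q = nn.Linear(d_q, D)  # projects Q_o: (N, d_q) -> (N, D)
proj_y = nn.Linear(d_s, D)  # projects Y_1: (H/4 * W/4, d_s) -> (H/4 * W/4, D)

Q_o = torch.randn(N, d_q)        # mask query embeddings
L_g = torch.randn(1, D)          # global sentence embedding
Y_1 = torch.randn(H4 * W4, d_s)  # SegHead output, flattened spatially

w = proj_q(Q_o) @ L_g.T              # Eq.11: (N, D) @ (D, 1) -> (N, 1) mask weights
masks = proj_y(Y_1) @ proj_q(Q_o).T  # Eq.12: (H4*W4, D) @ (D, N) -> per-mask maps
pred = masks @ w                     # sentence-weighted aggregation -> (H4*W4, 1)
```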
" We appreciate that you recognize the significance of our work. We will respond to your concerns in the following:\n\n* **Evaluation on more datasets:** \"The experiment misses evaluating standard datasets, ReferIt and RefCOCO+, for comparison with other RIS methods.\"\n\n* **Function descriptions:** \"Missing some function descriptions harms the readers to reimplement the proposed model.\"\n\n* **Projection matrices in section 3.2:** \"The shapes of the projection matrices mentioned in section 3.2 are not correct.\"\n\n### **Evaluation on more datasets**\n\n**1. ReferIt:** As shown in the table below, the performance of CoupAlign on ReferIt is higher than the state-of-the-art method ReSTR, which is published in CVPR2022. \n\n| Method | test |\n| --------- | --------------- |\n| LCSM [1] | 66.57 |\n| EFN [2] | 66.70 |\n| ReSTR [3] | 70.18 |\n| CoupAlign (Ours) | **73.28** |\n\n\n\n**2. RefCOCO+:** As shown in the following table, the performance of CoupAlign on RefCOCO+ is comparable to or better than LAVT which is the most recent state-of-the-art method (published in CVPR2022). The improvements on RefCOCO+ are less significant than those on G-Ref and RefCOCO. This is because the language captions in RefCOCO+ hardly contain the descriptions of the relative or absolute spatial locations (e.g., \"closest\" or \"right bottom\") of the target object in images, and the effectiveness of our method mainly lies in the ability of localizing objects in such challenging scenarios, such as \"right first bottom fridge\" and \"middle row second kid from right\" in Fig.3 of our paper.\n\n\n| Method | val | testA | testB |\n| --------------- | --------------- | ----- | --------------- |\n| ReSTR [3] | 55.78 | 60.44 | 48.27 |\n| CRIS [4] | 62.27 | 68.08 | 53.68 |\n| LAVT [5] | 62.14 | **68.38** | 55.10 |\n| CoupAlign (Ours) | **62.92** | 68.34 | **56.69** |\n\nThese new experimental results have been added to the supplementary materials.\n\n### **Function descriptions**\n\nSorry for the confusion. The SwinStage represents the network stages in Swin Transformer [6] and consists of linear embedding or patch merging layers together with Swin Transformer Blocks. Our visual encoder has four SwinStage. As is described in BERT [7], each of the three hidden layers are referred to as a BERTStage. Our language encoder is a BERT-BASE model which has 12 hidden layers and the layers can be divided into four BERTStage. WPA modules can be inserted after each of the SWinStage and BERTStage. CrossAttn represents a plain multi-head self-attention layer that takes the concatenated vision and language features $\\mathbb{R}^{(H_oW_o+T)\\times D}$ as inputs and outputs the vision features $\\mathbb{R}^{H_oW_o\\times D}$. MaskGenerator is a 6-layer transformer decoder, $S_o$ is the key and the value, and $Q$ is the query. \n\n### **Projection matrices in section 3.2**\n\nWe apologize for the typo in Line 149 of the original manuscript, where the shape of the $W_i^l$ should be $D\\times d$, but not $D\\times d_k$.\n\n___\n**Reference**\n\n[1] Linguistic structure guided context modeling for referring image segmentation. ECCV, 2020.\n\n[2] Encoder fusion network with co-attention embedding for referring image segmentation. CVPR, 2021.\n\n[3] Restr: Convolution-free referring image segmentation using transformers. CVPR, 2022.\n\n[4] Cris: Clip-driven referring image segmentation. CVPR, 2022.\n\n[5] Lavt: Language-aware vision transformer for referring image segmentation. 
CVPR, 2022.\n\n[6] Bert: Pre-training of deep bidirectional transformers for language understanding. ACL, 2018.\n",
" We are grateful for your comprehensive and encouraging review. We are pleased that you appreciate the technical contributions, the state-of-the-art performance, and the effectiveness of our modules and aux loss. In the following, we will respond to your concerns and questions:\n\n\n* **Role of WPA and ablation of WPA module:** \"What is the role of WPA module and how many do we need it.\", \"What is the good balance between fusion and computing?\"\n* **Compare with different cross-attention operations:** \"so it would be good to compare the difference between different cross-attention operations. Why not use multi-head attention on the WAP module?\"\n* **Extend to weakly supervised data:** \"Is it possible to extend the current framework to incorporate weakly supervised data such as image caption pairs?\"\n\n### **Role of WPA and ablation of WPA module**\n\n1) Role of WPA: Our model uses four WPA modules to achieve cross-modal alignment from local to global. Such elaborate alignments generate rich and compact embeddings at both the modalities. \n\n2) Ablation study of WPA module: In our experiment, we use four WPA modules, two of which are in the early encoding stage and the other two are in the late encoding stage. To study the effects of the numbers of WPA modules, we first conduct two baseline models that alternatively remove two WPA modules at early or late encoding stages. As shown in the following table, when we remove the last two WPA modules the performance drops about 2\\% (74.70\\% vs. 72.74\\% oIoU), and when we remove the first two WPA modules the performance drops about 1\\% (74.70\\% vs. 73.61\\% oIoU). These results validate the effectiveness of WPA modules at both early and late stages and indicate that the latter WPA modules play a more important role in our model. Then, we only use one WPA module, which is inserted at different encoding stages. As shown in the table below, the WPA module at the 4-th stage is more effective than those inserted at other stages. \n\n3) Balance between fusion and computing: When we remove all the WPA modules, the inference time reduces from 38ms to 34ms per image on a single NVIDIA V100 GPU, and the performance drops a lot, i.e., from 74.70\\% to 70.43\\% oIoU (see Tab.4 in our paper). Therefore, WPA module can bring significant performance gains with a small computational cost.\n\n\n| WPA's number | WPA's position | oIoU | mIoU |\n| ------------ | -------------- | ----- | ----- |\n| 4 | stage 1,2,3,4 | **74.70** | **75.49** |\n| 2 | stage 1,2 | 72.74 | 73.87 |\n| 2 | stage 3,4 | 73.93 | 74.88 |\n| 1 | stage 4 | 73.61 | 74.68 |\n| 1 | stage 3 | 72.53 | 73.47 |\n| 1 | stage 2 | 72.63 | 73.48 |\n| 1 | stage 1 | 72.59 | 73.97 |\n\n### **Compare with different cross-attention operations**\n\n\nOur WPA module is implemented based on the bidirectional cross-attention (BiAttn) module. The parallel co-attention module in [1] is similar to BiAttn but uses more learnable parameters. To compare with other attention variants, we replace BiAttn with the unidirectional attention (UniAttn) module and obtain lower results (74.70\\% vs. 72.70\\% oIoU, 75.49\\% vs. 73.44\\% mIoU, see Tab.4 in our paper). The reason is that UniAttn can only combine language information into the visual encoder but cannot exchange information between the two modalities. \n\nMoreover, if we change BiAttn into the multi-head version (four attention heads), the performance is slightly lower than the single-head counterpart (74.70\\% vs. 74.32\\% oIoU, 75.49\\% vs. 74.92\\% mIoU). 
The reasons include: 1) the effectiveness of multi-head attention stems from the ability of jointly attending to multiple positions, and the multi-layer single-head attention modules in our model can also attend to multiple positions; 2) increasing the attention heads of WPA can also increase the model complexity, which may harm the model robustness and cause slight damage to the performance. \n\n| WPA attention types | oIoU | mIoU |\n| ------------------- | ----- | ----- |\n| Bi-Attn | **74.70** | **75.49** |\n| Uni-Attn | 72.70 | 73.44 |\n| Bi-MultiHeadAttn | 74.32 | 74.92 |\n\n[1] Hierarchical question-image co-attention for visual question answering. NeurIPS, 2016.\n\n\n### **Extend to weakly supervised data**\n\nThanks for your valuable suggestion. CoupAlign can be trained in two ways with weak supervision data. 1) Do not change the existing framework: directly use the pseudo masks generated by conventional weakly supervised segmentation approaches as ground-truth masks. 2) Only small changes to the loss functions: before calculating the binary cross entropy loss, a global pool layer is added, and the loss is changed into the image classification loss. Then, change the auxiliary loss to a self-supervised version, i.e., first cluster the samples into groups and then divide the positive and negative samples. ",
" This paper proposed a cross-modal alignment model (CoupAlign) for referring image segmentation. The authors propose to use coupled sentence-mask alignments with word-pixel alignment to enforce the model learned more accurate and consistent segmentation masks. Word-pixel alignment serves as an early fusion module that fuses the features between each block of swing transformer feature and BERT feature. The Sentence mask alignment learns to weight the mask using the sentence embedding to localize the referred object. The authors benchmark the proposed method on 2 refer segmentation benchmarks and achieve state-of-the-art performance. [Strength]\n\n- WPA module serves as an early fusion module that can balance the fusion and compute. \n\n- The sentence mask alignment (SMA) module is a general module that can scale up to weakly supervised learning such as GLID. \n\n- The proposed model achieves state-of-the-art performance on refer segmentation benchmark. \n\n- The visualization of word-pixel alignment shows the effectiveness of the module and aux loss. \n\n[Weakness]\n\n- It would be good to conduct an ablation study on how many WPA module is needed in the model. Does the early fusion (first few blocks of the model) need the WPA module or the more WAP module the better? \n\n- It seems the WPA module use cross-attention similar to [1], so it would be good to compare the difference between different cross-attention operations. Why not use multi-head attention on the WAP module?\n\n[1] Lu, J., Yang, J., Batra, D. and Parikh, D., 2016. Hierarchical question-image co-attention for visual question answering. Advances in neural information processing systems, 29.\n\n - Is it possible to extend the current framework to incorporate weakly supervised data such as image caption pairs? \n\n- What is the role of WPA module and how many do we need it. What is the good balance between fusion and computing? \n\n- For cross attention, why not use multi-head attention? is there any specific reason? \n Yes",
" In order to employ the semantic consistency of pixels within the same object, this paper proposes a CoupAlign mechanism to couple a work-pixel alignment (WPA) and a sentence-mask alignment (SMA). WPA fuses the linguistic and pixel-level features within the intermediate layers of the feature encoders. SMA weights the generated masks to localize the referred object. In addition, the authors provide an auxiliary contrastive loss for facilitating the segmentation accuracy. The experiments show that the proposed CoupAlign mechanism achieves state-of-the-art referring image segmentation performance. [Strengths] \n+ The idea of implementing the CoupAlign mechanism is interesting.\n+ The manuscript is well organized, and most paragraphs are easy to follow.\n+ The references are adequate.\n\n[Weaknesses]\n- The experiment misses evaluating standard datasets, ReferIt and RefCOCO+, for comparison with other RIS methods. Such an incomplete comparison makes the evaluating experiments weak.\n- Missing some function descriptions harms the readers to reimplement the proposed model. For example, what are the functions of SwinStage and BERTStage in (3)? It is unclear why these functions relate to two different stages. Also, what is the function of CrossAttn in (8) and MaskGenerator in (9)? Please explain these functions' structures mentioned above, including the inputs and outputs, to clarify the model design.\n- The shapes of the projection matrices mentioned in section 3.2 are not correct.\n Though the proposed CoupAlign mechanism coupling WPA and SMA is interesting, the manuscript lacks some clear terminology definitions, as mentioned in [Weaknesses], and the experiments are not supported by sufficient standard evaluating datasets. I would like to see more information in the authors’ response. The authors adequately addressed the limitations and potential negative societal impact of their work.",
" This paper proposes a new model for referring image segmentation (RIS). The model takes in an image and a natural language description as input, and outputs the segmentation mask corresponding to the language input. The proposed approach features 1) a hierarchical fusion architecture in its encoder to gradually fuse information from the visual and language inputs (dubbed as Word-Pixel Alignment or WPA), and 2) a module to assemble segmentation masks from a set of candidate masks and the language representation (dubbed as Sentence-Mask Alignment or SMA). In terms of experiments, the authors train and test their models for RIS on two datasets -- RefCOCO and G-Ref, and achieved competitive results compared to previous SOTA methods. The authors also carried out ablation studies on various design choices and provided some qualitative visualizations to better understand the model's behavior. \n Strengths\n+ The task of segmenting images with arbitrary language input is interesting and could be a prominent task in near future. \n+ For presentation, 1) the illustration in Fig. 2 is comprehensive and helpful and 2) the visualization in Fig. 4 is very helpful for readers to better understand the model behavior. \n+ Overall, the proposed model is intuitive and achieves strong performance when compared to other SOTA methods on popular datasets.\n\nWeaknesses\n- Notations are confusing. For example, there are d_k, d_s, d_q, are they equal to each other? In Eq 11, L_g is with shape 1 x D while Q_o is with N x d_q, does this imply D == d_q? Also, in Eq 12, Y_1 is with shape H/4 x W/4 x d_s while Q_o is with size N x d_q, how can you multiply these two (unless d_q == d_s, which I did not see stated anywhere)? \n- Since both WPA and Cross Attention are fusing the visual and language information, why using different attention computation for these two? The design seems arbitrary and I did not see any justification for the difference. \n- Fig 3, are these examples representative or through cherry picking? Since I did not see why the proposed method would have a qualitatively different behavior compared to previous models (e.g. LAVT) by adding more intermediate level fusion. If this is not representative, I think it's better to not include these as they can be misleading (i.e. to make readers think the proposed model is qualitatively better/different). \n- L281-6, since this is where the main technical novelty lie, it would be better to have more detailed ablations on questions like how many layers of fusion is needed? Or which fusion layer contributes the most? \n\n############# POST REBUTTAL NOTE #############\nThe authors have properly addressed my comments above with the added experiments and paper revisions. With this, I will raise up my rating to accept the paper. \n\n\n\n\n\n\n\n\n - In Eq 5, V'_i is \"assembled\" from the language feature while L'_i is assembled from the visual feature? Should these two be swapped? \n- In Eq 14, is y^-_k sampled from the same image or from some other images? \n- L287, \"when SMA is removed\", how do you remove SMA while still generating the segmentation conditioned on the language input? This requires more elaboration. \n Yes",
" This paper proposes CoupAlign for referring image segmentation, which couples sentence-mask alignment with word-pixel alignment to enforce the mask-aware constraint. Experiments validate the effectiveness of the method. Strengths:\n\n- The paper is mostly clear and easy to follow.\n- The writing of this paper is good and the figures are appealing.\n\nWeaknesses\n\n- Novelty. The basic idea of the proposed CoupAlign framework is to couple sentence-mask alignment with word-pixel alignment for consistent and accurate segmentation results. However, the proposed Word-Pixel Alignment (WPA) module is just the same as the Language-Aware Deep Fusion proposed in GLIP [1]. The Auxiliary Loss can be regarded as a simple supervised pixel-level contrastive learning loss, which has been demonstrated by recent works [2]. The proposed Sentence-Mask Alignment (SMA) module can be a contribution, but I think it is just a cross-modal version weighted-summation of queries from the DETR-decoder-like mask generator. In conclusion, the novelty of this paper is limited.\n\n[1] Grounded Language-Image Pre-training, CVPR, 2022.\n\n[2] Exploring Cross-Image Pixel Contrast for Semantic Segmentation, ICCV, 2021.\n - The precision@0.5/0.7/0.9 values of the 3-rd and 4-th row are the same. Is this a typo error? Or it is just a coincidence.\n- The attention weights on different words are not analyzed in the WPA module. It is better to show if the model correctly attends to the referring motion or appearance words.\n- In table 4, I think there is one experiment missing, i.e., w/ Bi-WPA, w/o Uni-WPA, w/o SMA, w/o Aux Loss.\n- From Figure 2 and Table 4, I think the baseline without SMA is that directly summarizes all mask query embeddings without weights generated from the sentence features. What if evaluating the importance of each mask embedding from themselves? That is, generate the weights from the mask embeddings (e.g., using MLP like the equation (3) in [3]) without the sentence embeddings.\n- Could you please show the diversity of masks generate from mask queries by statistics?\n- The computational cost or runtime should be discussed.\n- Experiments on RefCOCO+ [4] are missing. Why?\n- Why the improvement in the test split of G-Ref is so small (i.e., 0.13%) but significant on Val split (i.e., 1.6%). Does it mean the proposed methods overfit the train and Val splits?\n- The improvements of WPA when inserting it into different stages of the encoders should be discussed. And what if only using the words features from the last stage of the language encoder instead of from the intermediate layers.\n\n[3] SeqFormer: a Frustratingly Simple Model for Video Instance Segmentation, Arxiv, 2021.\n[4] Modeling context in referring expressions, ECCV, 2016.\n The limitations and potential negative societal impact are discussed in the supplementary files."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
5
] | [
"SOaYDpPKvL",
"4micdEBIcDm",
"Jzcf1vWZT_h",
"UMuFrU0gPv5",
"yB6W1dO7OiM",
"-wLYglob9v3",
"stUL42Asjk-",
"SbHbNz-BrcS",
"voy10ZiwcrY",
"Kxu5HXiEMr-",
"nips_2022_5L-wxm0YLcZ",
"eTZ3yEH8DS",
"SufrqIMEU-",
"Qmz6SSE-XkK",
"FmnonUmP3hh",
"PoGEGBimkU",
"nips_2022_5L-wxm0YLcZ",
"nips_2022_5L-wxm0YLcZ",
"nips_2022_5L-wxm0YLcZ",
"nips_2022_5L-wxm0YLcZ"
] |
nips_2022_noyKGZYvHH | coVariance Neural Networks | Graph neural networks (GNN) are an effective framework that exploit inter-relationships within graph-structured data for learning. Principal component analysis (PCA) involves the projection of data on the eigenspace of the covariance matrix and draws similarities with the graph convolutional filters in GNNs. Motivated by this observation, we study a GNN architecture, called coVariance neural network (VNN), that operates on sample covariance matrices as graphs. We theoretically establish the stability of VNNs to perturbations in the covariance matrix, thus, implying an advantage over standard PCA-based data analysis approaches that are prone to instability due to principal components associated with close eigenvalues. Our experiments on real-world datasets validate our theoretical results and show that VNN performance is indeed more stable than PCA-based statistical approaches. Moreover, our experiments on multi-resolution datasets also demonstrate that VNNs are amenable to transferability of performance over covariance matrices of different dimensions; a feature that is infeasible for PCA-based approaches. | Accept | This paper proposes coVariance neural networks (VNN), a new graph neural network architecture that is more robust to perturbations in the covariance matrix. Most reviewers liked the new architecture, as the intuition is clearly presented and the experiment results are interesting (in particular the results demonstrating multi-scale transferability). There are some concerns that this new architecture can be viewed as a more direct modification of GNNs; I recommend that the authors clarify this relationship and emphasize the motivation. | train | [
"pR2roJ-6v1K",
"ciM0a9HDR9S",
"7lefPejZhnKx",
"uX5nJ0eSz65s",
"uPQ7XFRhCnk6",
"ha_L3DK_9_g1",
"O5GeMELKAA",
"Byfa96xBZVs",
"F1acbaHS4EW"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for considering our previous response. We address further concerns raised by the reviewer as follows.\n\n>*The analogy of VNNs to GNNs with CNN vs GNN does not quite hold since graph convolutions are SIGNIFICANTLY different from image convolutions.*\n\nPlease note that there exists a rich literature in signal processing that provides a unified view for convolutional operations, where the convolutions over time (1-D or 1-Dimensional), space (2-D), and graphs are **specific instances of the same mathematical object** that exploits the symmetries and relationships in different data domains [i]. For instance, in seminal works on graph signal processing, the motivation for graph convolutions comes from rewriting time convolutions as graph convolutions on a directed line graph.\n\nIndeed, a number of works on graph signal processing and graph neural networks discuss the bridge between the graph convolutional operations and 2D convolutions (pertinent to CNN); see Section IV-C in [ii], Section 2.2 in [iii], Fig. 1 and Section V-B in [iv]. Moreover, in natural images represented as graphs, the covariance kernels recover the classical convolutional operations over images [v]. \n\nIn summary, we find it indisputable that the graph convolutions are generalisations of 2-D image convolutions or equivalently, 2-D convolutions are special cases of graph convolutions. **If we have misunderstood your statement, we would greatly appreciate further clarifications.** \n\n\n*i. Puschel, Markus, and José MF Moura. \"Algebraic signal processing theory: Foundation and 1-D time.\" IEEE Transactions on Signal Processing 56.8 (2008): 3572-3585.*\n\n*ii. Ortega, A. et al. (2018). Graph signal processing: Overview, challenges, and applications. Proceedings of the IEEE, 106(5), 808-828.*\n\n*iii. Narang, Sunil K. et al. \"Graph-wavelet filterbanks for edge-aware image processing.\" 2012 IEEE Statistical Signal Processing Workshop (SSP). IEEE, 2012.*\n\n*iv. Wu, Zonghan, et al. \"A comprehensive survey on graph neural networks.\" IEEE transactions on neural networks and learning systems 32.1 (2020): 4-24.*\n\n*v. Bronstein, M. M. et al. (2017). Geometric deep learning: going beyond euclidean data. IEEE Signal Processing Magazine, 34(4), 18-42.*\n\n>*If the dimension of the dataset changes, I believe that VNNs could incorporate this to a limited extent in the sense that the new dimension would be connected to a limited number of original dimensions via the addition of the node in the graph. But this would not accurately re-evaluate the dependencies on other nodes without retraining. Note that PCA is already \"transferable\" to new datapoints in a way that most non-linear dimensionality reduction methods are not.*\n\nWe refer the reviewer to the notion of transferability of GCNs as discussed in [vi], where the graphs are considered to be random instances sampled from an object called graphon. In this sense, graphs that are sampled at different resolutions retain an information structure that can be exploited **without re-training** in GCNs. Our experiments demonstrate this convincingly for VNNs over different-resolution datasets collected over the brain, where the brain surface can be thought of as a continuum in the spirit of a graphon object. We will clarify this in the paper.\n\n\nWe note that there is no trivial way through which PCA can be transferred to new data points of a different dimensionality. 
For instance, PCA performed on the FTDC100 dataset is not practically meaningful for the FTDC300 or FTDC500 datasets. We reiterate that classical dimensionality reduction methods (linear or non-linear) do not have this property. So respectfully, we do not follow the reviewer’s argument that *‘PCA is already \"transferable\" to new datapoints’* in this context. **Further clarifications by the reviewer on this aspect will be much appreciated.**\n\n*vi. Ruiz, Luana et al. \"Graphon neural networks and the transferability of graph neural networks.\" Advances in Neural Information Processing Systems 33 (2020): 1702-1712.*\n\n>*If it admits non-linearity then this has to be related to non-linear dimensionality reduction which is not taken up in this paper.*\n\nWe clarify that we study VNN as a non-linear information processing architecture (including its stability and transferability properties and connections with PCA), whose impact and applications go well-beyond that possible for dimensionality reduction tasks. \n\nWe also note that our experiments accommodate non-linear relationships for PCA as we use ‘rbf’ kernel after performing PCA as a baseline. Since VNNs and PCA exploit the same covariance matrix, we believe that our experiments provided a fair comparison between VNNs and PCA-based methods while accommodating non-linearity. \n\n\n\n\n",
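Editor's aside on the claim above that 1-D time convolution is a graph convolution on a directed line graph: the identity is easy to verify numerically. The sketch below is our own illustration (all names and sizes are hypothetical); it builds the shift operator of a directed cycle and checks that the polynomial graph filter reproduces circular convolution.

```python
import numpy as np

# Shift operator of a directed cycle on n nodes: (S @ x)[i] = x[(i-1) % n],
# i.e. the adjacency matrix of a "line graph with wraparound".
n, T = 8, 3
S = np.roll(np.eye(n), shift=1, axis=0)
x = np.random.randn(n)   # a graph signal (here: a length-n time series)
h = np.random.randn(T)   # filter taps

# Graph convolution: y = sum_t h[t] * S^t @ x
y_graph = sum(h[t] * (np.linalg.matrix_power(S, t) @ x) for t in range(T))

# Classical circular convolution of x with the zero-padded filter h
y_time = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, n)))

assert np.allclose(y_graph, y_time)  # the two operations coincide
```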
" The analogy of VNNs to GNNs with CNN vs GNN does not quite hold since graph convolutions are SIGNIFICANTLY different from image convolutions. \n\nIf the dimension of the dataset changes, I believe that VNNs could incorporate this to a limited extent in the sense that the new dimension would be connected to a limited number of original dimensions via the addition of the node in the graph. But this would not accurately re-evaluate the dependencies on other nodes without retraining. Note that PCA is already \"transferrable\" to new datapoints in a way that most non-linear dimensionality reduction methods are not. \n\nIf it admits non-linearity then this has to be related to non-linear dimensionality reduction which is not taken up in this paper. ",
" We thank the reviewer for their valuable feedback. We have added an overview of GNNs that summarizes different GNN architectures and robust PCA as related work in Appendix E in the supplementary material. If the paper is accepted, we will add this as Subsection 1.3 in the introduction in the final version. \n\nThe computational cost for a covariance perceptron defined in (14) is given by $O(m^2 T F_{\\sf in} F_{\\sf out})$, where $T$ is the maximum number of filter taps in any filter in its associated filter bank and $m$ is the covariance matrix size. Therefore, scalability to large covariance matrices is indeed the most challenging aspect due to increased computational complexity in terms of $m$. The transferability property of VNNs as illustrated by our experiments in Section 5.2 addresses the issue of scalability, where VNNs can be trained on a ‘coarser’ dataset first and then, the model is transferred to a higher dimensional/higher resolution dataset while retaining performance on the inference task. We also note that the factor of $m^2$ in $O(m^2 T F_{\\sf in} F_{\\sf out})$ is driven by the maximum density of the covariance matrix and therefore, could potentially be reduced by adopting sparse covariance estimation like approaches. \n\nWe have added a brief discussion on the scalability of VNNs to large graphs as a potential limitation in the revised version of the paper (see Remark 2 in the revised manuscript). Our work does not have any negative social impacts. \n",
" We thank the reviewer for their evaluation of our paper. The computational cost in any layer of VNN is determined by the cost of the convolution operation. Therefore, the computational cost for a covariance perceptron defined in (14) is given by $O(m^2 T F_{\\sf in} F_{\\sf out})$, where $T$ is the maximum number of filter taps in any filter in its associated filter bank and $m$ is the covariance matrix size. Moreover, there is a computation cost associated with the calculation of the covariance matrix, which is given by $O(mn^2)$, for $n$ number of samples when $n>m$. We also note that the factor of $m^2$ in $O(m^2 T F_{\\sf in} F_{\\sf out})$ is driven by the maximum density of the covariance matrix and therefore, could potentially be reduced by adopting sparse covariance estimation like approaches.\n\n\nIn practice, for a VNN with 2 layers, 2 filter taps per layer and 44 features per dimensions, and trained over 100 epochs, the total training time is:\n* 22.86 seconds for FTDC100 (m=100)\n* 47.86 seconds for FTDC300 (m=300)\n* 89.73 seconds for FTDC500 (m=500)\n\nIn addition, the aforementioned observations also help elucidate the contribution of transferability property of VNNs in enabling scalability to high resolution datasets by training the model on coarser/low resolution data. \n\nIn the revised paper, we have briefly discussed the computational complexity in Remark 2 as well as the role of transferability in scalability of VNN. \n\n\n",
" In this comment, we continue our response to address the following concerns:\n\n3. **Motivation behind studying VNN separately from GCNs:**\nWe agree that VNNs are indeed implemented as GCNs with covariance matrices as graphs, where the data features act as nodes and data points act as signals on the nodes. Due to the ubiquity of PCA in statistical analysis and widespread use of covariance matrices to model relationships in various domains, we believe that the link between VNNs and PCA is a significant observation that merits the study of VNNs independently of GCNs. We also remark that independent study of related inference approaches to bring into focus the significant concepts or domain-specific novelties have precedence in both machine learning and statistics. For instance, graph neural networks and convolutional neural networks (CNN) are commonly studied independently even as the graph convolutions supersede convolution operations in CNN and the images on which CNNs primarily operate can be thought of as a grid graph. Furthermore, application-specific variations of PCA are studied independently, such as Karhunen–Loève transform in signal processing [d] and empirical orthogonal functions in atmospheric science [e]. We have clarified the relationship between VNN and GCN in the related work in Appendix E in the revised manuscript, which will be included in the introduction as Section 1.3 in the final version of the paper. \n\n _[d] Dony, R. \"Karhunen-loeve transform.\" The transform and data compression handbook 1.1-34 (2001): 29._\n\n _[e] Hannachi, Abdel, Ian T. Jolliffe, and David B. Stephenson. \"Empirical orthogonal functions and related techniques in atmospheric science: A review.\" International Journal of Climatology: A Journal of the Royal Meteorological Society 27.9 (2007): 1119-1152._\n4. **VNNs and linear SVD approximation:**\nVNNs as neural networks are capable of inference tasks with non-linearity. This is because every layer of VNN consists of a covariance filter and a pointwise nonlinearity function (e.g. ReLU). Therefore, VNNs subsume any inference tasks that use linear SVD approximation as the preliminary step. \n\n We also re-iterate that traditional feature selection methods like _PCA and SVD are not transferable_, i.e., if the dimension of the dataset changes, PCA and SVD need to be performed again and cannot leverage the features extracted on a dataset of a different dimension. The transferability property of VNNs is inherited from GCNs and we illustrate it in our experiments in Section 5.2.\n\n5. **Relevance of Correlation/Covariance Matrices:**\nIt is true that covariance matrices cover only linear relationships between data features. However, we argue that our results regarding stability and transferability of VNNs in this context will be of interest to a wider audience that relies on PCA and correlation matrices for data analysis. Correlation or covariance matrices are very commonly used as graphs to model the brain connectivity and as inputs to graph neural networks when applied in neuroimaging applications and bioinformatics. This motivated our experiments in Section 5. Besides this application, correlation matrices are used for analyses in diverse fields such as traffic forecasting [f], environment monitoring [g], and natural language processing [h], to name a few. \n\n \n _[f] Mallick, Tanwi, et al. \"Dynamic graph neural network for traffic forecasting in wide area networks.\" 2020 IEEE International Conference on Big Data (Big Data). 
IEEE, 2020._\n \n _[g] Cotta, Higor Henrique Aranda, Valdério Anselmo Reisen, and Pascal Bondon. \"Identification of redundant air quality monitoring stations using robust principal component analysis.\" Environmental Modeling & Assessment 25.4 (2020): 521-530._\n\n _[h] Malekzadeh, Masoud, et al. \"Review of graph neural network in text classification.\" 2021 IEEE 12th Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON). IEEE, 2021._\n\n6. **Computational complexity as potential limitation:** The computational cost for a covariance perceptron defined in (14) is given by $O(m^2 T F_{\\sf in} F_{\\sf out})$, where $T$ is the maximum number of filter taps in any filter in its associated filter bank and $m$ is the size of covariance matrix. Therefore, scalability to large covariance matrices is the most challenging aspect due to increased computational complexity in terms of $m$. We have added a brief discussion on the scalability of VNNs to large graphs as a potential limitation in the final version of the paper (see Remark 2 in the revised manuscript). \n\n\nWe hope that we addressed the reviewer's concerns sufficiently, in which case, we would be grateful if your rating of our paper could be re-evaluated. We would be happy to clarify any additional concerns.\n",
" We thank the reviewer for their insightful feedback. We also find the connections made by the reviewer between kernel PCA and covariance Fourier transform very interesting. We discuss the comparison between robust PCA and VNNs in this comment. Other concerns on motivation, relevance of covariance/correlation matrices and comparison with linear SVD are addressed separately in Part 2. \n\n1. **Robust PCA and VNNs have different objectives:** Robust PCA and the method in our paper tackle fundamentally distinct problems and therefore, the notions of stability or robustness in these approaches are different. Specifically, the notion of stability in our work pertains to outputs of VNNs being robust to the statistical uncertainty in the covariance matrix estimation due to finite sample size. In contrast, robust PCA using principal components pursuit aims to recover low rank structure in a given, high-dimensional data when the data is corrupted by gross errors or outliers. We further elaborate on the differences between robust PCA and VNNs further by discussing the settings in robust PCA literature from a few seminal works in [a,b,c] and VNNs separately.\n\n **_Robust PCA_**: Let’s assume that the data matrix is given by $X$. Robust PCA is typically studied when $X$ conforms to the decomposition $X = L_0+S_0+Z_0$, where $L_0$ is the low rank structure in the data $X$, $S_0$ is a matrix that models _gross_ outliers (e.g. due to missing data, adversarial behavior, defects in in data collection), and $Z_0$ is random noise [b,c]. The robust PCA framework using principal component pursuit aims to find an estimate $L$ for $L_0$ by solving an optimization problem that is guaranteed to recover $L_0$ perfectly under assumptions on the sparsity of singular vectors or principal components of $L_0$ and the structure of $S_0$ (for instance, $S_0$ is not desirable to be low rank in [a,b]). In effect, robust PCA aims to recover the PCA decomposition for $X - S_0$ while being stable to noise $Z_0$ [b,c].\n\n\n **_VNN_**: In our paper, we discuss potential instability of PCA-based statistical models due to perturbations in the data, for e.g., by adding a new sample to the dataset. However, we note that in contrast to robust PCA, the perturbations in the data are not driven by corrupted data points. Furthermore, ill-defined eigenvalues and eigenvectors of the sample covariance matrix can be the source of instability in statistical inference in this scenario.\n\n Our theoretical contribution can be summarised as follows: _given a sample covariance matrix $\\hat C_n$ of data $X$, we have a VNN output given by $\\Phi(X; \\hat C_n, {\\cal H})$, where ${\\cal H}$ is the set of covariance filters learnt for the inference task. Through our analysis, we establish that if $\\hat C_n$ is replaced with another sample covariance matrix $\\hat C_m$ (estimated from a different data matrix with the same underlying distribution as $X$), the VNN output will be stable, i.e., the difference between $\\Phi(X; \\hat C_n, {\\cal H})$ and $\\Phi(X; \\hat C_m, {\\cal H})$ will be bounded._\n\n To establish this result, we leverage the perturbation theory of sample covariance matrices and identify that the closeness of eigenvalues of the covariance matrix and kurtosis of the underlying distribution for data matrix $X$ determine the design of filters ${\\cal H}$ that ensures the stability of VNN outputs. 
Therefore, stability is an inherent property of VNNs and unlike robust PCA, we do not perform or leverage any decomposition or denoising in the training procedure. We have summarised the above differences between robust PCA and VNN in the Related Work section in Appendix E in the revised manuscript. \n\n\n _[a] Candès, Emmanuel J., Xiaodong Li, Yi Ma, and John Wright. \"Robust principal component analysis?.\" Journal of the ACM (JACM) 58, no. 3 (2011): 1-37._\n\n _[b] Zhou, Z., Li, X., Wright, J., Candes, E., & Ma, Y. (2010, June). Stable principal component pursuit. In 2010 IEEE international symposium on information theory (pp. 1518-1522). IEEE._\n\n _[c] Xu, Huan, Constantine Caramanis, and Sujay Sanghavi. \"Robust PCA via outlier pursuit.\" Advances in neural information processing systems 23 (2010)._\n\n\n2. **Robust PCA is not transferable:** Furthermore, there is no notion of ‘transferability’ in robust PCA, i.e., there is no possibility of generalising the framework to datasets of different dimensions. Our experiments on multi-resolution datasets in Section 5.2 show that VNNs learnt on a dataset with dimension 100 can be transferred to a higher resolution dataset of 500 without any retraining and vice-versa. We have also clarified this in Section 5.2. \n\n\n\n",
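For reference (our addition; the formulation is standard and matches the cited work [a]), the principal component pursuit program being contrasted with VNNs above is the convex problem

```latex
\min_{L,\,S}\ \|L\|_{*} + \lambda\,\|S\|_{1}
\quad \text{subject to} \quad L + S = X,
```

where $\|\cdot\|_{*}$ is the nuclear norm and $\|\cdot\|_{1}$ the entrywise $\ell_1$ norm; the stable variant in [b] relaxes the constraint to $\|X - L - S\|_F \le \delta$ to absorb the noise term $Z_0$.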
" In this paper, the author first makes an observation that the filters of GNNs show similarities with principal component analysis (PCA), in which data is projected on the eigenspace of the covariance matrix. Then they proposed the covariance neural network (VNN) that operates on sample covariance matrices as graphs are motivated by this observation. They theoretically demonstrate VNN stability to perturbations in the covariance matrix, indicating a qualitative advantage over traditional PCA-based data analysis approaches that are prone to instability due to close eigenvalues and principal components. The author also makes several real work experiments to empirically prove their statement. \nStrengths:\nIn this paper, the author links the coVariance filter with the graph convolution filter in GNN and proposes a DL architecture based on the coVariance filter. The computing covariance matrix for a large graph is always computation expensive, by alternatively computing graph Fourier transformation show potential for further GNN development.\n\nThe author also theoretically analyzes the stability of the covariance filter and covariance graph neural network and empirically evaluates VNNs for transferability and stability.\n\n\nWeakness:\n\nThere is no related work section in this paper. Although those who study GNN are familiar with GNN, PCA, and graph Fourier transformation, it would be better to give an overview of different kinds of GNN and summaries them.\n\n How about the scalability of VNN? The experience of this paper is mainly focused on 'small' graphs. When it comes to large graphs, is it still computationally feasible? When VNN is used in large graphs, what will be the biggest challenge? There is no limitation and social impact in this paper. According to Neurips instructions, hope these two parts will appear in a future version.",
" Motivated by similarities between principal components analysis (PCA) and graph convolutional filters in GNN, the authors introduce a new GNN architecture that uses sample covariance matrices as the graph representation of the data. They develop “coVariance” filters analogous to graph convolutional filters in GNNs, show that PCA is a special case of applying such filters, and propose a deep learning architecture based on “coVariance” filters they call “coVariance Neural Networks” (VNNs). Using perturbation theory for sample covariance matrices they theoretically establish the stability of VNNs to perturbations in the sample covariance matrix (in terms of number of samples). They then empirically evaluate VNN stability relative to PCA on synthetic and real world data, and show that VNNs can be used transferably on one multiresolution dataset. Strengths:\n- The work is original, making an interesting connection between GNNs and PCA and rigorously following this through the development, theoretical stability, and empirical validation of VNNs.\n- The results are likely to be of interest to a broad audience, given the widespread use of PCA and prevalence of correlation matrices across disciplines. \n- The theoretical stability analysis is sound and backed up by validation on synthetic and interesting real world neuroimaging data.\n- The demonstration of multi-scale transferability is very cool, and goes beyond what PCA is capable of.\n- The manuscript is clearly written and code is provided for replicability.\n\nWeaknesses: \n- I find the paper to have few weaknesses. \n I would like to know more about model training time and computational cost in terms of covariance matrix size and number of samples. I think the authors adequately address limitations in their work.\n",
" The authors propose a coVariance Neural Network (VNN) in analogy to graph neural networks based on covariance matrices of datasets, similar to what is used in PCA. The authors also introduce a “covariance transform” which projects a new datapoint into the PCA space by projecting to the eigenvectors of the covariance operator $UX$ and a covariance filter which learns coefficients of polynomial of the covariance matrix. They show that using covariance filters one can recover the covariance transform. They also show that this polynomial filter is stable with respect to perturbations in the sampled data from which the sampled covariance matrix is taken. Strengths:\n\nThe authors correctly note the similarity between the covariance matrix and an adjacency matrix of a graph, i.e., positive semidefinite matrices whose eigenvectors and values have specific properties in terms of describing the heterogeneity in the data. In fact it can be argued the other way that graph fourier transforms and graph signal processing are the result of a “kernel trick” based on PCA which is far older. So in this sense GNNs/GCNs are a more general class of networks that subsumes VNNs. \n\nThe note that this results in a stable PCA is interesting although over the years there have been robust forms of PCA that have been developed including variations of Principal component pursuit.\n\nWeakness:\n\nHowever, I do not believe their proposal constitutes a new neural network of any sort. I believe this is just an application of GCNs to the graph consisting of data features as nodes and data points as signals on the nodes (rather than vice versa). While this may be desirable for some types of analysis (see Tong et al. IEEE ICCASP 2022 for analysis of cells as signals over gene graphs), I do not believe it has advantages over robust PCA for this specific application. \n\nThe paper should potentially be reconfigured to simply talk about an application of GCNs/GNNs to feature covariance matrices and situations where that could be useful. \n\nFurther a key weakness here is regarding relationships between data features as defined by covariances. This is a strictly linear relationship, if this was changed to mutual information or some other relationship type then indeed a more complex relational graph would be necessary and this is precisely where GNNs/GCNs have contributed. \n\nOther uses of the covariance filter seem to amount to low-rank approximations done via SVD and again I don't see much advantage in using a neural network for this kind of linear operation. \n\n What are other applications of the covariance filter that would not be subsumed by SVD low rank approximations? I do not see a discussion of limitations in this manuscript. "
] | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"ciM0a9HDR9S",
"uPQ7XFRhCnk6",
"O5GeMELKAA",
"Byfa96xBZVs",
"F1acbaHS4EW",
"F1acbaHS4EW",
"nips_2022_noyKGZYvHH",
"nips_2022_noyKGZYvHH",
"nips_2022_noyKGZYvHH"
] |
nips_2022_kB9jrZDenff | Unsupervised Cross-Task Generalization via Retrieval Augmentation | Humans can perform unseen tasks by recalling relevant skills acquired previously and then generalizing them to the target tasks, even if there is no supervision at all. In this paper, we aim to improve this kind of cross-task generalization ability of massive multi-task language models, such as T0 and FLAN, in an unsupervised setting. We propose a retrieval-augmentation method named ReCross that takes a few unlabelled examples as queries to retrieve a small subset of upstream data and uses them to update the multi-task model for better generalization. ReCross is a straightforward yet effective retrieval method that combines both efficient dense retrieval and effective pair-wise reranking. Our results and analysis show that it significantly outperforms both non-retrieval methods and other baseline methods. | Accept | This paper presents an approach called ReCross that improves zero-shot task performance by retrieving and fine-tuning on examples of similar supervised tasks. This method is shown to help multi-task finetuned models when evaluated zero-shot on novel tasks.
The interesting finding of the paper is that fine-tuning on relevant examples from different but possibly related tasks can help. This finding can help researchers in the areas of zero-shot learning and multitask models.
Otherwise, the method, although conceptually simple, includes significant additional machinery, which likely makes it practically difficult to use, as the reviewers point out. Similarly, the relative contribution of the re-ranking step seems small, and the step appears to add significant complexity. As one of the reviewers points out, the paper and the method may be clearer without that step.
The review process included a lengthy and productive discussion, which helped the paper clarify and improve on several points. As a result, two of the reviewers increased their scores. There is now consensus among the three reviewers that the paper should be accepted.
| train | [
"APpb-aofGO",
"U7Cvnnei0JH",
"veRU2Qckoax",
"BXB1OEJt5To",
"i2qNx3irkA",
"13LUkU81gd",
"C81PrmuHV91",
"cyrJSSD0zTp",
"e7TLPAfvo55Y",
"IdzEMFLyPYx",
"si2TefgGDf",
"8zHjw6dxseu",
"J6y02YWJdA",
"a1AesqcUEA",
"n6vt8rRpASL",
"pV4bG3CeeiWR"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for your detailed reply and raised score! \n\nWe will revise the final version accordingly based on these valuable suggestions and comments. Specifically, we will reframe the introduction of the reranker such that we have more space to add our analysis to the main paper. We will also rephrase the conclusion about the random retrieval such that the analysis will focus more on eliminating confounds instead. \n\nThank you very much again for your thoughtful review and detailed discussion! :D ",
" Thanks for the follow-up. Looking forward to the additional results and discussion that you plan to add. ",
" Thanks for the updates and the detailed response.\n\n- The qualitative analysis seems interesting and, agreed, is probably sufficient for this paper.\n- I hadn't noticed that the re-ranker helps to reduce the stdev in many cases, that seems like a useful outcome, worth staying in the main paper I suppose. It is still true that the gains from the re-ranker are just not very good or very consistent. The paper would just be a lot stronger if the vanilla recross method -- which provides better improvements over baselines and frankly, is a lot more elegant -- were the focus of the paper. And the re-ranker can be introduced as a secondary extension -- one of the signals you mentioned, that can help nudge scores to be higher sometimes. This mostly just needs some reframing; the base ReCross method without reranking is quite a nice approach and will probably draw majority of the interest. That being said, I'll certainly raise my score so the paper isn't blocked by this.\n- The conclusion from the random retrieval baseline is still not quite right and calling the max score from the trials \"lucky\" seems really not right. Random retrieval is an important control to eliminate confounds and help determine the true value added by smarter retrieval. Reading too much into the result of a single trial seems gnarly and I'd suggest against it. \n",
" Thank you for clarifying the differences between SoftEM and EM, as well as for providing extra analyses in the Appendix.",
" Dear Reviewer NtmR,\n\nThank you very much for reading our response and raising your rating for the ReCross paper. We just noticed there were a few additional comments (\"EDIT after rebuttal\") in the main review. To make the discussion clearer, we list four scenarios for the training and testing stages:\n\n1. Upstream Training with Templates + Unsupervised Generalization (N inputs with templates)\n2. Upstream Training with Templates + Few-Shot Generalization (N inputs with templates + N labels)\n3. Upstream Training with Templates + Few-Shot Generalization (N inputs ***w/o*** templates + N labels)\n4. Upstream Training ***w/o*** Templates + Few-Shot Generalization (N inputs ***w/o*** templates + N labels)\n\nSetting 1 is our main experiment in the paper, and Setting 2 is what we add in Appendix D.2. We show that ReCross can improve the BART0's performance in both Setting 1 and Setting 2. For Setting 3, our preliminary results show that the performance of BART0 is much worse than in Setting 2 and we think it is because the training and generalization stage is inconsistent with the model. For Setting 4, the base model was even worse because it is less capable of using the cross-task information from the upstream data, and even adding meta-learning elements in the upstream learning cannot help much (we refer to the CrossFit paper for similar experiments).\n\nAs for the cost of annotation cost, we agree that adding 16 annotation labels for one given task is possible, while the unsupervised setting is more general if we consider the scalability and the efficiency in the inference stage. Therefore, Setting 1 is less expensive than 2. Given that the base model performance in Setting 3/4 is not at the same level as in Setting 1/2, so we did not consider them in our evaluation. We will add a comprehensive analysis including these two settings as well to provide more discussion. \n\nFinally, we would like to argue again that the main focus of our paper is not on a new setting, but on the retrieval augmentation method (ReCross) that can help cross-task generalization. Therefore, our evaluation used the settings, which most prior works like T0, FLAN, and other recent instruction-based generalization methods focus on. We love the discussion with you and will definitely add new empirical results and new discussion in our final version of the paper.\n\nThank you very much again! \n\nBest regards,\n\nAuthors of the ReCross paper",
" Dear Reviewer fTmN for the ReCross paper,\n\nThank you for your time and efforts again in reviewing our paper. We kindly remind you that the discussion period will end very soon. We believe that we sincerely and successfully address your comments by covering all the questions.\n\nIf you have any further concerns or questions, please do not hesitate to let us know.\n\nWe understand that you are very busy, so we would appreciate it a lot! Thank you very much! :D\n\nSincerely,\n\nAuthors of the ReCross paper",
" Dear Reviewer YUN8 for the ReCross paper,\n\nThank you for your time and efforts again in reviewing our paper. We kindly remind you that the discussion period will end very soon. We believe that we sincerely and successfully address your comments, with the results of the supporting experiments.\n\nIf you have any further concerns or questions, please do not hesitate to let us know.\n\nWe understand that you are very busy, so we would appreciate it a lot! Thank you very much! :D\n\nSincerely,\n\nAuthors of the ReCross paper",
" Hi Reviewer YUN8,\n\nThank you again for your review! Based on your thoughtful feedback, we wrote a detailed rebuttal about the metrics. To make it easier for you to check out the EM-based results, we attach a brief version of the new Table 7 (in Appendix D.1) below in this comment.\n\nIn light of the imminent discussion deadline (Aug. 9), it would be awesome to know if our rebuttal sufficiently addressed your concerns and questions. If your concerns were well addressed, would you please consider raising the score for the ReCross paper? By the way, Reviewer NtmR has raised the rating to 6 (weak accept) after the rebuttal.\n\nWe understand that you are busy, so we would appreciate it a lot! Thank you very much! :D\n\nSincerely, \n\nAuthors of the ReCross paper\n\n\n| task | T0_3B | BART0 | Random | SBERT | ReCross-init. | ReCross | \\Delta |\n|------------------|--------|--------|--------|--------|---------------|---------|--------|\n| Overall - mean | 36.43% | 33.82% | 34.27% | 34.43% | 37.01% | 37.47% | 3.65% |\n| Overall - Median | 36.43% | 33.82% | 34.90% | 34.91% | 36.62% | 37.17% | 2.34% |\n| Overall - Min | 36.43% | 33.82% | 31.33% | 32.91% | 36.22% | 36.93% | 1.05% |\n| Overall - Max | 36.43% | 33.82% | 35.35% | 35.79% | 38.41% | 38.75% | 1.70% |\n",
" Hi Reviewer fTmN,\n\nThank you again for your review! Based on your thoughtful feedback, we wrote a detailed rebuttal covering the following points:\n1. more analysis to understand the performance gains by ReCross \n1. the importance of the role of reranking module \n1. the clarification of our statement about the comparisons between random retrieval and BART0 model \n1. our revisions according to your great editing suggestions \n\nIn light of the imminent discussion deadline (Aug. 9), it would be awesome to know if our rebuttal sufficiently addressed your concerns and questions. If your concerns were well addressed, would you please consider raising the score for the ReCross paper? By the way, Reviewer NtmR has raised the rating to 6 (weak accept) after the rebuttal.\n\nWe understand that you are busy, so we would appreciate it a lot! Thank you very much! :D\n\nSincerely, \n\nAuthors of the ReCross paper",
" Dear reviewers,\n\n\nThank you for your thoughtful and positive reviews! We are happy to hear that you liked our contributions to unsupervised cross-task generalization. We appreciate all of you for your __positive comments__ highlighting the strengths of our work for a summary:\n* __YUN8__: a simple but effective method, impressively works, reasonable set of baselines and an informative choice of ablations; contribution of the reranker; adequately addressed the limitations.\n* __fTmN__: “the entire setup being explored is an important and high-impact problem to study”; “a compelling solution worth exploring”; quite interesting result; \n* __NtmR__: clearly written, lucidly presented, sound method, non-trivial performance gains\n\n\nWe also sincerely thank reviewers for your constructive feedback and questions to improve our manuscript. We have __addressed all the questions__ raised by reviewers with new experiments during this rebuttal period. \n\n\nWe summarize how we address the main questions as follows:\n* __YUN8__: We added the empirical results using the standard EM metric in Appendix D.1. We revised the paragraphs in the main paper about SoftEM for more clarifications. \n* __fTmN__: In our response below, we summarized our analysis to better understand the performance gain and model behavior in Appendix (A~B).\n* __fTmN__: We used a detailed paragraph to illustrate the importance of the reranker by showing the relative improvement over the ReCross without reranking. \n* __fTmN__: We rephrased a few paragraphs to make them more concise and refined Figures 1 and 2 for better visualization.\n* __NtmR__: We discussed the cost-effectiveness of ReCross in the unsupervised setting from multiple perspectives in our response below.\n* __NtmR__: We added the result comparison under the few-shot settings in Appendix D.2. and found that ReCross can also boost the performance of few-shot learning for task generalization. This finding shows that our approach improves performance even in settings where few-shot data is available or easy to generate. \n\n\nWe submitted our __revised draft and supplementary file__ (i.e., Appendix) that addressed individual concerns. We marked changed parts with blue fonts. To make it more convenient for reviewers to check our appendix, we only uploaded the appendix pdf file as the supplementary file, and our previous code zip is still accessible in the revision history. \n\n\nThank you for your consideration,\n\nAuthors",
" Thank you for your thorough review! We are pleased to hear that you feel that ReCross is technically solid. We understand the concern about the usefulness of the unsupervised setting (vs. the few-shot setting). We provide more clarification below and have done some additional experiments. Please see the new Appendix D.2 for the experiments and analysis. \n\nIn this response, we list the reasons why we think our setting is practical, and show that ReCross can also boost the performance under the few-shot setting with new empirical results. \n\n## Practicality of unsupervised setting\n\n### Cost of obtaining task labels\n\nThe unsupervised setting in the paper does not require any human annotation of labels. For some tasks (NLG tasks in particular, e.g., summarization), the expected output (label) are open-ended and possibly lengthy and thus human annotation is much more expensive and time-consuming. Also, few-shot learning must ask humans to label examples for __each__ new task, and it is thus less practical when there are a large number of emerging tasks from the users. Meanwhile, ReCross requires only a natural-language task template, which does not require potentially expensive manual annotation or domain expertise.\n\n### Scalability & Real-Time response\n\nDeploying the ReCross pipeline is a one-time process. All we need to do is to pre-compute the upstream index with LM and configure the reranker (a simple masked LM) by running our script. In production, once the users input the examples with NL instructions, we do not need to wait for any human annotations anymore, and thus it is much more efficient in the long run at scale. \n\n### One query example at a time \nIn the scenarios where users only provide one query example and want to get its label from the model, ReCross also shows great performance (i.e., |Q|=1 in Table 3). Now, users aim to get the labels from the model because they don’t know the truth. It is then impractical to assume there are a few labeled data from the users too. \n\n## Emprical studies\n\n__The unsupervised ReCross performance is comparable to few-shot learning with label annotations.__ In Appendix D.2, we report the performance of directly fine-tuning BART0 with the labeled query examples. Although it is an unfair comparison with our previous ReCross results, we found that they are comparable. Plus, we find it is also very challenging and time-consuming to tune the hyper-parameters (because there is no dev set) and know when to stop to avoid overfitting. This again suggests that the unsupervised setting is more practical in production. \n\n__ReCross can boost few-shot performance.__ More importantly, the ReCross framework does not conflict with the few-shot setting. Given a labeled query set for a target task, retrieved examples from the ReCross can still benefit few-shot learning as additional training data. We designed two simple methods for applying ReCross under the FS setting and report the empirical results in Appendix D.2. It turns out that ReCross can also boost the performance under the FS setting by about 3 points.\n\nAll in all, we believe that the problem setting studied in this paper can be very common and less costly than the few-shot setting when there are a large number of users and emerging tasks. 
Plus, our method ReCross can also boost performance in the few-shot setting settings.\n\n## Cost of creating prompt templates \n\nThe prompt templates used in the paper are natural language (NL) instructions from the PromptSource repo, which are part of the input texts. They are the foundations of current unsupervised cross-task generalization methods (including T0, FLAN, instruction-GPT, and ours). And it’s not that difficult for users to create a template for a target task. Instead, only when we have such NL elements, the users can naturally interact with the LMs by describing the task in their own words. For example, “select the best choice as the answer to the question: xxx..” or “Is the review supportive or not? Review: XXXX. Yes or No.” \n\nWe can also consider problems such as summarization or tasks which rely on domain-specific knowledge as examples of tasks where forming a simple prompt template or gathering inputs is substantially easier than building an effective number of input/output pairs for few-shot learning. Although there may be some sensitivity to the chosen prompt, prior work (e.g., Scao and Rush 2021, “How Many Data Points is a Prompt Worth?”) suggests that prompt selection is not a dominant driver of performance.\n\nWithout such NL templates, then we will need to either put the task names as the prefix of the input sequences or just give no signals for the LMs to distinguish/connect across tasks. Neither method can hardly enable cross-task generalization without any training data. We also studied the importance of the NL templates in retrieval as an ablative study in Appendix A.2. \n\n",
" Thank you for your thorough and constructive review! We are pleased to hear that you feel that our ReCross is a compelling solution to an important and high-impact problem to study. We also appreciate your concerns and questions, each of which we address in detail below.\n\n## More analysis\n\nIn Appendix, we presented some analysis to help understand “how” and “when” the retrieval augmentation works: Figure 4, Table 5, Appendix A.1~A.2, and Appendix B.\n\nWe investigate whether the utility of upstream examples in retrieval augmentation is related to the similarity in terms of the task formats. From Appendix A.1, we found some counterintuitive results. For example, if removing MCQA upstream tasks from the upstream index, then the ARC target task can have an even better performance, although it is an MCQA-formatted task. Thus, we hypothesize that similarity in terms of reasoning types is more important than format similarity for retrieval augmentation. After all, the upstream model has been already trained to work with these basic task formats. Re-learning the tasks of the same format might lead the model to overfit the seen domains. Additionally, to provide a more concrete analysis, we also present case studies with two specific tasks (CB and SQUADv2) in Appendix B.\n\nWe think the natural language instructions in the templates are necessary for ReCross to get impressive results. Therefore, we investigated two ways of perturbing the instructions and monitoring the performance changes in Appendix A.2. We find it is indeed true that perturbations of the instructions will lead to much worse performance. \n\nWe will move these analyses to the main paper once given more space limit in the final version. We believe that a rigorous, principled way of analyzing the correlation between query and (optimal) retrieval examples will be a great future direction, given the strong evidence from this paper that ReCross works so well. \n\n## The importance of the ReRanker.\n\n__Performance__: In Table 1, we can see that using the reranker not only improves the mean, min, and max but also reduces the std. Specifically, the overall mean is improved by 1 point in SoftEM (i.e., about 2.3% relative improvement compared with no-reranking ReCross) -- two tasks get 7% relative improvement, four tasks get 2~4% gain, and three others get 1% gain. The min of the overall performance (when using multiple query sets) is improved by 4% relatively. These results show that re-ranking can consistently enhance the overall performance and make the ReCross more stable. Therefore, we argue that the role of the reranker is of great importance for ReCross. \n\n__Extensibility__: The reranker also provides a great space for future research. Indexing the upstream data can be very time-consuming so we don’t want to do that frequently. Therefore, if we have more training signals to improve the retriever, it is very important to have an efficient reranker module for learning to rank. \n\n## Clarification regarding the conclusion on random retrieval\n\nSorry for the confusion about this statement. We wanted to show the potential benefits of retrieval augmentation by comparing the BART0 column with the **maximum performance** of the Random retrieval baseline. We have clarified this statement in the revised paper. \n\nThe max performance among the five rounds of random retrieval is usually comparable to or larger than the performance of the vanilla BART0 for all tasks (looking at the 2nd column and the 3rd column’s mean+std in Table 1). 
For example, although the mean of random for SquadV2 is 29.86 which is smaller than BART0’s 32.40, the max of random is about 35.32.\n\nThe better maximum performance of “lucky” rounds of random retrieval suggests that it is worth developing better retrieval augmentation methods. Plus, it also suggests that if given suitable retrieved data, such a simple “re-learning” method could already enhance the upstream model.\n\n## Other suggestions.\n- Conciseness: We rephrased a few paragraphs to make them more concise and they are highlighted in blue. \n- Figures: We refined Figures 1 and 2 accordingly based on the comments. \n- Limitations: We talked about the limitations of our work in Sec. 3.5 (for the re-learning method) and Sec. 6 -- the three future directions can be interpreted as the limitations. \n\nThank you very much for the valuable questions and suggestions. \n",
" Thank you very much for your comments and questions! We are pleased to hear that you feel that our ReCross is a simple but effective method of unsupervised cross-task generalization. We understand your concerns and questions. We have revised the paper accordingly and added the new empirical results in __Appendix D.1__. \n\n## Results with the standard EM. \n\nThank you for bringing this up! We add the EM-version of Table 1 in Appendix D.1 for a more comprehensive evaluation as the reviewer suggested (please check Table 7). The relative difference between the methods is similar to the ones with the SoftEM, and our findings still remain almost the same. \n\n## Motivation of the SoftEM metric.\nIn Line 233-234 of the initial submission, we describe the only difference between SoftEM and the standard EM is that SoftEM also counts the substring matches. We adopt this variant because we observed that sometimes even though T0-like models (including ours) answer the input questions correctly, their raw outputs are not exactly the same as the truth outputs generated by the PromptSource templates. In particular, the ground-truth outputs for multiple-choice QA tasks are often in the form of “[A/B/C/D]: [answer]”, while the models often only output the id of the correct choice (e.g., “A”) or the text part of the choice. We also find that the model can output some noise (such as additional punctuation) after the answer (e.g., “True” vs “True.”). The standard EM will discard such matches and cause inaccurate measurements. Although SoftEM might add false positives due to substring matches, we found it is very rare according to our manual inspection of the 10 tasks. Considering both SoftEM and EM have their pros and cons, we will present both results in our final version. \n\n",
" The authors present ReCross, a retrieval augmentation method to improve cross-task generalization of seq2seq models. Starting with a T5-like model (\"upstream model\") trained on a diverse set of tasks (in seq2seq format), the training data is then embedded using the encoder's top layer and stored in a dense index. To evaluate on an unseen task, the query is used to retrieve training data examples that might be helpful in evaluating this new example (using a two-stage retrieve+rerank process). These helpful examples are used to fine-tune the upstream model, and the model is finally used to perform the target task. Strengths:\n* This is a simple but effective method for allowing a model to retrieve helpful previously seen training data, and re-learn from it. As the authors say, there are probably approaches that would be more effective than continuous fine-tuning. However, it's impressive how well such a simple method already works.\n* The authors provide a reasonable set of baselines and an informative choice of ablations (including those in the Appendix). We see the importance of choosing a way of embedding examples that is compatible with the upstream model's internal representation. We also see the contribution of the reranker to the overall performance, and the interactions of various central hyperparameters.\n\nWeaknesses:\n* ~One crucial issue I see with this work is that the authors create a new metric (SoftEM) and use it throughout the paper without properly motivating it. They refer to the Appendix, stating \"We discuss more on this selection with illustrative examples in Appendix\". No such discussion or examples are found in the Appendix, as far as I can tell. It's not clear whether the proposed method would still perform as well using a more standard metric.~ EDIT: Fixed in the latest draft * How does ReCross compare to the other methods of Table 1 if using exact match as the metric? I see it's implemented in the code, so I assume this was tested? The authors adequately address in section 3.5 the fact that their continuous fine-tuning approach is quite simple.",
" This paper presents an approach called ReCross for fine-tuning an LLM by retrieving seen-task data that is particularly relevant to unlabelled inputs from an unseen task. The seen-task examples that are most relevant are retrieved from an index which contains representations from a model that has been fine-tuned on the seen-task data. Following the retrieval, a re-ranker is used to score each retrieved example-query pair, and the top K are used for fine-tuning the model before evaluating on the unseen target task. Experiments on a range of NLU tasks including classification, QA etc. show that ReCross outperforms instruction-tuning baselines. Strengths: \nThis paper shows that carefully selecting data from related tasks for fine-tuning before zero-shot evaluations can help to improve performance. This is sort of a test-time adaptation of the model parameters at a coarser granularity than at an example-level. This means that the retrieval cannot be based on semantic similarity which is why the model used for creating the retrieval index has been fine-tuned on a large pre-selected subset of the seen-task data. The entire setup being explored is an important and high-impact problem to study, and retrieval augmentation provides a compelling solution worth exploring as done in this paper. \nAlso, the Bart0 vs. T0-3B result is quite interesting. \n\nWeaknesses: \n- The paper is missing a qualitative discussion on what sorts of examples are retrieved and what sorts of features might be in play for retrieving useful similar examples. Figure 3 doesn’t seem to be providing too many intuitions for this. This makes it a bit hard to understand where the improvements are coming from.\n- The role of the reranker is unclear. Improvements over and above the simpler version of ReCross without the reranker seem small, mostly non-existent given the standard deviation, which is bizarre especially given how complex the whole distant supervision setup is. Would it be better to just remove this from the main paper and replace it with more analysis, data-related ablations and discussion on where the improvements are coming from? ReCross without the distant supervision is still interesting and perhaps a bit simpler. \n- The conclusion from the random retrieval baseline that says “This suggests that it is promising to study better retrieval methods to get consistent improvement in overall performance” seems to only hold true for winogrande..maybe it isn't the right conclusion? Random retrieval is a control experiment to remove confounds and understand how to interpret the improvement from ReCross. E.g. h-swag, anli_r3, squadv2 do not benefit from ReCross. \n- The paper, while comprehensive, can sometimes be verbose. Perhaps some revisions would help to pare it down. Figures 1 and 2, as is, may not be helping the reader to understand the approach. Is the example in Figure 1 an actual retrieval from the amazon polarity and gigawords tasks? These don’t seem like they should help, should they? Maybe cherry-picking a clearer example could help with this one? Figure 2 might benefit from being broken down into stages. \n\nEDIT: Updating the score from 4-->6 after the author response. Please see the previous section which includes suggestions as well. Missed the section discussing limitations, where is it included?",
" This paper considers the problem unsupervised cross-task generalization of multi-task LMs such as T0, where only a few unlabeled examples from the target task are available. It proposes to retrieve, from the multi-task training corpus, samples that are likely to help the target task using the unlabeled target task inputs, and fine-tune the LM on these retrieved samples. \nTo retrieve helpful samples for a target task, they propose a retriever-ranker pipeline, where the more efficient retriever first retrieve, from the entire training set, an initial set of candidate samples, using the last-layer encoder representations from the upstream LM as embeddings for retrieval. It then trains a reranker using a meta-learning inspired approach, aiming to find training samples that are more likely to help an unseen target task.\nThey show that this retrieval-augmented approach achieves better cross-task generation than the original LM, as well as a naive retrieval approach based on semantic similarity (using SentenceBERT embeddings). Strengths:\n\n- The paper is clearly written. The problem setting and proposed approach is lucidly presented.\n\n- Given that no target task labels are available in the \"unsupervised\" setting, the proposed method is sound, and produces non-trivial gains over the applying the vanilla upstream LM to unseen tasks.\n\nWeaknesses:\n\n- My main concern is the practicality or usefulness of this \"unsupervised\" setting. Given that only a few target task samples are required (the paper uses 16), the cost of obtaining the inputs vs. (inputs + labels) would probably not differ that much in the real world. It would at least be helpful to compare to the \"supervised\" cross-task generalization methods to show how much the gap is. And if this approach lags significantly behind a few-shot supervised one, it is arguably less expensive / time-consuming to simply label a few samples than to apply a complicated retriever-ranker pipeline.\n\nEDIT after rebuttal: The added Appendix D.2 in the few-shot setting is welcome, and to some extent alleviates my concern here. Though it does not fully address this issue: Collecting labels is more expensive on certain tasks for sure, but this work only considers the tasks where the answer can be evaluated via exact match (e.g. multiple choice, classification, etc.). Arguably the cost for collecting 16 inputs vs. 16 (inputs + labels) does not differ that much. In addition, the concern over the availability of prompt templates for unseen tasks remains. Few-shot methods can transfer to new tasks with a few inputs + labels. I'm not convinced that this is inferior to the proposed paradigm of requiring a prompt template and a few inputs. Q1: Does this work assume the availability of prompt templates for the unseen target tasks? If I understand correctly, these templates are part of the \"input\" of the target task, which are assumed to exist? If this is correct, then it amplifies my main concern in the Weakness section. It is arguably more difficult / expensive to come up with these natural language prompts that work well for the upstream LM for each unseen task than to label a few samples. N/A"
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"veRU2Qckoax",
"i2qNx3irkA",
"8zHjw6dxseu",
"J6y02YWJdA",
"si2TefgGDf",
"e7TLPAfvo55Y",
"cyrJSSD0zTp",
"a1AesqcUEA",
"n6vt8rRpASL",
"nips_2022_kB9jrZDenff",
"pV4bG3CeeiWR",
"n6vt8rRpASL",
"a1AesqcUEA",
"nips_2022_kB9jrZDenff",
"nips_2022_kB9jrZDenff",
"nips_2022_kB9jrZDenff"
] |
nips_2022_WESmKHEH5nJ | Fast Stochastic Composite Minimization and an Accelerated Frank-Wolfe Algorithm under Parallelization | We consider the problem of minimizing the sum of two convex functions. One of those functions has Lipschitz-continuous gradients, and can be accessed via stochastic oracles, whereas the other is ``simple''. We provide a Bregman-type algorithm with accelerated convergence in function values to a ball containing the minimum. The radius of this ball depends on problem-dependent constants, including the variance of the stochastic oracle. We further show that this algorithmic setup naturally leads to a variant of Frank-Wolfe achieving acceleration under parallelization. More precisely, when minimizing a smooth convex function on a bounded domain, we show that one can achieve an $\epsilon$ primal-dual gap (in expectation) in $\tilde{O}(1 /\sqrt{\epsilon})$ iterations, by only accessing gradients of the original function and a linear maximization oracle with $O(1 / \sqrt{\epsilon})$ computing units in parallel. We illustrate this fast convergence on synthetic numerical experiments. | Accept | The authors design an algorithm for composite stochastic optimization that leverages both smoothness and strong convexity with respect to the same (general) norm, using a stochastic counterpart to recent work by Diakonikolas and Guzman. They then show how to leverage this algorithm and randomized smoothing in order to create an algorithm for constrained smooth convex optimization based on exact gradient evaluations and linear optimization computations. Compared to Frank-Wolfe, the algorithm requires strictly fewer gradient evaluations and parallelizes the same number of linear optimization computations.
The paper received generally favorable reviews, with the exception of reviewer 3QVT who did not engage in discussion and whose critique I found unclear. I agree with reviewer rQnJ’s assessment that even though “all the building block are quite known in optimization community (accelerated methods, duality, Bregman distances, smoothing, etc.), the whole approach fits perfectly together and provides the reader with a number of nice and useful observations.” Consequently, I recommend acceptance. | train | [
"iWkpb1DImCP",
"ErysE6CfxxJ",
"qDx_aOY0mAl",
"1ZWIsjeMc9i",
"7PE69WHbwvD",
"rSF_B8AvVdxN",
"Z7gBYHbavy",
"QGjH3nVq5G",
"nEDRpQkusGe",
"Ixuan0zTW6S"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear AC,\n\nThank you for your question. We are replying now, as soon as possible after your question, because you asked this directly to us, expecting an answer.\n\nThe work that you reference does not put in question our novelty claims, as explained in the following points. Overall, there are indeed many papers on very closely related topics (the closest ones probably being those already mentioned in our work (see [12] Diakonikolas and Guzmán and [18] Gasnikov and Nesterov), but, as discussed in the paper, the ingredients required for our analyses are not present in the works we are aware of, including that of Ghadimi and Lan. In more details:\n\n(1) Ghadimi and Lan consider general norms, but the strong convexity and smoothness are placed on the same function. For general norms, moving strong convexity from one function to another is not an option (contrary to the Euclidean case), so one cannot do so trivially. This essential fact (that of not being able to move strong convexity from one side to the other) motivates a number of works on the topic, including, e.g., [12].\n\n\n(2) More importantly, the convergence rate in Ghadimi and Lan is a convergence to the exact minimizer, with rate of the form $O(\\sigma^2/ (\\mu \\epsilon) + \\sqrt( \\beta / \\epsilon) )$ (using notation from our paper). On the contrary, we prove convergence only up to a ball around the minimizer, but with a much better accelerated linear rate of the form $O( \\sqrt( \\beta / \\mu) \\log 1/\\epsilon)$. Note that this accelerated linear rate is absolutely necessary to obtain the result in Section 4 which could not be obtained from the results in Lan, to the best of our knowledge.\n\nAs for what concerns the second claim by the AC, it may be true that some other algorithms achieving the same rate as Algorithm 1 of our paper could be used as a black-box to design accelerated Frank Wolfe methods (although to the best of our knowledge, no such algorithm has been studied in the literature). However one needs to be extra careful, as we apply Algorithm 1 to the stochastically smoothed dual of the constrained optimization problem introduced in Section 4. It is thus only by a careful choice of the distance generating function $w$ that we are able to develop an algorithm that does not require computing values or gradients of the conjugate function $f^*$. It is not guaranteed that such a trick would work for some (hypothetical) algorithm achieving the same rate as that of Algorithm 1.\n\nIn summary, to the best of our knowledge, the question of accelerating Frank-Wolfe is still open in general, and we are the first to provide an answer under the (stochastic) parallel linear optimization oracle. This was the original intent of our work, and it indeed all boils down to the fact that “reduction from accelerated Bregman method to accelerated Frank-Wolfe algorithm based on parallelization is a nice addition to the literature” (see reviewer BdLs’ comments), and the complexity analyses follow in the same way. From what we can tell, the required accelerated Bregman method has not been studied so far, and we therefore keep all our novelty claims unchanged.\n\nWe would be happy to cite this work and to explain these differences if you find it useful, and would be glad to further expand on the topic if need be.\n\nBest,\n\nThe authors.",
" Dear authors,\n\nI would like to request some clarification about the following question: which (if any) of the complexity bounds in the paper are new?\n\nIn particular, it seems that the rate of convergence given in Theorem 1 has already been shown (under the same assumptions*) by Ghadimi and Lan in \"Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization, II: Shrinking Procedures and Optimal Algorithms.\" \n\nMoreover, it appears that the algorithm developed in Section 4 can use any method achieving the rate of Theorem 1 as a black box, and consequently the novelty of the bounds of that section is also unclear.\n\nI am sorry that this question comes late in the discussion period, but I thought some opportunity to respond is better than not all. If the option to respond is no longer available by the time you are ready to do so, please consider contacting the program committee for assistance.\n\n\\* Ghadimi and Lan write the strong convexity assumption on the smooth component of the objective, but their proof trivially extend to the case that only the sum of the objective and the composite term is strongly convex.",
" We would like to thank the area chair and reviewers for their work on this submission, and for the fruitful discussion that we had with them. As recommended by Reviewer rQnJ, we will incorporate these clarifications to the final version of the paper.",
" Thanks a lot for your answers.\nPlease, consider adding your clarifications to the final version of the paper as they seem to be helpful.\n",
" We thank the reviewer for their overall positive feedback and review of our work. Their summary also very accurately describes our work; we answer to the two detailed questions below.\n\n1. It is correct to think of $x_k$ as living in the primal space, while $v_k$, $y_k$ and $z_k$ live in the dual space. Moreover, as the result in Theorem 3 states, $(x_k, y_k)$ can indeed be considered to converge to an optimal primal-dual pair. Regarding the notation $x_k^*$ and $y_k^*$, we do not use it in this work. We hope that we have well understood your question and would be happy to provide further clarification otherwise.\n\n2. Thank you for the catch, we were indeed missing a $(1 - e^{-z_1})$ factor. We have fixed this in the new version of the paper: It was a typo and did not affect the computation.\n",
" We thank the reviewer for their overall positive feedback and review of our work. Their summary is very accurate. We answer their main questions and comments below. \n\n\n**Strength and weaknesses:**\n1. The requirement that the probability distribution has positive differentiable density is made for technical purposes in order to prove Proposition 2 (which is done in [1]). In practice, note that this requirement is satisfied for many commonly-used distributions (multivariate normal, Gumbel, …), and it is not a difficulty at all.\n \n On the other hand, Assumption 2 on the variance does not depend on the distribution. Rather, it depends on the norm used to measure the smoothness of the function and on the resulting diameter of the constraint set. In particular, the constant ρ only depends on the underlying norm. In Remark 1, we compute the value of $\\rho$ for commonly-used norms. In particular, for the case of the Euclidean norm we have $\\rho= 1$, for the $\\ell_1$-norm $\\rho= d$ and for the $\\ell_\\infty$-norm $\\rho = O(\\log d)$.\n\n As for the choice of $\\alpha$, it is tightly related to the choice of $M$, $L$ and $R_K$, which we discuss in the next remark.\n\n2. We acknowledge that our proposed algorithm is not as universal as the classical Frank-Wolfe method. However, acceleration of Frank-Wolfe methods has been studied in the literature for a long time, and our work is the first to provide accelerated rates (using parallelization) without further assumptions on the constraint set and/or the objective function. \nWe now give deeper insights into the choice of the parameters:\n - Generally speaking, only upper bounds on all parameters are sufficient. In practice, it seems relatively reasonable and this is what we did in our experiments. Let us be a bit more specific and discuss each parameter.\n - The parameter of the distribution $M$: note that the user of the algorithm is free to choose practically any distribution (up to the assumptions of Proposition 2 discussed above). In particular, the user can always choose a distribution for which $M$ is easy to compute. This is what we do in the experiments, where we give the value of $M$ for the normal and Gumbel distributions.\n - The diameter of the set: we acknowledge that compared to classical Frank-Wolfe, this is a limitation, although we argue that it is not a major one. Indeed, in most practical applications the user knows the set over which the optimization is carried on. From that knowledge, obtaining an upper bound on the diameter is usually straightforward.\n - Lipschitz constant of the gradient: we acknowledge that this might be a limitation (if no suitable upper bound can be found), but as we discussed above, the goal of this work is not to be as universal as classical Frank-Wolfe, but rather to fill a missing gap in the literature. Moreover, we point out that several (but not all) other works that attempt to accelerate Frank-Wolfe methods assume knowledge of the Lipschitz constant, see for example [2, 3], and that upper bounds can often be computed in practice. \nFinally, it might be possible to use some type of linesearch within our method. From what we can tell, it is not straightforward as the smoothing of the dual induces stochasticity, and it is known that obtaining theoretical rates for linesearch techniques on stochastic objectives is a tedious task. We believe this is an interesting direction of future research. \n\n[1] Q. Berthet, M. Blondel, O. Teboul, M. Cuturi, J.-P. Vert, and F. Bach. 
Learning with differentiable perturbed optimizers. 2020.\n\n[2] D. Garber and E. Hazan. Faster rates for the Frank-Wolfe method over strongly-convex sets. 2015.\n\n[3] G. Lan and Y. Zhou, Conditional gradient sliding for convex optimization. 2016\n\n**Questions:**\n\n1. Thank you for the catch, this should indeed be $d\\pi$. We have fixed this in the new version of the paper.\n\n2. We have made this change in the new version of the paper.\n\n3. It is true that when $m=1$, our method does not correspond to the pure Frank-Wolfe method, as there is still stochasticity involved. However, when $m=1$, our claim is that the rate of our method matches in expectation the rate of the pure Frank-Wolfe (up to log terms).\n\n4. Using a linesearch with our method proved to be unsuccessful. As mentioned above, we believe this is due to the induced stochasticity, which tends to make linesearch techniques fail.\n\n For completeness, we have added to the new version of the paper a comparison with classical Frank-Wolfe with an exact linesearch (for which there is a simple closed-form formula since our experiments deal with quadratics), see Figure 2 of the new version of the paper (see PDF). As the plots show, the exact linesearch does not lead to significant improvements over the simple $\\frac{2}{k+1}$ stepsize strategy. This is also in line with the lower complexity bounds for Frank-Wolfe-type methods (see, e.g., [24, Section 3] or [29, Theorem 1]).",
" Thank you for your review of our work. We would like to clarify a few points regarding its objective, the problems that we solve, and how we solve them. Your comments seem to indicate that you have an understanding of our work that is different from the one we were aiming to present. We therefore bring these clarifications before replying to your questions individually. \n\nWe have also added further clarification in our main text, hoping that this will avoid some potential confusion for future readers. We thank you for giving us the opportunity to do so.\n\n### Section 3 (General case)\n\nWe tackle the problem of \n\n$ \\min_{y \\in V} G(y) + H(y) $\n\n- $G$ convex, $\\beta$-smooth. → Access to stochastic gradients\n- $H$ $\\mu$-strongly convex. → Access to prox operator (see Eq. (2))\n\nWe present an algorithm using these information accesses to $G$ and $H$, and give guarantees on its convergence. It does not refer directly to a linear optimization oracle or to Frank-Wolfe.\n\n### Section 4 (Special case)\n\nWe apply this to the special case of\n\n$\\min_{x \\in V*} f(x) + I_K(x)$ [the primal] ←→ $\\min_{y \\in V} s_{\\alpha}(-y) + f^*(y)$ [the $\\alpha$-smoothed dual]\n\nwhere the dual problem is an instance of the general case studied in Section 3, as suggested by the choice of variable names\n\n- $s_{\\alpha}(-y)$ plays the role of $G(y)$. → Access to stochastic gradients through a perturbed linear oracle on $K$, **parallelization here** used to reduce its variance without greater computation time, on algorithm-introduced stochasticity.\n- $f^*$ plays the role of $H$ → A special algorithmic technique allows to only use the gradient of f to solve the prox operator, without computing gradients of $f^*$ (in short, we choose the prox-function $w(\\cdot)$ using $f^*$ ).\n\nNote in particular that we consider that we can directly access the gradient of the objective $f$. The only parallel aspect is in reducing the variance of the stochastic gradient of $s_{\\alpha}$, the smoothed version of the support function of $K$.\n\n### Summary\n\nOur objective is general composite optimization, applied in a particular case to the dual of a constrained optimization problem on $K$.\n\nIn Section 4, the target optimization problem is not a stochastic one: stochasticity is introduced only in the algorithm for handling the constrained set, and not the objective function itself. We also do not compute all the components of the finite sum in parallel, as the stochasticity does not involve a finite sum. Finally, please note again that stochasticity is used for handling the constraint set, and that our claim related to parallelization is standard.\n\nWe hope that this will clarify the aim of our paper, and address some of the points raised in your review, and now reply to your questions individually.\n\n### About your questions:\n\n(1) We want to highlight again that in Section 4, and in the experiments, the role of $H$ is played by $f^*$ (respectively $f^*_1$ and $f^*_2$ in our experiments). It is never 0. Conversely, $G$ is the smoothed support function of $K$, noted $s_\\alpha$ (up to a minus sign). Our experiments are therefore a direct illustration of our theoretical results in 4.\n\nNote that in particular we do not use stochastic gradients of the objective function, so it would not be appropriate to compare with stochastic FW methods. 
As suggested by another reviewer, we have added a FW with linesearch to the vanilla FW results that we already had.\n\nRegarding the comparison between theoretical bounds and experimental results, their shapes match well : our theoretical analysis accurately predicts the behavior of the optimization error through the steps, and they differ by a multiplicative constant only. This is unavoidable due to the nature of worst-case analysis (which accounts for global problem constants, whereas local constants can only be better).\n\n\n(2) As explained above, we want to clarify that the “parallel” aspect of our algorithm refers to the ability to reduce the variance of gradient estimation for an algorithm-introduced stochastic smoothing, in Section 4. Please note that in this section, we do not parallelize the computation of gradients of the objective function $f$ : the access to the objective function $f$ is assumed deterministic for our Frank-Wolfe algorithm and there is no finite-sum structure assumed. We do parallelize the independent perturbed instances of linear minimization oracle to reduce the variance.\n",
" The authors propose a Frank-Wolfe-based algorithm for solving partially stochastic composite minimization problems over general convex constraints which admit a regularized linear minimization oracle. This general algorithm is applied to the case where the stochastic part of the objective is a finite sum and propose a parallelized step in which all components of the finite sum are computed in parallel. There are numerical results on least squares with L1-norm constraint and matrix factorization. 1. The paper is difficult to parse. The significance is unclear, especially given what appears to be a significant gap between the theory and the experiments.\n2. The presentation of the parallelized algorithm has the flavor of being dishonest since it does appear to be a truly parallel algorithm. If I understand correctly, with the authors view, classical SGD on a finite-sum problem can all be trivially \"parallelized.\"\n3. The experiments are very far away from the theory. Interestingly, this would enable the comparison to more recent stochastic FW algorithms rather than vanilla FW. 1. There seems to be a mismatch between the experiments and the theory. Is there an experiment that has nonzero H?\n2. Is this an actual parallelized algorithm or does it simply have a parallel step (computation of a stochastic gradient)? N/A",
" In this paper, the authors consider convex minimization problem over bounded domain. \nFirst, they propose a new accelerated gradient method that has several distinguising assumptions and features: \n- it works with stochastic gradients of the smooth part; \n- it uses strong convexity of the composite part; \n- it employs an arbitrary strongly convex prox function as a regularizer. \n\nThen, the authors demonstrate that this method can be applied for solving a smoothed dual problem of miniminizing a smooth function over a compact convex set. In this case, the new method has an interpretation of Frank-Wolfe algorithm. To decrease the variance of the stochastic estimation of the dual gradients, one can use a parallel mini-batching, which provides the method with a provable acceleration given a set of parallel computational units. Numerical experiments on synthetic data are provided. The paper is well written. I found the results very interesting and significant. Although, all the building block are quite known in optimization community (accelerated methods, duality, bregman distances, smoothing, etc.), the whole approach fits perfectly together and provides the reader with a number of nice and useful observations. \n\nIn particular, it is shown that minibatching for improving stochastic estimation of the gradient of the smoothed support function results in doing in parallel several linear minimization oracle calls for the corresponding convex set. The last operation is the most expensive in the Frank-Wolfe method. Then, the authors demonstrate that choosing the 'batchsize' of size $m$, the resulting complexity is $O( \\frac{1}{\\sqrt{\\epsilon}} + \\frac{1}{m \\epsilon} )$. Hence, for $m = \\frac{1}{\\sqrt{\\epsilon}}$, this gives the optimal rate. The most imporant feature of this result is that this 'minibatching' is done in parallel, which means that we can get a significant acceleration having several computational units.\n\nIn my opinion, there are several limitations of this technique. Probably, it might be benefitial for the presentation to address some of them:\n\n1. The main requirement of the approach is to have an access to a probability distribution with 'positive differentiable density'. It is not thoroughly discussed how difficult is it to find such a distribution, which is suitable for a given problem. Assumption 2 on the variance decrease seems to be also related. It defines a certain constant $\\rho$ that depends on the distribution and needs to be known by the method (as for the choice of $\\alpha$). \n\n2. Also, the method needs to be given several other paramers (Lipschitz constant of the gradient, diameter of the set, parameter of the distribution $M$). The classical Frank-Wolfe method known to be quite universal: no knowledege of any of this parameters is required. Morevore, it is possible to use a line search in each iteration, that improves convergence significantly. This is not clear, how practical the requirement of knowing all these parameters for the new method, and is it possible to eliminate them.\n\n\n------------------\nAfter rebuttal:\n------------------\n\nI vote for accepting this paper since I believe that the contribution of the paper and the new methods are solid.\n Minor questions and remarks:\n\n1. Algorithm 2: this is not clear, what is $du$ from which $\\Delta$ is sampled.\n\n2. It might be worth to move the choice of $\\alpha$ into the beginning of Theorem 3 (e.g. Let $\\alpha = ...$. Now it is easy to miss it).\n\n3. 
Is it correct that $m = 1$ does not correspond to the 'pure' Frank-Wolfe method? The method seems to be always stochastic.\n\n4. Is it possible to use some line search in experiments? --",
" In this paper, the authors start with a general framework that can solve composite problem G(y)+H(y) where G can only be accessed via a stochastic gradient oracle, with a proximal Bregman-type algorithm. Explicit rate is given along with the final radius of convergence for the framework. This is in turn applied to solving a constrained convex function, where the framework is evoked for its smoothed dual problem. The requirement on stochastic gradient boils down to a linear optimization oracle over the set, which is amenable to parallelization thanks to the smoothing operation. The resulting Frank-Wolfe-type algorithm is shown to have a convergence rate in the duality gap of O(1/sqrt(eps)) with 1/sqrt(eps) parallel queries. Numerical experiments are also conducted on the proposed algorithm. I find the paper generally well-written and think the result is of interest to the community. This reduction from accelerated Bregman method to accelerated Frank-Wolfe algorithm based on parallelization is a nice addition to the literature. - This is probably just confusion on my part. Right before (22), $z_k$ is set to be $\\nabla f(x_k)$ for all k, so the resulting Algorithm 2 only involves $x_k,y_k$ and $v_k$. But is the reason I should think of $x_k$ as being the current primal iterate because $y_k^* = \\nabla f(x_k^*)$ for the optimal primal-dual pair on (17) and (18), and both $y_k,z_k$ live in the dual space?\n- There's another factor of $(1-e^{-z_1})$ missing on the last line of page 24? Yes."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"ErysE6CfxxJ",
"nips_2022_WESmKHEH5nJ",
"nips_2022_WESmKHEH5nJ",
"rSF_B8AvVdxN",
"Ixuan0zTW6S",
"nEDRpQkusGe",
"QGjH3nVq5G",
"nips_2022_WESmKHEH5nJ",
"nips_2022_WESmKHEH5nJ",
"nips_2022_WESmKHEH5nJ"
] |
nips_2022_CTqjKUAyRBt | Sampling without Replacement Leads to Faster Rates in Finite-Sum Minimax Optimization | We analyze the convergence rates of stochastic gradient algorithms for smooth finite-sum minimax optimization and show that, for many such algorithms, sampling the data points \emph{without replacement} leads to faster convergence compared to sampling with replacement. For the smooth and strongly convex-strongly concave setting, we consider gradient descent ascent and the proximal point method, and present a unified analysis of two popular without-replacement sampling strategies, namely \emph{Random Reshuffling} (RR), which shuffles the data every epoch, and \emph{Single Shuffling} or \emph{Shuffle Once} (SO), which shuffles only at the beginning. We obtain tight convergence rates for RR and SO and demonstrate that these strategies lead to faster convergence than uniform sampling. Moving beyond convexity, we obtain similar results for smooth nonconvex-nonconcave objectives satisfying a two-sided Polyak-\L{}ojasiewicz inequality. Finally, we demonstrate that our techniques are general enough to analyze the effect of \emph{data-ordering attacks}, where an adversary manipulates the order in which data points are supplied to the optimizer. Our analysis also recovers tight rates for the \emph{incremental gradient} method, where the data points are not shuffled at all. | Accept | All reviewers acknowledge that the paper fills a gap in the literature, with good results for a wide variety of settings. | train | [
"EeKYai0kES3",
"qVoUpUUgZSt",
"l8ZEKtymr34",
"_St1Vl_8gxQ",
"0bWG4Exgqfkg",
"gkdWiXksUH",
"L-LWu-5bkoc",
"AkX-rBHeFJg",
"dC3jjldRQ2XJ",
"4bDVik7dbsk",
"4iZ8x1lGYT",
"RWSHDIxuoMA",
"EKoV0tupZ-N",
"hfFZuMurO_j",
"v6Eaewj9L15"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for answering my questions. I will keep my score.",
" Thanks for the authors' response!\n\nI am not fully convinced about the technical novelty here. As Reviewer R12f pointed out, \"the main difficulty (or let's say the main difference to the existing analysis of RR) lies in rewriting the GDA-RR update as a GDA form, other parts are really similar to the existing RR analysis.\". However, the convergence rate for GDA in Theorem C.1 is quite standard in the literature, either by the book \"F. Facchinei and J.-S. Pang. Finite-dimensional variational inequalities and complementarity problems.\" or many recent literature. The proof of Theorem C.1 define some matrices M_k which makes the proof looks more involving, but actually in the end the norm of the related operator can be easily bounded (the proof can also be written in a way without defining M_k). Also, for the proof of Theorem C.2 looks complicated by defining matrices J_k and M_k, but in the end they can be easily bounded with Lipschitz. \n\nI do not believe that the proof of GDA-RR is necessarily more challenging than GD-RR in the minimization case. For example, look at the reference \"J. Haochen and S. Sra. Random shuffling beats SGD after finite epochs\", in its arXiv version, they also need to approximate RR with GD update. As far as I see, their proof can be easily modified to accommodate minmax or VI setting. For example, in the equation (A.1) of their appendix, it suffices to replace the inequality (that is specific to minimization) to the inequality using strongly monotone (this is the also key to why GD has kappa dependency in minimizaton, but kappa^2 in minmax). Then I believe the rest part of it can be derived similarly. \n\n\n",
" I thank the authors for their detailed reply.\n\nI did not have a lot of concerns, and they have all been addressed. I will raise my score to 7.\n\nHowever, I would like to point one little thing to authors. While they are right about the fact l.704 is well defined, the differentiability almost everywhere is not a sufficient argument. This is also used in l.649 where they say $\\nu$ is almost everywhere differentiable, hence l.649 is well defined. The integral is indeed well defined, but this argument is not sufficient to conclude on the equality above l.649. Indeed, one can think of $\\nu(t) = \\mathbb{1}_{t \\geq 1/2}$, a function that is almost everywhere differentiable with derivative 0 almost everywhere, hence the integral over $[0, 1]$ would not be $\\nu(1) - \\nu(0)$. The Lipschitz continuity argument makes it work, but must not be used only to conclude on the differentiability almost everywhere.\n\nMoreover, $\\nu(z^*)$ disappeared in the same equality (above l. 649).",
" Most of my concerns are addressed properly. In terms of removing the bounded variance assumption, I meant that the authors might mimic the techniques of [3] to establish the algorithmic recursion of the proposed algorithms, i.e., show something like [3, Lemma 3.2]. What the current paper is going to establish is complexity bound. It should not be necessary to use Chung's lemma. Instead, once you get something like [3, Lemma 3.2], the complexity of $O(1/nK^2)$ in the sense of expectation directly follows by using proper time-varying step sizes. Thus, it seems that the authors do not need to worry about the applicability of any technique after [3, Lemma 3.2]. \n\nOverall, I think this paper can be helpful for RR and Minimax community. However, as recommended by the scoring system, I would say 6 is appropriate as this paper has solid techniques and might have moderate-to-high impact. ",
" # Analysis of Extragradient without replacement\n\nThe linearization technique that we use for analyzing without-replacement GDA/PPM is general, and as such, can be adapted to any approximate proximal point method such as Extragradient (EG). For the particular case of the EG update rule, $z^k_{i+1} = z^k_i - \\alpha \\omega_{\\tau_k(i)}(y^k_{i+1})$ where $y^k_{i+1} = z^k_{i} - \\alpha \\omega_{\\tau_k(i)}(z^k_i)$, one possible approach could be to first linearize $y^k_{i+1}$ about $z^*$, plug in the obtained linearization into the expression for $z^{k}_{i+1}$, and linearize the resultant once again about $z^*$. One could then obtain a linearized epoch-level update rule for EG-RR/SO/AS and perform a unified analysis in a manner similar to Theorem 1. The analysis of without-replacement approximate proximal point methods is an interesting avenue for future work which could benefit from the techniques highlighted in our manuscript. \n\n# Step-sizes for PPM-RR/SO\n\nUnlike prior works that analyze stochastic proximal point methods for minimization [1, 2], our analysis allows the components $f_i$ to be arbitrary smooth nonconvex-nonconcave functions. To facilitate this increased generality, our analysis requires us to control the influence of the noise terms contributed by the nonconvex-nonconcave components (in both $\\mathbf{H}_k$ and $\\mathbf{r}_k$) by appropriately tuning the step sizes. As such, we conjecture that imposing further restrictive assumptions on the component functions $f_i$ (such as strong convexity-concavity) may allow us to use larger step sizes in our analysis. \n\n- [1] A. Patrascu, I. Necoara (2018) Nonasymptotic convergence of stochastic proximal point methods for constrained convex optimization. Journal of Machine Learning Research, 2018 \n\n- [2] E. Ryu, S. Boyd (2016) Stochastic proximal iteration: A non-asymptotic improvement upon\nstochastic gradient descent. https://web.stanford.edu/~boyd/papers/spi.html.\n",
" Thank you for taking the time to review our work. We are glad you find our presentation well-organized. We hope the following points are able to address your concerns regarding the technical novelty and contributions of our work.\n\n1. **Generality**: To the best of our understanding, the scope of our results is much more general than that of existing works on RR/SO for strongly convex minimization. Since our framework analyzes the broad class of strongly monotone Variational Inequality (or VI) problems, our results not only cover strongly convex minimization and strongly convex-strongly concave minimax problems, but also include problems such as multiplayer games with strongly convex cost functions. In addition, we believe that our proof techniques, which allow for a unified treatment of RR, SO and AS for strongly monotone VI problems without relying on any complex mathematical machinery (since the only tools that we employ are elementary properties of smooth functions and the variance of without-replacement sample averages), are an important contribution due to their high level of generality and accessibility. \n\n2. **Analysis of PPM**: Even in the context of minimization, our unified analysis of PPM-RR/SO is a novel contribution, since, to the best of our knowledge, our result is the first to establish that PPM-RR/SO can exhibit faster $O(1/nK^2)$ convergence than Stochastic PPM (with uniform sampling) for finite-sum smooth strongly convex minimization problems (and more generally for finite-sum smooth strongly monotone variational inequalities). Furthermore, as a consequence of our analysis of PPM-AS, we also obtain a non-asymptotic $\\tilde{O}(1/K^2 + \\exp(-K/5\\kappa^2))$ convergence rate for the Incremental Proximal Point Method.\n\n3. **Data Ordering Attacks**: To the best of our knowledge, our work is the first to explicitly quantify the effect of data ordering attacks on the convergence of optimization algorithms, by means of our analysis in the adversarial shuffling regime. This has also been noted as an interesting contribution by Reviewer KJCr and Reviewer 4yKG.\n\n4. Lastly, we believe it is not self-evident that results from strongly convex minimization can be extended to strongly convex-strongly concave minimax optimization (or more generally, to strongly monotone variational inequalities). To this end, we believe, demonstrating that RR/SO can outperform uniform sampling for strongly monotone VI problems is a valuable contribution, which is also recognized by Reviewer 4yKG. Furthermore, our work goes beyond the strongly convex-strongly concave regime by presenting guarantees for RR on a class of nonconvex-nonconcave problems. In addition, our analysis is general enough to capture adversarial shuffling with little to no modification.\n",
" # On the Possibility of Using $\\textrm{dist}(\\mathbf{z}, \\mathcal{Z}^*)$ as a Lyapunov Function\n\nThe choice of $V_{\\lambda}$ as a Lyapunov function is motivated by the fact that it is amenable to the derivation of a descent lemma using the noisy epoch level updates of AGDA-RR/AS, which can then be unrolled to obtain a convergence guarantee. We believe the difficulty in using $\\textrm{dist}(\\mathbf{z}, \\mathcal{Z}^*)$ as a Lyapunov function lies in the fact that deriving an equivalent descent lemma for this function using the noisy epoch-level update rule is, in our experience, considerably more involved (and might also not be possible). This situation parallels that of the convergence analysis of GD/SGD for PL function minimization, where the function gap $f(x) - f^*$ is typically used as a Lyapunov function [1, 2].\n\nHowever, we would like to highlight that it is possible to convert our convergence rates presented in terms of $V_{\\lambda}$ into an equivalent convergence rate in terms of $\\textrm{dist}(\\mathbf{z}, \\mathcal{Z}^*)^2$. In particular, using the properties of 2PL functions, one can relate $V_{\\lambda}(\\mathbf{z})$ to $\\textrm{dist}(\\mathbf{z}, \\mathcal{Z}^*)^2$ as follows:\n\n$ \\textrm{dist}(\\mathbf{z}, \\mathcal{Z}^*)^2 \\leq \\max [ \\frac{2}{\\mu_1}(\\frac{L^2}{2 \\mu_2^2} + 1), \\frac{4}{\\lambda \\mu_2} ] V_{\\lambda}(\\mathbf{z}).$\n \nWe have updated our manuscript to present a complete proof of this relation in Appendix E.4. In conjunction with our results in Theorem 3, the obtained inequality implies that AGDA-RR satisfies a convergence guarantee of the form $\\mathbb{E}[\\textrm{dist}(\\mathbf{z}^K_0, \\mathcal{Z}^*)^2] = \\tilde{O}(\\exp(\\frac{-K}{365 \\kappa^3}) + \\frac{1}{nK^2})$, while AGDA-AS satisfies $ \\max_{\\tau_1, \\pi_1, \\ldots, \\tau_K, \\pi_K \\in \\mathbb{S}_n} \\textrm{dist}(\\mathbf{z}^K_0, \\mathcal{Z}^*)^2 = \\tilde{O}(\\exp(\\frac{-K}{365 \\kappa^3}) + \\frac{1}{K^2})$. A detailed discussion on this subject has been performed in Appendix E.4.\n\n# On Removing the Bounded Variance Assumption \nThank you for bringing [3] to our notice. To the best of our understanding, [3, Theorem 3.10] uses a Chung's Lemma-style result (namely, [3, Lemma 3.9]) to obtain an *asymptotic* convergence rate of $O(1/K^2)$ for PL functions (in terms of the squared distance of the epoch iterates from the set of minimizers) without assuming bounded gradient variance or bounded iterates. (The result is asymptotic since [3, Theorem 3.10] holds only when the number of epochs $K$ is \"sufficiently large\" and it is not explicitly quantified how large $K$ needs to be for the result to hold). We have updated Lines 298-303 of our manuscript to acknowledge this work and have also referenced it in our literature review (Appendix F). Please note that the asymptotic rate presented in [3] is $O(1/K^2)$ and not $O(1/nK^2)$ since [3, Theorem 3.10] does not quantify the dependence of the convergence rate on $n$. \n\nAs such, we conjecture that incorporating carefully chosen time-varying step sizes into our analysis of 2PL functions would also allow us to remove the bounded gradient variance assumption, by means of relatively standard techniques in stochastic approximation (e.g. some variant of Chung's Lemma [4]). 
However, we believe doing so will add an additional layer of complexity, which has the potential drawback of distracting the reader from the key points of our analysis (namely, obtaining an epoch level update rule for AGDA-RR that resembles that of full-batch AGDA with added noise, controlling the influence of the noise in expectation using the variance of without-replacement sample averages, and finally performing a Lyapunov analysis for this noisy update rule). To this end, while the analysis of time-varying step sizes and the removal of the bounded variance assumption is an important contribution and an interesting avenue for future work, we believe it is outside of the scope of the current manuscript.\n\n- [1] H. Karimi, J. Nutini, M. Schmidt (2016). Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-{\\L}ojasiewicz Condition. European Conference on Machine Learning 2016.\n\n- [2] A Wilson (2018). Lyapunov Arguments in Optimization. PhD thesis, University of California, Berkeley, 2018.\n\n- [3] X. Li, A. Milzarek, J. Qiu (2021). Convergence of random reshuffling under the Kurdyka-{\\L}ojasiewicz inequality. arXiv preprint arXiv:2110.04926.\n\n- [4] K.L. Chung (1954). On A Stochastic Approximation Method. The Annals of Mathematical Statistics, 1954.",
" Thank you for your detailed review, positive evaluation of our contributions, and for your valuable suggestions which have greatly helped us in improving our manuscript. We hope your concerns are addressed in our response below.\n\n# Highlighting the key difficulty in our analysis\n\nThank you for this suggestion. We agree that one of the key challenges in our analysis lies in expressing the epoch level update rule of GDA without replacement (and PPM) in a form that resembles the linearized update rule of full batch GDA (and PPM, respectively). In fact, expressing the epoch level update in this form is key to developing a general proof strategy that simultaneously handles RR, SO and AS with little to no modification. We have updated our manuscript to highlight this point in the proof sketch of Theorem 1 (Lines 188-191). We have also discussed this point in greater detail in Theorem C.2 of the Appendix (complete unified proof of GDA-RR/SO, Lines 731-742), highlighting connections to the insights developed in some of the foundational works that analyze RR for minimization. \n\n# Lemma B.3 \n\nThank you for this suggestion. We have updated our manuscript to state that Lemma B.3 has been previously used in [1] to analyze RR/SO for minimization (which we consider to be an important contribution) and have also referenced [1, Lemma 1] in the statement of the Lemma. However, we believe that Lemma B.3 / [1, Lemma 1] by itself is a relatively standard result in statistics on the variance of random sampling without replacement [2, 3].\n\n# Literature Survey\n\nThank you for the references. We have added a comprehensive literature review of RR, SO and IG in Appendix F. Since camera-ready versions of accepted papers are allowed to add an extra page, we would be happy to incorporate this section in the main paper if our manuscript is accepted for publication. \n\n- [1] K. Mishchenko, A. Khaled, P. Richtarik (2020) Random Reshuffling: Simple Analysis with Vast Improvements. Neural Information Processing Systems, 2020.\n\n- [2] W. Cochran (1977) Sampling Techniques. Wiley. \n\n- [3] J. Rice (1988) Mathematical Statistics and Data Analysis. Wadsworth",
" Thank you for your positive evaluation of our work. We are glad you find our analysis of adversarial shuffling interesting. We hope our response below addresses your concerns.\n\n# On RR vs SO for Strongly Convex-Strongly Concave Minimax Optimization\n\nOur analysis demonstrates that for the class of smooth strongly convex-strongly concave minimax problems (or more broadly, smooth strongly monotone variational inequality problems), RR and SO have the same convergence rate, and as such, there is no apparent reason to prefer one over the other for this specific class. This conclusion is also in agreement with existing literature in the minimization setting [1], which shows that RR and SO exhibit similar performance in smooth strongly convex minimization. \n\n# Necessity of a Different Algorithm for Two-Sided PL Objectives\n\nThe choice of two-timescale alternating updates for two-sided PL (or 2PL) objectives is primarily motivated by the fact that, even for the deterministic minimax problem, it is not known whether the simultaneous GDA algorithm can achieve provable global convergence for 2PL objectives, whereas two-timescale alternating GDA is known to exhibit global linear convergence [2]. Additional motivation for this choice is also rooted in the fact that timescale separation and alternating updates are known to promote convergence and stability in minimax optimization [3, 4]. \n\n- [1] K. Mishchenko, A. Khaled, P. Richtarik (2020) Random Reshuffling: Simple Analysis with Vast Improvements. Neural Information Processing Systems, 2020.\n\n- [2] J. Yang, N. Kiyavash, N. He (2020) Global Convergence and Variance Reduction for a Class of Nonconvex-Nonconcave Minimax Problems. Neural Information Processing Systems, 2020.\n\n- [3] G. Gidel, R. A. Hemmat, M. Pezeshki, R. L. Priol, G. Huang, S. Lacoste-Julien, and I. Mitliagkas (2019) Negative momentum for improved game dynamics. International Conference on Artificial Intelligence and Statistics, 2019.\n\n- [4] T. Lin, C. Jin, and M. I. Jordan (2020). On gradient descent ascent for nonconvex-concave minimax problems. International Conference on Machine Learning, 2020.",
" # Twice Differentiability Requirement in Line 674 (Line 704 in updated version)\n\nAs a consequence of Rademacher's Theorem, the Lipschitz continuity of $\\omega_{\\tau_k(i)}$ implies that $\\omega_{\\tau_k(i)}$ is differentiable almost everywhere (with respect to the Lebesgue measure). Thus, by property of Lebesgue integrals, both $M_{\\tau_k(i)}$ and $J_{\\tau_k(i)}$ are well defined (without any need for assuming twice differentiability of $f_i$). As stated in Lines 705-707 of our (updated) manuscript, this line of reasoning has been elucidated in Theorem C.1.\n\n# Loose Inequalities in Line 719 (Line 760 in updated version)\n\nWe agree that the inequalities here can be tightened. However, in our experience, using tighter inequalities for bounding the sum does not improve the dependence of $\\alpha$ on $l$, $\\mu$, and $n$, and can lead only to constant factor improvements (this is also intuitive since the effective step-size of the epoch level update rule is $n \\alpha$ and full batch GDA needs step sizes of the order $\\mu/l^2$ for convergence). On the contrary, the use of tighter inequalities leads to a considerably more cumbersome presentation. Hence, for the sake of clarity and accessibility, we opt in favor of looser inequalities for our presentation.\n\n# Remaining Suggestions\n\nThank you for pointing these out. We have incorporated your suggestions in the updated version of our paper. ",
" Thank you for your detailed and thoughtful review, and for your positive evaluation of our work. Your insightful suggestions have greatly helped us in improving our manuscript. We hope our response is able to address your concerns. \n\n# Regarding Use of $O$ and $\\tilde{O}$ Notation\n\nWe apologize for the lack of clarity regarding the $O$ notation in some parts of the text and appreciate your helpful feedback on the same. We clarify that we are interested in the behavior of the convergence rate as both $n$ and $K$ grow, and treat parameters such as $\\kappa$, $\\mu$, $\\sigma$ (or $\\sigma^*$) and $|z_0 - z^*|$ as constant factors (hereafter called problem-specific constants). We have updated our manuscript to describe our usage of the $O$ and $\\tilde{O}$ notation in Section 2 Lines 126-128, where we explicitly state that our usage of the $O$ characterizes the dependence of our rates on $n$ and $K$ while suppressing constants such as $\\kappa, \\mu, \\sigma$, etc. (and $\\tilde{O}$ also suppresses logarithmic factors of $n$ and $K$). To ensure consistency, we have suppressed said problem-specific constants in every occurrence of the $O, \\tilde{O}$ and $\\Omega$ notation in our manuscript. Furthermore, to ensure that the article still precisely quantifies the dependence of our convergence rates on $\\kappa, \\mu, \\sigma^2$, etc. (as well as logarithmic factors of $n$ and $K$), the statement of all our theorems now state our obtained convergence rates both with and without the $\\tilde{O}$ notation\n\n# Regarding the use of $K = \\Omega(\\kappa^2)$\n\nThank you for pointing this out. We agree with your statement that, in order to ensure $\\exp (-K/5\\kappa^2) + 1/nK^2 = O(1/nK^2)$, one needs to assume $K \\geq 10 \\kappa^2 \\log (n^{1/2}K)$. Similarly, for GDA/PPM-AS, one needs to assume $K \\geq 10 \\kappa^2 \\log (K)$ to obtain a $\\tilde{O}(1/K^2)$ rate. We have updated our manuscript to reflect this appropriately by replacing every occurrence of $K \\geq \\Omega(\\kappa^c)$ with precise epoch requirements of the form $K \\geq C \\kappa^{a} \\log(n^{b}K)$ for some constants $C, a, b > 0$ (specific values depending on the algorithm under analysis). \nWe also highlight that, in the absence of structural assumptions (like convexity) on the components $f_i$, several prior results on RR/SO for minimization also require $K$ to satisfy an inequality of the form $K \\geq C \\kappa^a \\log(n^{b} K)$ in order to demonstrate that RR/SO converges enjoys a faster convergence rate (in terms of the dependence on $n$ and $K$) than uniform sampling. [1, 2, 3]\n\n- [1] K. Ahn, C. Yun, S. Sra (2020) \"SGD with shuffling: optimal rates without component convexity and large epoch requirements\". Neural Information Processing Systems, 2020.\n\n- [2] K. Mishchenko, A. Khaled, P. Richtarik (2020) \"Random Reshuffling: Simple Analysis with Vast Improvements\". Neural Information Processing Systems, 2020.\n\n\n- [3] D. Nagaraj, P. Jain, P. Netrapalli (2019) \"SGD without Replacement: Sharper Rates for General Smooth Convex Functions\". International Conference on Machine Learning, 2019.",
" Authors analyze variants of stochastic gradient descent ascent (SGDA) methods without replacement to solve minimax first order optimization. Authors claim that, despite all the studies about stochastic methods with replacement, enforcing a pass over the whole set of data at each epoch is a better choice. This has first been shown empirically, and the reason why methods with replacement have been the center of interest is because of the underlying assumptions that ease the theoretical study.\nTherefore, methods without replacement lately received some interest, and first theoretical results arose for minimization problem.\nThis paper proposes a proof of the gain of speed of SGDA's convergence on minimax problems in the \"no replacement\" regime.\n\nThe different studied variations of the algorithm are:\n- the way to order data: reshuffling at each epoch, shuffling once for all, or ordering arbitrarily removing the stochasticity aspect of the algorithm;\n- the way to perform descent and ascent: simultaneously or not.\n\nThere are also 2 different studied setting. In both of them, the objective function (expressed as an average of \"component\" functions) is assumed smooth. The difference between the 2 settings lies in the assumption made on the component functions:\n- first, assuming that all component functions are strongly convex- strongly concave;\n- then, assuming that they verify a 2 sided PL inequality instead.\n\nAdditional assumptions are one of these:\n- bounded variance of gradients at optimum;\n- uniformly bounded variance of gradients.\n\nAuthors claim bounds improvements with respect to the \"with replacement\" case, and tight guarantees in their setting. Strengths:\n- this paper brings new results of convergence and theoretical support of observed phenomenons.\n- Moreover, guarantees are obtained under very weak assumptions and for various variants, even in adversarial setting.\n- Finally, results are well presented and literature review is extensive although I cannot be sure if exhaustive as I am not very familiar with minimax related literature.\n\nWeakness:\nI noticed unclear statements related on O usage. This notation has a precise mathematical meaning, namely $a_n = O(b_n)$ when there exists a constant $C$ such that $a_n \\leq C b_n$ holds for all $n$. Therefore, we have to be clear which variables are varying and which are fixed. In this paper, several variables are introduced and guaranteed bounds depend on $l$, $\\mu$, $n$, $K$, $\\sigma$ (or $\\sigma_*$) and $\\|z_0 - z^* \\|$. 
One has to be clear using the O notation which of these are considered as fixed by the problem and which are varying in the O.\nFor instance, when looking at results of Thm 1 which are summed up in line 79 of the introduction, since $\\mu, l$ disappeared from the second term and $\\|z_0 - z^*\\|$ from the first one, we can conclude that they are considered as constants of the problem, that are fixed at first and only the other variables can vary up to infinity.\nTo be clear, I also assume that $\\sigma$ is fixed when I read some O(1/nK^2) in line 68 for example.\nBut I cannot know from this article whether we are interested by asymptotical behavior of the rate when $K$ tends to infinity, or also when $n$ grows:\n- If $n$ is a fixed constant of the problem, therefore, O(1/nK^2) = O(1/K^2) and all the discussion about having this $n$ or not depending on the adversarial setting or not has to be done on the accurate bound, not using the O notation.\n- If we are interested in the variation of $n$, I don't agree on the fact that $exp(-K / 5\\kappa^2) + \\sigma_*^2/nK^2 = O(1/nK^2)$. Indeed, if we fix $K$ and make $n$ tends to infinity, the RHS tends to 0 but not the LHS, which makes the domination of the LHS by the RHS impossible. Authors specifically asked that $K = \\Omega(\\kappa^2)$, but first, this assumption does not have any impact on the O notation as $\\kappa$ is fixed, and moreover, we can rewrite $exp(-K / 5\\kappa^2) + \\sigma_*^2/nK^2 = 1/K^2 [ K^2 exp(-K / 5\\kappa^2) + \\sigma_*^2/n ]$ and bound the term into brackets. Its maximum is reached for $K = 10 \\kappa^2$ and leads to $100\\kappa^4 exp(-2) + \\sigma_*^2/n $, which is not a $O(1/n)$ if $n$ is not a $O(1)$. This is a $O(1)$ and leads to a $O(1/K^2)$ rate, not $O(1/nK^2)$.\nAnd this can never change with additional assumption of the form $K = \\Omega(\\kappa^{\\{\\text{some exponent}\\}})$. One would need to enforce a relation between $K$ and $n$ in order to expect an improvement. For example, with $K \\geq 5\\kappa^2 (1 + \\varepsilon) log(n)$ for any $\\varepsilon>0$, the statement comes true. - l.75: I think it is worth adding the word \"smooth\" as well. This is present in introduction, but it might be worth repeating it here in boldface as for the other assumptions. (Same in line 89)\n- l.199: typo \"comparison\".\n- l.254: typo \"regulators\".\n- l.674: here, we need to add the assumption of twice differentiability.\n- l.675: typo in index when defining $J_{\\tau_k(i)}$. The one that is defined might be useful for the proximal based method, but not the one studied here.\n- l.709: I suggest an intermediate line of computation here. Especially since the counter $j$ is reused for another sum. After first equality, the product is developed into a sum over some new counter $l$, then we can commute sum over $j$ and sum over $l$, and finally $j$ becomes $t_1$ merging the 2 last sums. But $l$ has been renamed $j$ which can be confusing.\n- l.714: there is a missing $n$ in the LHS.\n- l.719: There are a lot of loose inequalities here. Isn't there any way to take advantage of tighter inequalities? Getting a weaker constraint on $\\alpha$ for instance. It seems that the only gain will be constant. But isn't it interesting optimizing them?\n- l.740: typo: the sum counter must go up to $n$. The only limitations are the assumptions which are clearly stated.",
" The paper shows the convergence of stochastic GDA with random reshuffling (RR), shuffle once (SO) and adversarial shuffling (AS) for strongly-convex-strongly-concave min-max problems. It also extends to the two-sided PL condition with alternating GDA. strength: the paper is easy to follow and well-organized. \n\nweakness: the techniques for RR has already been established for strongly-convex optimization. It is not surprising to extend it to strongly-convex-strongly-concave minmax problems. Questions:\n\n(a) PPM has an implicit step. Extragradient and PPM are always considered closely related. Is it possibly to extend to extragradient with RR? Usually PPM allows large stepsize (arbitrary large in deterministic setting) for convex optimization. Does it allow larger stepsize for PPM here?\n\n(b) What is the technical novelty compared to RR in minimization?\n\nminor comments:\n\n(a) I suggest to include $\\kappa$ for the 1/K^2 terms in the contribution part. \n\n(b) I suggest to use $\\Vert \\cdot \\Vert$ for norm\n NA",
" The authors derive convergence rates for stochastic gradient algorithms for finite-sum minimax optimization *without replacement*. They consider both (1) the smooth and strongly convex-strongly concave setting, and (2) 2-sided Polyak-Lojasiewicz inequality setting. The rates are better than the convergence rates for the with-replacement algorithms and match known lower bounds (up to logarithmic factors) when the epoch number K is large (as a function of the condition number). Specifically, the authors show the following.\n\n* For convex-concave objectives, gradient descent ascent (GDA) as well as the proximal point method (PPM) achieve rates of $\\tilde O(\\sigma_*^2/(nK^2))$ for $K=\\Omega(\\kappa^2)$, where $K$ is the number of epochs, $\\sigma_*^2$ is the gradient variance, $n$ is the number of terms in the sum, and $\\kappa$ is the condition number.\n* For 2-sided PL objectives, similar results hold for $K=\\Omega(\\kappa^3)$.\n* The convergence rates are slowed by a factor of $n$ to $\\tilde O(\\sigma^2/K^2)$ when the data is adversarially shuffled, which is tight (up to log factors).\n\nThe proofs proceed by linearizing the update around the minimax point, then controlling the noise term using the variance of without-replacement sample means.\n The authors present a unified analysis of stochastic gradient algorithms for finite-sum minimax optimization without replacement, that works for many variants (GDA vs. PPM, random reshuffling vs. shuffle once vs. adversarial shuffling). The rates indeed show the benefit of sampling without replacement. The extension to adversarial shuffling is particularly interesting as it quantifies the effect of a data-ordering attack. The paper is clear and well-organized.\n\nThe results are not particularly surprising since they parallel the existing results for optimization, though it remains valuable to work out the results for minimax optimization.\n Given that the same convergence rate is obtained for RR/SO, is there a reason to prefer one over the other?\n\nWhy do 2-sided PL objectives necessitate a different algorithm from convex-concave objectives?\n\nThe axes in Figure 3 are hard to read.\n Yes.",
" This paper considers finite sum minimax optimization problems. The authors propose to use gradient descent ascent/proximal point method with RR and AS data sampling schemes. They obtain iteration complexity results for both strongly convex-strongly concave or 2PL functions, matching the state-of-the-art ones for minimization problems. -Strength\nThis paper provides (nearly) optimal expected iteration complexity bounds for both strongly convex-strongly concave or 2PL functions.\n\n-Weakness\nSee comments below. I have the following major concerns\n\n1. By checking the proof of Theorem 1, I can understand that the main difficulty (or let's say the main difference to the existing analysis of RR) lies in rewriting the GDA-RR update as a GDA form. The authors may highlight this difficulty clearly in the main context to show the technical difficulty as other parts are really similar to the existing RR analysis. \n\n2. Lemma B.3 is quite important for establishing the $O(1/nK^2)$ result. It fully utilizes the randomness of random shuffling in terms of expectation. However, such a lemma (as I know) was already used in [1, Lemma 1]. I would highly recommend the authors clearly state this reference in their Lemma B.3. Citing this existing lemma rather than reinventing it is also welcome. \n\n3. Is it possible to remove the bounded variance assumption in 2PL case? It seems that this assumption is removed for random shuffling algorithm in KL inequality setting (more general than PL as I know) in a recent paper [5]. \n\n4. In 2PL setting, is it possible to use dist$(z,Z^*)$ as the Lyapunov function, where $Z^*$ is the set of saddle points? If not, what is the underlying difficulty? \n\n5. The authors should give a more comprehensive literature review of the RR algorithm. Some immediately related references: 1) The very pioneering papers [2]-[3]. 2) [4], which studies RR with momentum. 3) [5], which studies RR in KL inequality setting. 4) [6], which shows the limitation of RR under bad conditioning. 5) [7], which applies to federated learning setting. More related works are welcome. It is important to put this paper in the correct position in the literature. Potential comparisons between this paper and the existing literature will also be appreciated. \n\n[1] Mishchenko, K., Khaled, A., & Richtárik, P. (2020). Random reshuffling: Simple analysis with vast improvements. Advances in Neural Information Processing Systems, 33, 17309-17320.\n\n[2] Recht, B., & Ré, C. (2012, June). Toward a noncommutative arithmetic-geometric mean inequality: Conjectures, case-studies, and consequences. In Conference on Learning Theory (pp. 11-1). JMLR Workshop and Conference Proceedings.\n\n[3] Gürbüzbalaban, M., Ozdaglar, A., & Parrilo, P. (2015). Why Random Reshuffling Beats Stochastic Gradient Descent. arXiv preprint arXiv:1510.08560.\n\n[4] Tran, T. H., Nguyen, L. M., & Tran-Dinh, Q. (2020). SMG: A Shuffling Gradient-Based Method with Momentum. arXiv preprint arXiv:2011.11884.\n\n[5] Li, X., Milzarek, A., & Qiu, J. (2021). Convergence of random reshuffling under the kurdyka-{\\L} ojasiewicz inequality. arXiv preprint arXiv:2110.04926.\n\n[6] Safran, I., & Shamir, O. (2021). Random shuffling beats SGD only after many epochs on ill-conditioned problems. Advances in Neural Information Processing Systems, 34, 15151-15161.\n\n[7] Mishchenko, K., Khaled, A., & Richtárik, P. (2021). Proximal and federated random reshuffling. arXiv preprint arXiv:2102.06704. Yes."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
"dC3jjldRQ2XJ",
"gkdWiXksUH",
"4iZ8x1lGYT",
"L-LWu-5bkoc",
"EKoV0tupZ-N",
"EKoV0tupZ-N",
"v6Eaewj9L15",
"v6Eaewj9L15",
"hfFZuMurO_j",
"RWSHDIxuoMA",
"RWSHDIxuoMA",
"nips_2022_CTqjKUAyRBt",
"nips_2022_CTqjKUAyRBt",
"nips_2022_CTqjKUAyRBt",
"nips_2022_CTqjKUAyRBt"
] |
nips_2022_AQgmyyEWg8 | Beyond spectral gap: the role of the topology in decentralized learning | In data-parallel optimization of machine learning models, workers collaborate to improve their estimates of the model: more accurate gradients allow them to use larger learning rates and optimize faster. We consider the setting in which all workers sample from the same dataset, and communicate over a sparse graph (decentralized). In this setting, current theory fails to capture important aspects of real-world behavior. First, the ‘spectral gap’ of the communication graph is not predictive of its empirical performance in (deep) learning. Second, current theory does not explain that collaboration enables larger learning rates than training alone. In fact, it prescribes smaller learning rates, which further decrease as graphs become larger, failing to explain convergence in infinite graphs. This paper aims to paint an accurate picture of sparsely-connected distributed optimization when workers share the same data distribution. We quantify how the graph topology influences convergence in a quadratic toy problem and provide theoretical results for general smooth and (strongly) convex objectives. Our theory matches empirical observations in deep learning, and accurately describes the relative merits of different graph topologies. | Accept | The paper studies decentralized optimization and considers the setting where all machines work on data that follow the same distribution. Most of the reviewers think the paper is interesting. I recommend acceptance. | val | [
"zOaPDkl66iZ",
"NNhaZrn0RR",
"k2QUbbDHgcy",
"dzHnMoIFXGP",
"vd1W4bueNg",
"UBcAFuXEM9",
"vUrouopdKk",
"isgveAfLAA6",
"58ZUzTsND67"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your detailed responses, which have answered most of my questions.",
" Thank you for your quick reply.\nWe agree with all your concrete suggestions on clarity and typos, and are very grateful for your in-depth review and contributions to the quality of the paper.\n\nFor the initial rebuttal, we had already\n- corrected the typo on line 193,\n- corrected all typos you found in the appendix.\n\nWe just uploaded a new revision, in which we\n- removed the symbol $\\bar P$,\n- clearly defined $\\zeta$ from the start,\n- reorganized Section 3 so the definition of $n_W$ appears earlier,\n- provided more intuition for $\\gamma$,\n- added a small discussion that compares the learning rates in Theorem 1 to standard learning rates,\n- incorporated feedback from the other reviewers.\n\nWe would be happy to integrate any other concrete suggestions on clarity in the camera ready version. ",
" I appreciate the detailed answers including the additional experiments on the choice of $\\gamma$ (Figure 15, 16). Now it is more clear that the trends observed in the paper still exist to some extent even when different $\\gamma$ is used. While I am generally happy with the paper and leaning to acceptance, I am very concerned with the readability of the paper.\n\nI don't know if a revision is allowed during the discussion period, but it would be great if the authors can incorporate the points I raised in the review, as well as some parts of their own answers. I believe this will improve the readability a lot.",
" Dear Reviewer,\n\nThank you for your very thorough review. Your suggestions on clarity and typos are much appreciated, and we will incorporate them into the revised paper. We hope to answer your questions below:\n\n__Intuition for $\\gamma$.__ The decay parameter $\\gamma$ modulates the sensitivity to communication delays. As $\\gamma$ approaches 1, it does not matter how old an update is. Even updates that travel many hops get approximately the same weight as updates from a node itself. This is fitting when the learning rate is small compared to the smoothness of the function. On the other end of the spectrum, $\\gamma=0$ indicates that delayed updates are useless, and only the updates from directly connected neighbors are beneficial. A small $\\gamma$ is applicable when the learning rate is very large.\n\n__Learning rate in Theorem 1.__ Standard D-SGD rates [8] rely on step-sizes of order $O(\\min(1/T, 1 - \\lambda_2(W))$, which are much smaller than $n_{W(\\gamma)} / \\zeta$ since the number of iterations $T$ is generally much larger than $\\zeta$, and the spectral gap of the network might be very small. \n\nWith these small step-sizes, [8] obtain an effective batch size of $n$ (and so divide the residual variance by a corresponding factor). \nInstead, we show improvement for any (large) constant step-size, but the effective batch size is “only” $n_{W(\\gamma)}$, which is natural since linear speedup in $n$ is not achievable for large step-sizes and sparse networks. Corollary II also provides a simple comparison: D-SGD is comparable to mini-batch SGD, where the effective batch size depends on the connectivity of the network and the learning rate.\n\nThank you for pointing this out, and we will try to make it clearer in a revision of this paper. \n\n__Choice of $\\gamma$.__ In Figure 5, we optimize $\\gamma$ independently for each topology, minimizing the Mean Squared Error between the normalized covariance matrix measured from checkpoints of Cifar-10 training and the covariance in a random walk with the decay parameter $\\gamma$. The new Figure 15 (Appendix) shows how Figure 5 would change if you used a $\\gamma$ that is either much too low, or too high.\n\nIn Figure 6, we choose a value of $\\gamma$ (shared between all topologies) that yields a good correspondence between the performance of fully connected topologies (with 2, 4, 8, 16 and 32 workers) and the other topologies. We opt for sharing a single $\\gamma$ here, to test whether this metric could have predictive power for the quality of graphs. The new Figure 16 (Appendix) shows how the figure changes if you use a value of $\\gamma$ that is either much too low, or much too high. \n\n__Toy problem and Assumption B.__ We stated Assumption B (IV) in this form for simplicity, but it can be relaxed by asking directly that $\\mathbb{E} \\| \\nabla f_{\\xi,i}(x^{(t)}) - \\nabla f_{\\xi,i}(x^\\star)\\|^2 \\leq 2 \\zeta D_f(x^\\star, x^{(t)})$ (see Appendix D.2, between Equations (22) and (23), which can also be implied by assuming that each $f_\\xi$ is $\\zeta_\\xi$-smooth, with $\\mathbb{E} \\left[\\zeta_\\xi D_{f_\\xi}(x^\\star, x^{(t)})\\right] \\leq \\zeta D_f(x^\\star, x^{(t)})$. These weaker forms would be satisfied by the toy problem of Section 3. \n",
" Dear Reviewer,\n\nThank you for your review and valuable feedback. We answer your questions below:\n\n__Randomized D-SGD.__ We can obtain similar (slightly better) results for the deterministic alternating update rule compared to randomized D-SGD. The new Appendix D.4 outlines how to obtain these results. Empirically, we confirmed that the typical alternating algorithm performs consistently slightly better than the randomized version, but that the relative performance of topologies does not change between the algorithms.\n\n__Non-convex theory.__ While our experiments show that the convex theory is already useful for deep learning, we agree that the non-convex setting is interesting. Although the core ideas of our analysis (semi-local analysis with matrix $M$) would remain the same, some details would change. In particular: \n\n- The measure of suboptimality would switch from $\\|x_t - x_\\star\\|^2_M$ to $\\|\\nabla f(x_t)\\|^2_M$. \n- The definition of noise should be adapted from assuming that each stochastic function is $\\zeta$-smooth to assuming a bound of the following type for all $f_\\xi$ : $\\| \\nabla f_\\xi (x_t)\\|^2 \\leq (\\zeta / L) \\| \\nabla f(x_t)\\|^2 + \\sigma^2$, where $f$ is $L$-smooth. \n\nWe conjecture that, under these assumptions, D-SGD is comparable to mini-batch SGD with some batch-size that depends on the connectivity of the network and the step-size. Yet, completely adapting the full proof remains non-trivial. One key difficulty is that performing gossip steps might increase the function value ($f(Wx) \\geq f(x)$), and correctly bounding the gap might require new assumptions (such as Lipschitzness of $f$, or some specific initialization).\n\n__Batch size.__ The baseline you request is already included in our results. In the setting we study, the *disconnected* topology with batch size $b \\times n$ (per worker) is equivalent to the *fully connected* topology with batch size $b$ per worker. As you said, in the i.i.d. setting, the benefit of averaging is variance reduction. Lower variance implies a larger ‘effective batch size’. The paper offers a way to reason about *how much* each topology can reduce the variance compared to training alone, while keeping the local computation cost fixed to a batch size of $b$. The fully-connected case you described (effective batch size $b \\times n$) corresponds to the best-achievable variance reduction for a given batch size $b$.\n\n__Related work.__ We believe our related work section covers the most relevant prior work. Since the submission of this paper, we were made aware of the following works: \n- [A], which shows that D-SGD adds some implicit regularization (a different benefit than the large learning rate), and attains optimal statistical rates. Yet, their optimization error bound is looser than ours, and in particular relies on the spectral gap. \n- [B] also show that larger (constant) step-sizes can be used in decentralized settings, but their analysis focuses on Decentralized Kernel Regression, does not cover stochastic gradient updates, and relies on statistical concentration of local objectives rather than analysis on local neighborhoods.\n\nIf you have particular papers or areas in mind that we missed, we would be very grateful if you could share a reference, so we can discuss it further.\n\n---\n\n[A] Richards D. Graph-dependent implicit regularisation for distributed stochastic subgradient descent. Journal of Machine Learning Research. 2020.\n\n[B] Richards D, Rebeschini P. 
Optimal statistical rates for decentralised non-parametric regression with linear speed-up. Advances in Neural Information Processing Systems. 2019;32.\n",
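The "effective batch size" equivalence invoked in the batch-size answer above (n i.i.d. workers with local batch b, exactly averaged, match one worker with batch n·b) can be checked in a few lines; the unit-variance gradient-noise model below is an assumption made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n_workers, b, trials = 16, 8, 100_000

# each stochastic gradient is the true gradient (here 0) plus unit-variance noise
g_workers = rng.standard_normal((trials, n_workers, b)).mean(axis=2)   # per-worker batch-b grads
fully_connected = g_workers.mean(axis=1)                 # exact averaging over the n workers
big_batch = rng.standard_normal((trials, n_workers * b)).mean(axis=1)  # one worker, batch n*b

print("var, n workers averaged:", fully_connected.var())
print("var, single batch n*b  :", big_batch.var())
print("theory 1/(n*b)         :", 1.0 / (n_workers * b))
```

Sparser topologies then sit between the disconnected and fully connected extremes, which is exactly what the effective number of neighbors quantifies.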
" Dear Reviewer,\n\nThank you for your review and your insightful comments. We answer your questions below:\n\n__Randomized v.s. alternating update rule.__ We can obtain similar (slightly better) results for the alternating update rule compared to randomized D-SGD. The new Appendix D.4 outlines how to obtain these results. Empirically, we confirmed that the typical alternating algorithm performs consistently slightly better than the randomized version, but that the relative performance of topologies does not change between the algorithms.\n\n__Matrix M: Eqs. (5) and (3).__ Thank you for this suggestion. Yes, these are equivalent, and we will clarify this connection.\n\n__Definition I (effective num. neighbors).__ The current Definition A in the main paper is an informal (“in words”) equivalent of Definition I. But we agree that the main text could benefit from the more explicit form in Definition I.\n\n__Toy function.__ We chose the isotropic quadratic toy problem because we can easily derive exact linear rates for this problem, and establish an obvious link with the notion of “effective number of neighbors”. We believe that the simplicity of this toy problem is a useful pedagogical complement to the more general theory in Section 4.\n\n__Heterogeneous data__. With heterogeneous data, we observe two regimes: in the beginning of training, when the worker's distant optima are in a similar direction, everything behaves identical to the homogeneous setting. In this regime, our insights on optimal learning rates and the quality of communication graphs are applicable. Heterogeneity seems to only play a role later during the training, when it leads to conflicting gradient directions. This behavior is illustrated the new Figure 14 (appendix), where we run D-SGD on our isotropic quadratic toy problem, but where the worker's optima are removed from zero by a zero-mean normal distribution with varying standard deviation.\n",
" This paper deeply investigated the role of topology in decentralized learning. Instead of the commonly adopted spectral gap in the literature, it proposed the \"effective number of neighbors\" concept and further revealed that the benefits of \"good\" topology are enabling larger learning rates so that it speeds up optimizations. The paper provides a sound proof of the convex case and observed similar behavior in deep learning. Originality: The role of the topology can influence the step-size choice in decentralized learning is not a surprisingly new observation, but this is the first time I saw a well-quantified way to describe it and show it in rigorous proof.\nQuality & Clarity: The presentation is clear and well organized. I can follow the logic and proof easily. The proof looks sound to me.\nSignificance: The work is a theoretical explanation work, which can be quite useful. However, the conclusion and the proof technique seem to rely on several strict assumptions. Not sure how easy to extend this framework to more general or complicated settings.\n 1. The D-SGD update adopted the uncommonly used probability update rule, i.e. execute the local gradient update with some probability and gossip communication with the rest case. It makes the analysis simplified since the expectation of Lyapunov becomes the convex combination of two steps independently instead of coupling them together. What about the case that one local gradient always followed by one gossip communication? How to modify the proof framework to cover this case?\n\n2. Can we rewrite $M = (1-r)\\sum_{k=1}^\\infty\\gamma^{k-1}W^{2k}$ into the form $M = (1-\\gamma) W^{2}(I-\\gamma W^2)^{-1}$ (assuming the inverse exists), since it looks closer to toy example Eq. (3). Also, personally feel the Definition I (effective number of neighbors) in the appendix is better moved to the main context so that the reader is easier to understand what the intuition of effective neighbors means.\n\n3. The toy example use an isotropic quadratic function $\\mathbb{E} \\frac{1}{2}(d^Tx)^2 = \\frac{1}{2}\\\\|x\\\\|^2$, which has a special property that the gradient noise at the optimal point $w^\\star$ vanish so that the linear speedup is established. It is more common to use Least-Mean-Square as the toy example, which is closer to the later theorem form. The nice detailed analysis is based on homogenous data and all workers shared the same loss function. I don't think heterogonous data or different minimizer cases should be covered by the theorem in this paper. But it will be nice to use some experiments to show the sensitivity of the conclusion when that assumption is not valid.",
" In this paper, the authors showed that decentralized learning allows a larger learning rate compared to centralized learning, which can accelerate the training process. The effective number of neighbors is defined in the paper which measures the ratio of the asymptotic variance of the iterations. As a main result, the authors provided the theorem about the larges learning rate that gives the best convergence guarantees based on the local neighborhood size. The authors also provided the results of the experiment based on the CIFAR10 dataset. Strengths:\n(1) Equations are written in a clear and reader-friendly way.\n(2) Visualizations are useful to convey the authors' main idea.\n(3) Paper is well-written and easy to follow.\n\nWeaknesses:\n(1) The related work section should discuss more related works.\n(2) Although the theory of this paper is solid, it would be better to give some theoretical direction for the non-convex problems. Since data-parallel optimization is widely used in deep learning regimes rather than strongly convex problems. (1) Same as the second point in weaknesses, is there any theoretical direction for the non-convex problems. It's maybe hard to get the same results as the convex case, but there is possible to get some upper bound or lower bound for the learning rate.\n(2) In the theory part, the authors use randomized D-SGD defined in equations (4). However, in practice, the standard D-SGD is used widely, which will average the parameters of neighbors and update the parameters every step without probability. It would be better to extend the current theory to the standard D-SGD.\n(3) For disconnected cases, it is more reasonable to look at batch size = batch size of decentralized case * num of workers rather than use the same batch size as the decentralized case. In the current setting, the larger stepsize mainly comes from a larger batch size compared to the decentralized case. If we use batch size * num of workers, would these results still hold? Yes, the authors have adequately addressed the limitations and potential negative societal impact of their work.",
" The paper analyzes the D-SGD algorithm where workers (nodes) in a graph collaboratively perform stochastic gradient descent of an objective function. While the existing convergence rates of D-SGD mostly depend on the spectral gap of the adjacency matrix (assuming small learning rate), this paper suggests an alternative analysis based on the \"effective number of neighbors,\" which is defined through specific random walks on the communication graph.\n\nIn Section 3, the authors define the notion of effective number of neighbors $n_W(\\gamma)$, which is roughly the variance reduction rate of random walks (with decay $\\gamma$) on the graph $W$ compared to $W = I$. They derive a convergence rate depending on $n_W$, not the spectral gap of $W$, on a quadratic toy problem. They also extend this result to strongly convex functions. This analysis can explain the convergence of D-SGD even when the spectral gap is zero.\n\nFinally, they verify their theory by training VGG-11 on CIFAR-10. They argue that $n_W$ is better aligned with empirical training performance than the spectral gap. Strengths\n- The paper captured a weakness of current theories on D-SGD and provided an improved explanation on its convergence. Now, their theory can explain the convergence when the spectral gap is zero and also better align with empirical performance of neural networks trained with D-SGD.\n- The paper introduced an interesting random walk model on a graph as a proxy for the training dynamics of D-SGD. Based on this, they define a novel concept of $n_W$, which is a key in their analysis.\n- The paper performed exhaustive experiments on various types of graph (including time-varying topology) to confirm the theory.\n\nWeaknesses\n- I believe the writing of the paper can be much improved. There were notations (e.g. $\\zeta$, $\\bar{\\mathbf{P}}$) that are used before defined, so I had to go between the main text and appendix back and forth. \n- In line 149, the paper is giving the rate in terms of $n_W$, which is not yet defined. I think it is logically better to move the result after Section 3.2 (after $n_W$ is defined).\n- There were many typos and some of them slowed down my reading significantly.\n\nTypos\n- line 193, page 6: $\\zeta = d + 2$ instead of $\\zeta = d + 1$\n- displayed equation below line 471, page 17: $\\gamma^{\\frac{k - 1}{2}}$ instead of $\\gamma^{\\frac{k}{2}}$ (think about $\\mathbf{z}^{(1)}$)\n- displayed equation below line 473, page 17: $\\gamma^{k - 1}$ instead of $\\gamma^k$ (same reason)\n- line 476, page 17: assumptions of \"Lemma\" 2\n- proof of Lemma 4, page 18: Missing subscript of $\\mathbf{b}^{(t)}$. Result is correct, but the proof has a wrong scaling. The second line leads to $\\gamma (1 - r) \\mathbf{b}_{ni + j}^{(t)} + (1 - r)^t$ -- should be fixed.\n- line 501, page 18: the inequality is in the wrong direction. Should be $\\mathbf{b}^{(t + 1)} \\geq ...$.\n- line 521, page 19: $\\nabla h(x)$\n- Figure 13, page 27: the figure and caption say different $\\gamma$ - Could you provide a high-level motivation for introducing the decay $\\gamma$? I understand a parameter is technically needed to fit the random walk model, but is there a good interpretation of it?\n- Is it possible to compare the maximum learning rate in Theorem I with learning rate in other literatures?\n- How exactly are the $\\gamma$'s in the experiment chosen? From Figure 2, it seems that the ordering of $n_W$ of different graphs depends a lot on $\\gamma$. 
Can you also show what happens to Figure 5&6 if we use different $\\gamma$? Are those results still true?\n- Is it possible to relax Assumption B (IV)? The toy problem of Section 3 does not satisfy it since the spectrum of $\\mathbf{d} \\mathbf{d}^\\top$ can be arbitrary for $\\mathbf{d} \\sim \\mathcal{N}^d(0, 1)$. Maybe some sort of average case definition is needed (just like Definition C)?\n\n\n The authors have well addressed the limitations of their work."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4
] | [
"vd1W4bueNg",
"k2QUbbDHgcy",
"dzHnMoIFXGP",
"58ZUzTsND67",
"isgveAfLAA6",
"vUrouopdKk",
"nips_2022_AQgmyyEWg8",
"nips_2022_AQgmyyEWg8",
"nips_2022_AQgmyyEWg8"
] |
nips_2022_F02H1zNl213 | Are GANs overkill for NLP? | This work offers a novel theoretical perspective on why, despite numerous attempts, adversarial approaches to generative modeling (e.g., GANs) have not been as successful for certain generation tasks, particularly sequential tasks such as Natural Language Generation, as they have in others, such as Computer Vision. In particular, on sequential data such as text, maximum-likelihood approaches are significantly more utilized than GANs. We show that, while it may seem that maximizing likelihood is inherently different than minimizing distinguishability, this distinction is largely an artifact of the limited representational capacity of the model family, for a wide class of adversarial objectives. We give a theoretical model in which minimizing KL-divergence (i.e., maximizing likelihood) is a more efficient approach to effectively minimizing the same distinguishability criteria that adversarial models seek to optimize. Reductions show that minimizing distinguishability can be seen as simply boosting likelihood for certain families of models including n-gram models and neural networks with a softmax output layer. To achieve a full polynomial-time reduction, a novel next-token distinguishability model is considered. Some preliminary empirical evidence is also provided to substantiate our theoretical analyses. | Accept | In the context of text generation, the paper gives a theoretical argument that GAN objectives are equivalent to maximum-likelihood training when the generator and discriminator families are 'paired'. Reviewers generally felt that the perspective was interesting (broM, jtPN, UUT1) and the theory was insightful (jtPN, UUT1). Reviewer vzAW raises the concern that the original draft of this paper overclaimed throughout, but I feel this has been addressed well enough in a revision. Reviewers broM and vzAW felt empirical validation was lacking, but since the paper's focus is clearly theoretical I don't see this as preventing acceptance. Overall this paper is borderline but I feel that it's interesting enough to merit acceptance despite flaws. | train | [
"Mq3kQWRq1d",
"sOzTGMS-38c",
"SlAYeZml-I",
"uxAtqMWEfrf",
"p04cmF4tDf",
"TGFcbjg0l35",
"hZTy743skt",
"bF2_esz-SCQ",
"MHUL-cGysRU",
"53f5njtiCFn",
"SweKBcLxj-2",
"GAMh_ylN_b",
"GsjarHqw5NU",
"JRddaGGnsW",
"SlWLBI9qdbG",
"A6XcqAAFTc",
"c3oaoTwpKEh",
"vIAu3eGC2A6",
"7qjsqB2K95k",
"9wCI82UNEFb"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response! Our purpose is to attract greater attention of the community to an important area of research, and we believe the conceptual and mathematical contributions of this paper would be broadly useful in the context of generative models. We've edited the paper at several places including the title (changes indicated in orange) - please see the revised version - to make sure that your reservations are accounted for. We hope these changes address your major concerns, and the same is reflected in your stronger support during the reviewer-area chair discussions. ",
" Thanks for clarifying some of the points! \n\n_“…we have decided to rephrase/soften some of these statements, including the title, to make it clear that GANs are often (i.e., in the circumstances described above) overkill, but there still might exist specific situations in which they might work just fine.”_ — Have these changes been made in the main document? How is the title going to change?\n\n_“…goal is explanatory rather prescriptive. That being said, we still think it is actionable, in the sense that it might prevent further futile efforts to `make GANs work for NLP’. / when NLL is directly efficiently optimizable (as is the case for NLP and other sequential data), using distinguisher/GAN-based methods instead is futile.”_ — This is actually one of the reasons I’m hesitant to promote this work. It’s not clear to me at all from the specific discussion in the paper that anyone ought to feel dissuaded from trying to make GANs work for NLP just yet. It’s not obvious to me that reduction arguments of the type used in the submission ought to have significant bearing in deep learning practice, given the many interacting components behind getting something to work. Different formulations of the same asymptotic objective might cause one to take very different roads, and when the landscape is treacherous, one can end up in very different places. At the end of the day, empiricism ought to determine best practice, unless a very precise technical reason is identified behind the _infeasibility_ of a class of methods.\n\nMy view after reading the submission was that it needed a do-over in several places, as well as a change in title, to avoid the potential risk of sparking an inclement GAN-winter in NLP, where perhaps we are only waiting for some major technical breakthrough. It appears from the rebuttal that some changes have been made, but it’s not clear to me where and how. I’m still of the view that this submission ought to be reviewed from scratch before being disseminated.",
" Thanks, again, for your constructive feedback! Acting on your comments and questions has helped us clarify and reinforce some strengths of this work along multiple dimensions, including, \n\n(1) Technical: see, e.g., the mathematical justification on generality of the proposed framework, including how these analyses apply to methods such as WGAN-GP; and\n\n(2) Empirical: see, e.g., the general comment above where we show 'boosting' underpinning Lemma 6 does indeed improve likelihood (reduces NLL) as stated, and that the empirical difference is indeed lower-bounded by the gap predicted by theory.\n\nTherefore, we would be grateful if the same could be acknowledged, and translated into a revised score. \n \n",
" Many thanks for your engagement in the discussion! We're grateful for your support, and will include your suggestions in the final version. ",
" Would you consider revising your score in light of our substantial modifications, including experiments we have added (see the general comments above) ?",
" Would you consider revising your score in light of our modifications, especially the experiments we have done as you have suggested (see the general comments above) ?",
" Thank you for your detailed response. The explainations can very well enhance the paper content and address concerns. I hope the discussions can be reflected in your revised paper version, and also improve the writing and organization of the paper. Hopefully after that you can make your points unambiguously clear and eaiser for readers to understand. I revised my score and changed it from 5 to 6.",
" We thank the reviewers for their feedback and comments. We would like to address some common points raised by multiple reviewers. \n\n- **On the applicability of Algorithm 1**. Multiple reviwers asked why there was no empirical validation of Algorithm 1 or the Lemmas. First, we would like to emphasize that the Algorithm we propose is not intended to be a practical one. It is an (polynomial-time) algorithmic reduction proving two problems are equivalent by showing how the solution of one can be used to solve the other. This is a common proof technique in the theory of computer science literature, and the resulting algorithms are rarely implemented -- if they are implementable at all. In this paper, the algorithmic reduction is a theoretical tool too, and we neither advise its use in practice nor we claim it as a practical contribution. In fact, its use would go against the main message of this work: that when MLE training is practical and efficient, GAN training is superfluous. That being said, based on the requests of the reviewers we have implemented a simple initial empirical validation of the reduction underpinning the algorithm (Lemma 3), which we discuss in the next point. \n\n- **On the choice of title: overkill $\\neq$ useless**. We would like to emphasize that our choice of work 'overkill' for the title does not imply a message that GANs never work for NLP or should never be used, but rather that they might be unnecessarily complicated (at least, compared to MLE, which is arguably much more straightforward and well-understood). \n\n\n- **An empirical validation of the reductions**. We have implemented and experimented with an empirical validation of Lemma 3 in the following realistic (albeit simplified) setting. We train a pair of text generator and discriminator using a [publicly available implementation ](https://github.com/suragnair/seqGAN) of SeqGAN (Yu et al. AAAI 2017). The generator is pretrained by negative log-likelihood (NLL) minimization. During the adversarial phase of training, the generator is trained using policy gradient. After training, we compute the discriminators' generalized training advantage (Eq. (4), using finite-sample empirical approximations), and then create a new generator whose next-word logit predictions are modified according to Lemma 3. We compare the NLL of the original and 'boosted' generators across training epochs, and compare the difference between these to the theoretical lower bound of Lemma 3. The results below correspond to the default repo settings in terms of network capacities and language configuration (`max.~seq.~len.~=20`, `vocab=5000`). They show that the 'boosting' underpinning Lemma 6 does indeed improve likelihood (reduces NLL) as stated, and that the empirical difference is indeed lower-bounded by the gap predicted by theory. With the benefit of additional time, we intend to run these experiments on various training configurations to get a better understanding of the empirical behavior of this reduction for varying model capacities and initial performance (esp.~weaker generators with lower initial NLL).\n\n| Discrim. Epoch | Gen. Epoch | Gen. NLL | 'Boosted' Gen. 
NLL | Improvement | Improvement Predicted by Bound |\n|------------:|------------:|----------:|-----------:|-----------:|--------------:|\n| 0 | 0 | 91.2756 | 91.1459 | 0.129694 | 0.00842083 |\n| 1 | 1 | 91.2995 | 91.182 | 0.117515 | 0.00733052 |\n| 2 | 2 | 91.3425 | 91.2238 | 0.118727 | 0.00834904 |\n| 3 | 3 | 91.4 | 91.2872 | 0.112812 | 0.00683975 |\n| 4 | 4 | 91.4663 | 91.3402 | 0.126077 | 0.00961482 |\n| 5 | 5 | 91.5386 | 91.4103 | 0.12824 | 0.00769221 |\n| 6 | 6 | 91.6125 | 91.4928 | 0.11971 | 0.00744601 |\n| 7 | 7 | 91.6866 | 91.5565 | 0.130122 | 0.00836084 |\n| 8 | 8 | 91.7617 | 91.6169 | 0.144865 | 0.00993902 |\n| 9 | 9 | 91.8349 | 91.7039 | 0.131023 | 0.00818018 |\n| 10 | 10 | 91.9048 | 91.7848 | 0.11997 | 0.00730441 |\n| 11 | 11 | 91.9714 | 91.8493 | 0.122075 | 0.00844588 |\n| 12 | 12 | 92.0345 | 91.8981 | 0.136373 | 0.00921325 |\n| 13 | 13 | 92.0937 | 91.9648 | 0.128912 | 0.00832491 |\n| 14 | 14 | 92.1491 | 92.0212 | 0.127937 | 0.00842633 |\n| 15 | 15 | 92.2005 | 92.0755 | 0.125091 | 0.0079117 |\n| 16 | 16 | 92.2481 | 92.1153 | 0.132838 | 0.0102334 |\n| 17 | 17 | 92.2919 | 92.1581 | 0.133824 | 0.00773296 |\n| 18 | 18 | 92.3321 | 92.2087 | 0.123441 | 0.00720734 |\n| 19 | 19 | 92.3676 | 92.237 | 0.130602 | 0.00950763 |",
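Independently of the SeqGAN experiment above, the mechanism behind the reduction can be illustrated in isolation: shifting a generator's next-token logits in the direction of a discriminator's advantage lowers NLL. The sketch below is a toy of my own construction, with an idealized oracle advantage $\log p - \log q$ and an arbitrary step size of 0.5; the exact reweighting and constants in Lemma 3 differ.

```python
import numpy as np

rng = np.random.default_rng(3)
V = 50                                      # toy vocabulary size (assumption)
p = rng.dirichlet(np.ones(V))               # "true" next-token distribution (assumption)
logits = np.log(rng.dirichlet(np.ones(V)))  # generator's next-token logits (assumption)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

q = softmax(logits)
adv = np.log(p) - np.log(q)                 # idealized per-token discriminator signal
q_boost = softmax(logits + 0.5 * adv)       # shift logits toward tokens the signal favors

def nll(dist):                              # expected NLL under the true distribution p
    return -(p * np.log(dist)).sum()

print("generator NLL:", nll(q))
print("boosted   NLL:", nll(q_boost))       # strictly smaller whenever q != p
```

Here the boosted distribution is the normalized geometric mean of $q$ and $p$, and convexity of the log-partition function guarantees the NLL drop whenever $q \neq p$.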
" \n- \"* How does the recommendation in the paper to perform NLL optimization relate to observations in (O\\\"{o}rd and Dambre, 2015) and (Theis et al., 2016.)?*\" The recommendation in this paper agrees with (Theis et al) 'Our results demonstrate that for generative models there is no one-fits-all loss function but a proper assessment of model performance is only possible in the\nthe context of an application', and (van den O\\\"{o}rd and Dambre) 'In this work we introduced new ways of modeling images\nwith deep GMMs by using locally-connected transformations. These transformations efficiently exploit the fact that correlations in images are stronger between pixels that are closer to each other. This allows much faster training and\nless overfitting'. Namely, this paper establishes mathematical equivalence between maximizing likelihood and minimizing distinguishability for a wide class of NLP settings, and thus recommends NLL \\textbf{only} for applications when it is mathematically equivalent but more efficient and direct to compute compared to a broad class of adversarial methods. In particular, we do not make any claims about settings where the two approaches clearly optimize different objectives. Thank you for bringing our attention to these works. We will position the implications of the proposed work appropriately with respect to these important references in the revised version. \n- \"*Can the method be modified to accommodate unbounded critics?*\". The conclusions from this work apply to any training method based on distinguishability, which includes IPMs, Wasserstein distances, TVD, etc, and is not limited to methods without outputs in $[0,1]$. Moreover, variants such as WGAN-GP can also be investigated within the proposed framework as described earlier. We will include these references, and add a discussion on this.\n \n\nThank you for your constructive feedback. Hope our response has sufficiently addressed your concerns and the same would be reflected in your revised scores.\n ",
" - \"*this submission does not really identify any such issue that clearly informs us why GAN training might be 'overkill for NLP'*\". We acknowledge your point, and would like to point out in our defence that several complex factors are at interplay behind this phenomenon. The sequential nature of text data affords efficient estimation of important quantities (e.g., the partition function), and this bestows MLE based methods, in effect, with the ability to distiguish the conditional next-token predictions rather than having to distinguish full sentences for the class of problems we discussed here. At the same time, MLE based autoregressive models allow for efficient and stable training (and sampling) in this setting. This is in contrast, e.g., to typical vision settings where the MLE based autoregressive models end up being considerably less efficient due to high dimensionality of the input images. We reiterate that there isn't a single issue that accounts for GANs being \\textit{overkill} (i.e., unnecessarily complicated --- see our explanation on the choice of this word above), but rather a combination of factors, which we discuss in Section 1: GANs are conceptually and implementation-wise much more involved than MLE training, and much more prone to unstable training, a price that might be worth paying when the alternatives (like MLE) are unsuccessful - e.g., MLE typically performs poorly in settings where model misspecification is a major issue - which is certainly not the case for the NLP problems we focused on in this work. It is our hope that this paper fosters further exciting work toward unraveling the effect of such issues.\n- \"*Once again, recall how the dominant narrative of “GANs are best for image generation, and nothing else comes close” has changed significantly in recent times to “flow, diffusion, and autoregressive models are starting to look really really good!”*\". Thanks for underscoring this point. Certainly, the narrative evolves with evidence; and particularly in the context of NLP applications, there is an overwhelming empirical evidence that GANs have not been nearly as successful as they have been in several other applications. In that sense, our work is an attempt to establish a rigorous mathematical connection between two prominent classes of generative models via the notion of distinguishability. Diffusion models have only recently begun to gain attention within the NLP community (e.g., Austin et al, NeurIPS 2021) and certainly present an exciting avenue. Interestingly, the analysis presented here could help unravel the connections between diffusion models and maximum likelihood based models for specific class of problems (such as the log-linear models) via their similarities/interpretations as energy-based models (Song and Kingma, arXiv: 2101.03288). \n- \"* L144-151 does not take into account variants of GANs [...]*\". Thank you for this observation.\n The discussion in L144-151 concerns the need for a differentiable discriminator, which is still the case for WGAN (and all its variants). It is true that Wasserstein distance-based objectives address that issue of vanilla GANs, but the issue of differentiability remains (and indeed, has been approached with various post-hoc methods like Gumbel-softmax, etc). We would also like to point out that in practice, WGAN (but not WGAN-GP) resorts to, and recommends, weight clipping to make them lie within a compact space. 
Then, our analysis carries over via the family $F'$ of distinguishers to include appropriately scaled critics $f'$ as well as $-f'$. Our framework is also flexible enough to accommodate variants such as WGAN-GP. Recall that the WGAN-GP formulation can be expressed in our notation as: $ E_q [f(x)] - E_p [f(x)] + \\lambda E_r [(\\|\\nabla_x f(x)\\|_2 - 1)^2] $ . This can be viewed as Lagrangian relaxation of the following hard objective for $\\epsilon > 0$ as: $E_q [f(x)] - E_p [f(x)]$ subject to $ E_r [(\\|\\nabla_x f(x)\\|_2 - 1)^2] < \\epsilon $. Distinguishability can then be readily be expressed as:\n \n$\\max_{f : E_r [(\\| \\nabla_x f(x) \\|_2 - 1)^2] < \\epsilon} E_q [f(x)] - E_p [f(x)] $, \n\nwhich is clearly of the form this work deals with, i.e., $\\max_{f \\in F} E_q [f(x)] - E_p [f(x)]$. \n",
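For concreteness, the penalized critic objective written above fits in a few lines of PyTorch. A minimal sketch: the helper name, the critic architecture, and the data below are placeholders of mine, and since the family $F'$ is closed under negation, the sign convention ($E_q - E_p$ vs. $E_p - E_q$) is immaterial.

```python
import torch
import torch.nn as nn

def critic_loss_wgan_gp(f, real, fake, lam=10.0):
    """Minimized by the critic: pushes f apart on real vs. fake samples, with a
    soft penalty keeping ||grad_x f(x)||_2 near 1 on the interpolated distribution r."""
    eps = torch.rand(real.size(0), 1, device=real.device)
    x = (eps * real + (1.0 - eps) * fake.detach()).requires_grad_(True)
    grad, = torch.autograd.grad(f(x).sum(), x, create_graph=True)
    penalty = ((grad.norm(2, dim=1) - 1.0) ** 2).mean()
    return f(fake).mean() - f(real).mean() + lam * penalty

# toy usage on flat 2-d samples with a hypothetical one-hidden-layer critic
f = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
loss = critic_loss_wgan_gp(f, torch.randn(16, 2), torch.randn(16, 2) + 1.0)
loss.backward()
```

For image-shaped inputs one would flatten before taking the gradient norm; the structure of the objective is unchanged.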
" \n[1] B. Sriperumbudur, K. Fukumizu, A. Gretton, B. Scholkopf, and G. Lanckriet. On the empirical estimation of integral probablity measures (Electronic Journal of Statistics, 2012). \n\n[2] Martin Arjovsky, Soumith Chintala, and L. Bottou. Wasserstein Generative Adversarial Networks (ICML 2017). \n\n[3] Y. Mroueh and T. Sercu. Fisher GAN (NeurIPS 2017). \n\n[4] C.-L. Li, W.-C. Chang,Y. Cheng, Y. Yang, and B. Poczos. MMD GAN: Towards Deeper Understanding of Moment Matching Network (NeurIPS 2017). \n\n[5] Y. Mroueh, C.-L. Li, T. Sercu, A. Raj, and Yu Cheng. Sobolev GAN (ICLR 2018). \n\n[6] G. Biau, M. Sangnier, and U. Tanielian. Some Theoretical Insights into Wasserstein GANs (JMLR, 2021). \n\n[7] S. Nowozin, B. Cseke, and R. Tomioka. f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization (NIPS 2016). \n\n[8] J. Song and S. Ermon. Bridging the Gap Between f-GANs and Wasserstein GANs (ICML 2020). \n\n[9] M. Binkowski, D. Sutherland, M. Arbel, and A. Gretton Demistyfying MMD GANs (ICLR 2018). \n\n[10] A. van den Oord, N. Kalchbrenner, and K. Kavukcuoglu. Pixel Recurrent Neural Networks (ICML 2016). \n\n[11] B. Dai, Z. Liu, H. Dai, N. He, A. Gretton, L. Song, and D. Schuurmans. Exponential family estimation via adversarial dynamics embedding (NeurIPS 2019).\n\n[12] S. Zhai, W. Talbott, C. Guestrin, and J. Susskind. Adversarial fisher vectors for unsupervised representation learning (NeurIPS 2019).\n\n[13] C. Finn, P. Christiano, P. Abbeel, and S. Levine. A connection between generative adversarial networks, inverse reinforcement learning, and energy-based models (arXiv:1611.03852, 2016).\n\n[14] T. Che, R. Zhang, J. Sohl-Dickstein, H. Larochelle, L. Paull, Y. Cao, and Y. Bengio. Your GAN is Secretly an Energy-based Model and\nYou Should Use Discriminator Driven Latent Sampling (NeurIPS 2020). \n\n[15] Y. Song and D. Kingma. How to Train Your Energy-Based Models (arXiv: 2101.03288, 2021).",
" \nYes, our analysis extends to the sentence-level GANs as well. To make things concrete, we begin by noting that for any sentence $s = w_{1, s} w_{2, s} \\ldots, w_{k_s, s}$, we can decompose the probability $q(s)$ (and likewise $p(s)$) in terms of conditional probabilities given the prefixes, namely,\n$$q(s) = \\prod_{i=1}^{k_s} q(w_{i, s}|w_{1,s}, \\ldots, w_{i-1, s})~.$$\n\nThen, distinguishability can be written as \n\\begin{eqnarray*} &&\\max_{f \\in F} E_q [f(s)] - E_p [f(s)]\\\\\n& = & \\max_{f \\in F} \\sum_s (q(s) - p(s)) f(s) \\\\\n& = & \\max_{f \\in F} \\sum_s \\underbrace{\\left[\\prod_{i=1}^{k_s} q(w_{i, s}|w_{1,s}, \\ldots, w_{i-1, s}) - \\prod_{i=1}^{k_s} p(w_{i, s}|w_{1,s}, \\ldots, w_{i-1, s}) \\right]}_{r(s)} f(s) \n\\end{eqnarray*}\n\nThus, one can define a sentence level GAN that takes the entire sentence $s$ and implements a discriminator as well as a sequential component (such as LSTM/RNN) each for modeling (conditional) distributions $p$ and $q$. It can then produce a single score based on $s$ (more specifically, using $r(s) = q(s)-p(s)$ and $f(s)$). Thus, unlike, next-token level GANs, $f(s)$ is computed only once (i.e., after processing the entire sentence). As you mentioned, this kind of GAN suffers from critical issues such as difficulties with sampling. \n\n\nIn contrast, we can still follow the general efficient reduction from section 6 in the paper for MLE that exploits step-wise weak distinguishers (recall that we do not need these distinguishers to be optimal). However, without step-wise distinguishers are not available, direct MLE estimation at the sentence level would itself be computationally demanding (due to combinatorial issues), and susceptible to high variance. \n\nIn summary, the benefits of our approach stem from an efficient reduction that leverages weak distinguishers for MLE based training. This, in spirit, is akin to boosting, where weak models (such as one-level decision trees, or stumps) can be sequentially combined to achieve a strong ensemble classifier, efficiently, compared to fitting a single optimal decision tree directly (which is known to be hard). Our analyses establish that a wide class of adversarial objectives, including those prominent in NLP, can similarly be trained way more efficiently using MLE models instead. ",
" We thank the reviewer for the very thorough feedback and suggestions. We provide answers to your questions below:\n\n1. \"*What’s the difference between 'distinguisher' and 'discriminator'?*\". Thanks for the suggestion. Yes, what we call distinguisher for adversarial methods is commonly known as discriminator (e.g., as in original GAN) and critic (especially, in the context of more nuanced formulations based on some Integral Probability Metric, e.g., Sobolev GAN). We will unify these terms as you suggested. \n2. \"*Is this [that GAN training seeks to minimize distinguishability] an observation, assumption or statement?*\". Thanks for the opportunity to emphasize this important connection between distinguishability and GANs. Distinguishability is indeed the objective that the adversarial approaches, such as GANs, seek to optimize. For example, as established in reference [1] below, GANs that are trained based on Kantorovich metric, Fortet-Mourier metric, dual-bounded Lipschitz distance (or the Dudley metric), total variation distance, and kernel distance are all just specific formulations of the distinguishability criterion. Thus, in particular, our results hold for GNN formulations such as Wasserstein GANs [2], MMD GANs [3], Fisher GANs [4], and Sobolev GANs [5], for an appropriately chosen family $F$ of distinguishers. For example, we obtain Wasserstein GANs, in its dual form, as a special case when $F$ is restricted to 1-Lipschitz functions in which case it can also be viewed as a special case of the so-called f-GANs [6, 7, 8]). Likewise, we obtain MMD-GANs when $F$ pertains to functions (kernels) defined over a ball in some Reproducing Kernel Hilbert Space [9]. We will make this clear based on your feedback.\n3. \"*Is 'distinguishability”' symmetrical (like JS) or asymmetrical (like KL)?*\" Thanks for another great question that helps us elucidate the generality of our approach. Our distinguishers are asymmetric in the sense that in general 'q is distiguishable from p' is different from 'p is distinguishable from q'. This follows from our definition of distingishability of q from p as $d_q = \\max_{f \\in F} (E_q [f(x)] - E_p [f(x)])~.$ However, under our definition of distinguishability, we recover the notion of inverse probability metric (IPM) by letting $-f \\in F$ for all $f \\in F$. Clearly, in this case the notion of distinguishability becomes symmetric as it reduces to $\\max_{f \\in F} |E_q [f(x)] - E_p [f(x)]| ~.$ In fact, this explains in part, how the proposed framework lets us handle an extremely wide class of discrepancies, symmetric as well as asymmetric. Choosing an appropriate family $F$ of distinguishers immediately leads to the corresponding adversarial objective $\\min_q d(q)$, where $d(q)$ is symmetric or asymmetric depending on $F$. \n4. \"*Does your argument work **only** on next-token level GAN?*\" No. Due to space constraints, we post this as a separate comment below. \n5. \"*fundamental difference between GAN in text and vision?*\" Yes, indeed, we can compute conditional likelihoods more efficiently for text data compared to images. The sequential nature of test data allows us to compute the normalization terms (and thus the partition function) on a token-by-token basis, thereby enabling us to distinguish the conditional next-token predictions instead of having to distinguish full sentences. 
Similarly, for low dimensional images (such as 8x8, or 4x4), conditional predictions are tractable and thus autoregressive modeling would allow for efficient training and sampling. In contrast, MLE based autoregressive models such as PixelRNN [10] are typically slow for high dimensional real images. In such settings estimating the partition function is challenging, so alternative methods such as noise contrastive estimation, score matching, Langevin dynamics, and MCMC sampling in latent space that exploit connections between GANs and energy based models have been preferred [11, 12, 13, 14, 15]. \n\nWe thank the review, again, for their thoughtful feedback. We hope our response has sufficiently addressed all their questions and concerns, and if so, ask that they consider revising their score.\n\n",
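As a toy illustration of the definition in point 3 above, distinguishability over a finite family $F$ is just a maximum of mean differences. In the sketch below, the samples and the family of bounded soft-threshold functions are placeholders of mine, chosen so that outputs lie in $[0, 1]$.

```python
import numpy as np

rng = np.random.default_rng(5)
xq = rng.normal(0.3, 1.0, size=2000)     # samples from the model q (assumption)
xp = rng.normal(0.0, 1.0, size=2000)     # samples from the data distribution p (assumption)

# F: random soft-threshold functions with outputs in [0, 1]; closing F under
# negation (f and -f) would give the symmetric IPM-style variant discussed above.
thresholds = rng.uniform(-2, 2, size=64)
F = [lambda x, t=t: 1.0 / (1.0 + np.exp(-(x - t))) for t in thresholds]

d_q = max(f(xq).mean() - f(xp).mean() for f in F)
print("empirical distinguishability of q from p:", d_q)
```

Richer families (Lipschitz functions, RKHS balls) replace the explicit maximum with a trained critic, but the quantity being estimated is the same.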
" We are grateful for the thoughtful feedback and the appreciation of the contributions of the paper. In particular, we are glad the reviewer appreciated the contribution of the polynomial time reduction. Answers below:\n\n- \"*The main weakness is the lack of empirical validation; in particular, it would've been nice to see if GANs actually do end up at the MLE in simple cases, and whether the rate at which they do is slower, as argued in the paper.\"*. Please see the empirical validation we've added in the general comment section above. This is a first step towards the validation you propose, which requires addressing some additional issue, e.g., how to verify whether models with potentially different parametric representations (e.g., architectures) correspond to the same MLE solution. In other words, both could achieve a similar LL on a finite sample of data but not correspond to the same model. \n\n- \"*[140-141]: Is it the case that likelihood-based models generated text that could be easily distinguished from humans? I think fake-news detection, etc. has been an issue for at least 5 years?*\" Thank you for this interesting question. This was the case not too long ago, before the advent of Transformers and other very large NLL-trained language models. The discrepancy between quality and diversity of pre-Transformer NLG models and the ability of humans to pick up on each of these is discussed in detail in Hashimoto et al., 'Unifying Human and Statistical Evaluation for Natural Language Generation'.\n\n- \"*What is novel about the next-token distinguisher? Is this not a direct application of the standard GAN objective for classification to sequence modeling?*\" In short, yes, the principle behind the distinguishers we use here is very similar to (albeit more general than) the discriminator in usual GAN training. The novelty is not in using this principle to train a language model (in fact, we expressly advocate against doing so, see general comments above), but its use to formally prove the polynomial time reduction between training this adversary and fitting a MLE model. ",
" We thank the review for the feedback. Answers below:\n\n- \"*It’s hard for me to follow the detailed derivation because there is always a gap between consecutive formulas*\". Thank you for the suggestions for improving the presentation. We will edit the layout to prevent this from happening. \n- \"*paper focusing on text GANs [...] but the sequential formulation of q(x) where is not applied to the following derivation in Section 4 and 5*\". Indeed, the results of 4 and 5 apply to general log-linear models, which are common (but not exclusive) to NLP and text data. After deriving these general results in Section 5, we tailor them to the sequential setting in Section 6. \n- \"*I feel that many parts of this paper are used to analyze general GANs [...] which are divergent from the title*\". As stated above, many of our results are more general, but their *implications* for NLP is the key takeaway message of this work. More concretely, showing such an equivalence between GAN and MLE training for this general class of models has substantially different implications for text data (where MLE is typically easy and efficient) than for image data (where MLE has traditionally, and until very recently, been much more challenging). The title and discussion of this paper focus on the former.\n- \"*the non-differentiable problem of text GANs is always solved by policy gradient or Gumbel-Softmax approximation. I wonder whether the following theoretical analysis consider this step*\". We agree that these tricks are crucial component of making GANs for text data work (and indeed we discuss them in Page 4). However, what matters for our analyses is that the models are trained to minimize a certain objective (e.g., distinguishability), but the \\textit{mechanics} of how that is achieved is less important and does not play a role in the results. \n- \"*Since this paper proposes a specific algorithm [...] should conduct an experiment to show the effectiveness of the proposed algorithm*\".\nPlease see the general comment on this above. ",
" - \"*Title misleading / research question too ambitious to settle*\". We take this point, and recognize that some of the phrasing of the paper might misrepresent its core message. In response to your comments, we have decided to rephrase/soften some of these statements, including the title, to make it clear that GANs are *often* (i.e., in the circumstances described above) overkill, but there still might exist specific situations in which they might work just fine.\n- \"*Without experimental support, it is unclear if the connection in the paper is actionable: does it buy us anything?*\". The goal of the paper is to provide further theoretical understanding on a well-known and empirically observed phenomenon: that MLE training for NLP models is far more successful than GAN training. Thus, its goal is *explanatory* rather *prescriptive*. That being said, we still think it is actionable, in the sense that it might prevent further futile efforts to `make GANs work for NLP'. As discussed in the general comment above, our reductions are intended to serve a purpose similar to the polytime reductions routinely conjured in complexity theory: e.g., one can establish that a problem B (e.g., independent set) is NP-complete if it admits a polytime reduction to some already known NP-complete problem A (e.g., vertex cover). These reductions are not intended to necessarily design an (approximate) algorithm for problem B that invokes procedure for A, but rather to redirect the efforts on B toward a more fruitful setting by underscoring the hardness of computing B given what we know about its relation to A. \n- \"*So is the recommendation that one ought to use “distinguishers” to optimize NLL using the developed procedure?*\" As stated in the general comment above, Algorithms 1/2 are not intended as feasible optimization methods, but rather as algorithmic reductions. We emphatically do not recommend their usage in practice. In fact, the take-home message is the opposite: when NLL is directly efficiently optimizable (as is the case for NLP and other sequential data), using distinguisher/GAN-based methods instead is futile.\n- \"*It would be nice to have some clearer characterization of what is meant by “similar power” in F and Q*\". Thank you for pointing out this ambiguity. Here ``power'' refers to representational capacity. We will clarify this in the paper.\n- Regarding the lessons from FlowGAN, etc: Thank you for the oppportunity to elaborate on this. Regarding the lessons from FlowGAN and other references you cited, we completely agree that likelihood and 'quality' or 'realistic-ness' of sample do not always go hand in hand. In fact, one prominent example of this phenomenon is the standard VAE objective [1] that tries to maximize a lower bound on the log-likelihood but results in poor sample quality compared to GANs. Turns out that, in such settings, distributional shift due to minimizing (regularized) distortion can be at odds with the perceptual quality of the samples [1, 2]. On the other hand, in effect, GANs also take into account the KL divergence in the other direction, leading to comparatively much better quality of samples. Indeed, understanding the theoretical underpinnings of generative models with respect to their sample quality is an intriguing question that requires further analyses. 
In the context of present work, **our message here is not to claim at all that maximizing likelihood is universally better than adversarial methods or vice-versa, but to emphasize that for many problems, in domains like NLP, the two objectives often turn out to be equivalent *mathematically* via the notion of distinguishability and maximizing MLE could provide a more efficient (and stable way) of optimizing the common objective**. Surely, as you rightly mentioned, a more comprehensive investigation encompassing the interplay of several intriguing factors such as optimizing methods, network architecture, step sizes, momentum based acceleration, size of the models, etc. is very much needed to further our understanding of the relative pros and cons of different generative models in such scenarios. Our work should be seen as a stepping stone toward that endeavor. Based on your feedback, we will add a discussion on this along with the references you mentioned. ",
" This paper analyzes why GANs underperform MLE in natural language generation tasks. The authors argue that minimizing KL-divergence like MLE is a more efficient approach compared with minimizing the same distinguishability criteria in adversarial models. The authors also propose that minimizing distinguishability can be regarded as boosting likelihood for certain families of models including n-gram models and neural networks with a softmax output layer. Strengths:\n\n1. It’s essential and meaningful to give a theoretical analysis on why text GANs fall short. The authors try to connect the minimization of distinguishability and the boosting of the MLE training objective, which is an interesting perspective.\n\nWeaknesses:\n\n1. The organization of this paper should be largely improved. It’s hard for me to follow the detailed derivation because there is always a gap between consecutive formulas. Also, as a paper focusing on text GANs, most parts are about general GANs which are not directly related to texts. For example, the authors mention GANs based on the n-gram model in Section 3. But the sequential formulation of q(x) where $x=w_1 w_2 \\cdots w_t$ is not applied to the following derivation in Section 4 and 5. I feel that many parts of this paper are used to analyze general GANs (such image GANs), which are divergent from the title.\n2. As mentioned in Section 2, the non-differentiable problem of text GANs is always solved by policy gradient or Gumbel-Softmax approximation. I wonder whether the following theoretical analysis consider this step because it plays an important role in the performance of text GANs from the existing works.\n3. Since this paper proposes a specific algorithm (i.e., Algorithm 1 in Section 6), the authors should conduct an experiment (at least on synthetic data) to show the effectiveness of the proposed algorithm. I have included my questions in the weaknesses part. The authors have adequately addressed the limitations and potential negative societal impact of their work.",
" The submission broadly aims at providing arguments for why GANs have not seen much success in sequential generation, as in language, but maximum likelihood models have. \n\nThe submission is entirely theoretical, demonstrating that for a reasonable notion of “distinguishability” of model samples from true samples, one can derive a method to minimize the empirical negative log-likelihood using the output of a classifier that can discriminate (with some effectiveness at least) the model samples from true samples. This derivation connects “distinguishability” (for example, predictions from an adversarial discriminator) with log-likelihood maximization. For sequential models, one can adapt the same procedure to conditional distributions over sequence items, yielding an algorithm reminiscent of boosting, where (possibly-) weak “distinguishers” can be used at each stage to further minimize negative log-likelihood. A run-time analysis of this algorithm is provided. I think some of the phrasing might be a bit misleading to some readers. The abstract suggests that the difference between maximizing likelihood for an explicit model and training an implicit model such as a GAN is “largely artificial”. Having read the paper, I do not find the equivalence this claim seemed to promise: it turns out that the main argument tying the two together is that one can use a differentiating-signal between two distributions to improve log-likelihood for an explicit model. L49-51 claims that GAN training is a “roundabout way of maximizing likelihood on observed data”, but in my reading, the submission does not really substantiate this — it is only shown that a current log-likelihood may be improved if one uses an effective adversary’s predictions to push down probability density at points distinguishable from the true samples by the adversary. In my view, this does not really imply that GAN training is maximum likelihood in disguise in any meaningful sense as seems suggestive in the submission, rather it only suggests that maximum likelihood training may be conducted by using (even weak) adversaries.\n\nThe lack of empirical support is somewhat disappointing, after deriving the algorithm. Algorithm 1 seems implementable, as long as one can design a $g$ that can handle multi-dimensional inputs (or $N$ of them), since Lemma 3 seems to suggest that the normalization is doable by only summing over $|\\mathcal{V}|$ terms. Is there something I’m missing, in terms of practical application, which would explain the lack of empirical validation? Without experimental support, it is unclear if the connection in the paper is actionable: does it buy us anything? The conclusion suggests that the takeaway is that “in applications where it is natural to fit models by minimizing log-loss, it is indeed likely to be a more direct and efficient means of fitting a model”. So is the recommendation that one ought to use “distinguishers” to optimize NLL using the developed procedure? Or that we should directly attempt to optimize NLLs? 
What would be an example of an unnatural circumstance for fitting models with MLE (recall that for images, flow-based explicit models have been approaching GANs promisingly closely in terms of sample quality, for example [1]; the auto-regressive model PARTI from Google is highly convincing)?\n\nIt would be nice to have some clearer characterization of what is meant by “similar power” in F and Q.\n\nIn general, the question the submission seems to aim to tackle at a high level (based on the title at the very least) might be too ambitious to settle. With all the moving parts in designing and training a model (architectural choices, optimization issues, issues of mode-collapse in generation, and the often unanticipated ways they mesh together, or don’t), as well as the meta-aspects guiding the efforts invested by the community in certain approaches over others depending on publicized initial results, pinpointing reasons for why one class of methods might have seen less success than others is difficult unless a clear technical reason is identified. In my opinion, this submission does not really identify any such issue that clearly informs us why GAN training might be “overkill for NLP”. Once again, recall how the dominant narrative of “GANs are best for image generation, and nothing else comes close” has changed significantly in recent times to “flow, diffusion, and autoregressive models are starting to look really really good!”. \n\nTypo in L349: “to” —> “two”\n\n\nSome related works that are relevant:\n\n — L144-151 does not take into account variants of GANs, such as the WGAN-GP [2], that do not suffer from gradients vanishing on discrete spaces. More generally, how does the discussion in the paper relate to cases like WGANs where the critic outputs are not in [0, 1]?\n\n — FlowGAN [3] is one relevant work. They report that, in their framework, optimizing for likelihood in a hybrid model results in poor sample quality but good likelihoods, and conversely, using adversarial training results in the opposite trend. Related to this observation are the broader observations in [4, 5] about how improved likelihoods need not correspond to improved sample quality in practice for high-dimensional data. \n\nOverall, \n\n — the paper seems to have decent elements of originality in it,\n\n — and is very clearly written,\n\n — but does not seem to deliver sufficiently in order to be of much practical significance at the current stage. \n\n\n[1] Glow, Kingma and Dhariwal, 2018\n\n[2] WGAN-GP, Gulrajani et al., 2017\n\n[3] FlowGAN, Grover et al., 2018\n\n[4] Locally-connected transformations for deep GMMs, van den Oord and Dambre, 2015\n\n[5] A note on the evaluation of generative models, Theis et al., 2016. 1. Why is Algorithm 1 empirically non-validatable at this stage?\n\n2. How does the recommendation in the paper to perform NLL optimization relate to observations in [4, 5]?\n\n3. Can the method be modified to accommodate unbounded critics? The authors have discussed potential negative societal impacts.",
" This paper argues that training a GAN, which minimizes distinguishability between a learned and actual distribution, and maximizing likelihood, are often equivalent for NLP tasks. They do this by (i) giving a case where minimizing distinguishability is not the same as maximizing likelihood due to limitations in the set of possible generates, then (ii) showing how training a GAN is equivalent to maximizing likelihood for n-gram models, then (iii) showing how training distinguishing reduces log loss, and finally (iv) showing a polynomial-time reductions from a distinguishing distribution to an MLE for sequence models. Strengths\n* The paper gives good theoretical motivation for an empirical phenomenon that, to the best of my knowledge, wasn't well understood. In particular, previous explanations argued that \n* The exposition in the paper is great; the authors help motivate with simple cases where distinguishability + MLE are different, then show how they're the same in an intuitive and realistic case, then provide general theory\n* The explicit polynomial time reduction seems like a strong theoretical contribution. \n\nWeaknesses\n* The main weakness is the lack of empirical validation; in particular, it would've been nice to see if GANs actually do end up at the MLE in simple cases, and whether the rate at which they do is slower, as argued in the paper. * [140-141]: Is it the case that likelihood-based models generated text that could be easily distinguished from humans? I think fake-news detection, etc. has been an issue for at least 5 years? \n* What is novel about the next-token distinguisher? Is this not a direct application of the standard GAN objective for classification to sequence modeling? N/A. ",
" In this paper, the authors show maximizing likelihood is effectively minimizing distinguishability for log-linear Q. Authors suggest a polynomial-time reduction from likelihood maximization to next-token distinguishability. If the distinguisher has an advantage over a threshold, then a generator network with lower log-loss can be constructed accordingly. Since in sequential domains (eg, text) the likelihood is easier to compute, one might prefer to use MLE in the first place. # Strengths\n\nIt’s well known that GANs don’t work so well in the domain of text compared to MLE, yet a thorough investigation is still lacking. The paper offers new and detailed insights into this phenomenon. Previous works explain this as the consequences of having discrete tokens, being not differentiable, sparse reward, optimization challenges, etc. This paper chose a different angle by showing how GAN and MLE are actually closely related.\n\nThe usage of “distinguishability” to analyze GAN is new.\n\nThe general reduction is useful and solid. It’s good to see asymptotic analysis.\n\nThe argument is backed with mathematics derivations. An essential algorithm is given. Assumptions are clearly given before claims.\n\nThe final conclusion is straightforward and clear.\n\nOverall this paper is working on a topic with significance. People do care why GANs can/cannot work on text. This paper is making contributions and casting new insights into this problem.\n\n# Weaknesses\n\nThe introduction of “distinguishability” is abrupt to me. See Q2.\n\nThe relation of distinguishability, likelihood(KL divergence) and JS divergence should be discussed further. See Q2 Q3.\n\nIt seems this paper focuses on next-token level GANs and not sentence level GAN. See Q4.\n\n“GANs are overkill for NLP” seems too strong a claim, unless you can be very sure that you cover all types of GAN in NLP.\n\nIn general, I'm positive for this paper, but I still have some concerns that need to be addressed.\n\n Q1: \n\nWhat’s the difference between “distinguisher” and “discriminator”? If they’re the same, maybe authors should unify the terms.\n\nQ2: \n\nIt’s known that MLE minimizes KL divergence and GAN minimizes JS divergence, but this paper assumes GAN minimizes distinguishability. On line 70, “Motivated by this observation, numerous adversarial approaches to approximating q have been attempted to minimize distinguishability d(q)”. Is it an observation, assumption or statement? Can you prove that GAN is indeed minimizing distinguishability (e.g., with what kind of loss for generator/discriminator, using what kind objective, and give the optimal value for D)?\n\nQ3: \n\nWhen you say “distinguishable” or “distinguishability”, is it symmetrical (like JS) or asymmetrical (like KL)? Do “q is distinguishable from p” and “p is distinguishable from q” mean the same thing? \n\nIf it’s asymmetrical (like KL), then it’s not surprising that maximizing likelihood and minimizing distinguishability yield the same convergence in most cases. But I don’t think GANs are using an asymmetrical objective (because discriminator treats real and fake samples equally).\n\nIf it’s symmetrical, then the example about ages you give in line 72-82 does not seem right to me. “A smaller m < 100 would yield less distinguishable samples”. Any q with m < 100 assigns zero probability to ages over 100, hence real samples (with 100+ ages) are very distinguishable from samples in q. 
\n\nActually in the ages example, MLE (KL) and GAN (JS) both yield m=119 as the optimal point.\n\nQ4: \n\nDoes your argument work on next-token-level GANs only? By next-token level, I mean the generator takes a sentence prefix as input and outputs the distribution for the next token, and the discriminator takes in prefix + next_token and predicts a score for only the next_token (assuming the prefix is always real). \n\nAnother set of GANs is sentence-level, where the discriminator takes in a whole sentence and predicts only one score. This kind of GAN suffers from sparse reward (with RL) or sampling difficulties. \n\nAt the next-token level, it’s no news that GAN is almost identical to MLE. So the conclusion is not surprising. However, the mathematics and proofs in the paper are valuable.\n\nQ5: \n\nI still wonder what makes the fundamental difference between GANs in text and vision. Is it because text models are sequential (so likelihood is easy to compute)? Let’s say you have an autoregressive image generator which generates small pictures (say 8x8) pixel after pixel, and each pixel is an int in [0,255]. Will your conclusion apply to this kind of sequential model? Will MLE work better than GAN?\n\nCorrect me if I am wrong or misunderstand your point. I can completely understand this is a theoretical paper, but any experiment (even on toy/synthetic datasets) would further confirm the arguments made by this paper and examine to what extent all the assumptions stand."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
3,
3
] | [
"sOzTGMS-38c",
"vIAu3eGC2A6",
"MHUL-cGysRU",
"hZTy743skt",
"MHUL-cGysRU",
"SlWLBI9qdbG",
"GsjarHqw5NU",
"nips_2022_F02H1zNl213",
"53f5njtiCFn",
"A6XcqAAFTc",
"GAMh_ylN_b",
"GsjarHqw5NU",
"9wCI82UNEFb",
"7qjsqB2K95k",
"c3oaoTwpKEh",
"vIAu3eGC2A6",
"nips_2022_F02H1zNl213",
"nips_2022_F02H1zNl213",
"nips_2022_F02H1zNl213",
"nips_2022_F02H1zNl213"
] |
nips_2022_xONqm0NUJc | Relational Proxies: Emergent Relationships as Fine-Grained Discriminators | Fine-grained categories that largely share the same set of parts cannot be discriminated based on part information alone, as they mostly differ in the way the local parts relate to the overall global structure of the object. We propose Relational Proxies, a novel approach that leverages the relational information between the global and local views of an object for encoding its semantic label. Starting with a rigorous formalization of the notion of distinguishability between fine-grained categories, we prove the necessary and sufficient conditions that a model must satisfy in order to learn the underlying decision boundaries in the fine-grained setting. We design Relational Proxies based on our theoretical findings and evaluate it on seven challenging fine-grained benchmark datasets and achieve state-of-the-art results on all of them, surpassing the performance of all existing works with a margin exceeding 4% in some cases. We also experimentally validate our theory on fine-grained distinguishability and obtain consistent results across multiple benchmarks. Implementation is available at https://github.com/abhrac/relational-proxies.
| Accept | This paper proposes a novel approach for fine-grained image recognition, which utilizes the relational information between the global and local views of an object. It is a reasonable and important finding that not only representing local parts but also relating them is critical to achieving superior performance. The authors validate their proposal’s effectiveness with both theoretical explanations and positive empirical results on various benchmarks. The authors also did a great job in the rebuttal. They provide more clarifications, extra experiments on large datasets, and newly included error bars. Most of the reviewers are satisfied with the rebuttals and discussions, and all reviewers have a consistent recommendation. We think this paper can bring new insights to the visual recognition community and help people understand how the key features and their relations work. Please also include the newly added experiments and clarifications in the new revision.
| val | [
"XGowPicg2qA",
"dOOs3Y1PdHA",
"GLX8eBgMP7",
"fawBt7P2c1U",
"k-eFNMoL5-7Z",
"58AkdzIhFt",
"9B8iXtGlqyb",
"XA6qg4Vuqzt",
"Fg0b_2gsZOf",
"Eu_v14hcmz",
"NPzyzl5v71w",
"0JHyaaknHpQ",
"PpSx7RDZdX-",
"0y1yic1l98q",
"zXWKCL1bPNp",
"l4iGnf1QNRo",
"Rvj9lGjmM88"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for taking the time to go through the rebuttal, appreciating our intuitive explanations and additional experiments, and increasing their score.\n",
" I thank the reviewers for their response, and appreciate their efforts in providing additional clarifications and revising the paper. The analogy provided in response 1 was intuitive and helped me better grasp the motivation behind this framework, the additional ablation that was performed experimentally supports this intuition. \n\nOverall, all the clarifications that were provided help with the understanding of this work, and I am happy to see the paper revised accordingly.\n\nMany of my concerns about clarity of the method as well as its limitations have been addressed, and I have, therefore, increased my score from a 6 to a 7.",
" We thank the reviewer for recognizing our efforts and increasing the score. As per their suggestion, we will incorporate intuitive explanations for our theoretical framework (some of which are currently in the appendix - Sections 6.2 and 6.3) in the final version of our main paper.",
" I appreciate the efforts the authors went through in the rebuttal. They added new experiments, error bars, clarifications, etc.\n\nIn light of the new experiments and the explanations offered to me and other reviewers I have more confidence in the correctness and motivations for their work. It seems that the relational representation is useful for performing fine grained classification (though the base encoders seem most useful). However, the additional boost may be useful in some applications and allow for classification in new settings. I hope the authors include a bit more intuition of their theoretical framework in the writing and I have updated my score.",
" Dear Reviewers,\n\nThanks for your work reviewing this paper. There are only a few days left for discussing with the authors.\n\nPlease read the authors' rebuttals and **explain why you decided to keep your score as is or why you updated it.** (not just click on the \"acknowledge\" button). It is very frustrating for authors to be completely ignored.\n\nHence, we urge you to read and reply ASAP. \n\nThanks again,\n\nAC ",
" We have now completed our experiment on the iNaturalist 2017 dataset. The results are presented below:\n\n||iNaturalist 2017\n|:-|:-:|\nTransFG [13] |71.70\n**Ours (Relational Proxy)**|**72.15**\n\nIt can be seen that our method provides an improvement of 0.45% over TransFG [13], which is the current state-of-the-art on iNaturalist. This shows that our method does in fact scale to very large datasets, supporting the generality of our theory and design choices.\nWe are currently in the process of performing additional training runs with different random seed initializations on INaturalist to obtain its error bounds. However, as results from all other datasets show, our method offers very stable results with extremely low standard deviation. In the final version of the paper, we will add the mean accuracy along with the standard deviation obtained from 5 independent runs with different initializations.",
" 1. **Necessity for Permutation Invariance:** We thank the reviewer for pointing this out. We agree that we have only rather glossed over the property of permutation invariance, and its necessity in constructing the relationship modelling function ($\\xi$, line: 108) is not entirely apparent. As the reviewer has correctly observed, an equivalent formulation is possible by replacing the AST and the view-unifier MLP ($\\rho$) with a transformer having position embeddings. However, we observed that this setting led to accuracies that were 0.2-0.3% lower on FGVC Aircraft and Stanford Cars than our current approach. The reason this happens is because there are two underlying sub-problems to solve, and tasking a transformer with doing it end-to-end leads to highly entangled intermediate representations that make learning the relationships challenging. We provide more details on the sub-problems below. We instead train our AST to solve only one of the sub-problems, and our view-unification MLP the other, thus achieving the same functionality, but in a factorized manner that aids convergence.\\\n**Intuitive Analogy:** The problem of local-to-global relation computation can be viewed as a bit-string-to-integer matching problem.\nConsider 3 bits, say $b_1, b_2$ and $b_3$, corresponding to 3 local views. Let the global view be represented by an integer that can be encoded with 3 bits, say with a value of $g = 6$, for this example. The problem then is to find the association of the integer 6 with its corresponding binary representation 110. This association represents the cross-view relationship.\nDrawing a parallel with our algorithm, the first step towards solving this problem is to enumerate all the possible ways in which the local views can combine (to produce any global view, not specifically $g$), given by $S = [000, 001, 010, ….110, 111]$. The bit values encode the presence or absence of a particular view in the cross-view relationship. So, no matter what order we observe $b_1, b_2$ and $b_3$ in, we must output the same set $S$, as it is required to be an exhaustive enumeration. This is exactly what the AST achieves.\nOnce we have S, the next step is to design a function that finds the mapping $S, g \\mapsto 110$, i.e, the correct binary encoding for the integer $g = 6$, which is accomplished by $\\rho$ in our method.\\\n**Purpose:** As illustrated through the above analogy, one can view the local-to-global relationship modelling function as an enumerative search algorithm - given a set of local views, it first *enumerates* all possible ways in which they can combine to form a meaningful global view. Given that enumeration, it then *finds* the target solution by learning to identify the correct combination that matches with the global-view representation. Thus, the *enumerate* operation needs to be permutation invariant, as it has to consider all possible combinations of the inputs, and the *find* operation needs to be a view-unifier by construction.\\\n**Motivation:** Behind our specific design choice was the motivation to keep the *enumerate* and *find* steps separate. This allows the model to have dedicated representation spaces for the two distinct subtasks, which in turn facilitates better convergence. 
Our AST thus produces the candidate *enumerations* of local-view aggregations, and the view-unification MLP ($\\rho$) *finds* the correct aggregation that matches the global view.\nIn the final version of our paper, we will incorporate the above intuition, purpose and motivation alongside the definition of and requirement for permutation invariance.\n\n2. **Relation:** Intuitively, what we mean by “relation” here is the way the local parts of an object combine to form its global view (lines 18 - 19). In Section 1 (Introduction) of the main paper, we provide an intuitive example to illustrate this (lines 21 - 26). Mathematically, it is what manifests as the Information Gap (Section 3.2, Proposition 1) in a relation-agnostic representation space, quantified as $I(\\mathbf{x}; \\mathbf{r} | \\mathbf{z}) = I(\\mathbf{x}; \\mathbf{y}) - I(\\mathbf{z}; \\mathbf{y})$.\\\n**Relation-agnostic representation:** Relation-agnostic representations are ones that are obtained by independently processing all the views, without taking into account the cross-view relational information (as described above). All existing FGVC works can be categorized under this head (sharing the primary objective of identifying discriminative object parts in an isolated manner). We introduced the idea of relation-agnostic representations with the goal of formalizing this commonality across the existing literature.",
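A toy illustration of the enumerate-and-find decomposition described in the rebuttal's analogy above (the names are illustrative; this is not the actual AST/MLP implementation):

```python
from itertools import product

local_views = ["b1", "b2", "b3"]   # observation order does not matter
g = 6                              # global view, encoded as an integer

# Enumerate (role of the AST): all 2^3 candidate local-view combinations.
# The set is the same regardless of the order in which b1, b2, b3 arrive.
candidates = ["".join(bits) for bits in product("01", repeat=len(local_views))]

# Find (role of rho): the combination whose encoding matches the global view.
match = next(c for c in candidates if int(c, 2) == g)
print(match)  # -> '110', the cross-view relationship for g = 6
```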
" 1. **Complicated mathematical modelling and explanation:** We thank the reviewer for the suggestion. However, one of the core motivations behind our paper is to provide a theoretical framework for the analysis of FGVC algorithms, something that we felt was missing in existing literature. We have tried to accomplish this by making the mathematical foundations of our paper as comprehensive as we could (also observed by Reviewers 1ZBu, 9xUV, EFjT, and G6W7). At the same time, we acknowledge that this recommendation is helpful, and we will incorporate in the main text, intuitive explanations (some of which are currently in the appendix - Sections 6.2 and 6.3) for the underlying theory.\n\n2. **AST:** As correctly observed by the reviewer, the AST being a Transformer, has the attention operation as its building block. However, our task requires it to be permutation invariant, for which it can no longer depend on the position embeddings, which happens to be an important and unique design decision specific to our particular problem formulation. However, **please note that we claim novelty not so much for its design, but rather the objective with which it is trained**. Conventional ViTs for FGVC (like TransFG) are trained so as to base their classification only on the information provided by the most discriminative image patches. In our case, the purpose that AST serves is to exhaustively enumerate and summarize all the possible ways the set of local views can combine to form the global view. In other words, it learns to enumerate all possible local-to-global relationships that the set of local views can generate. This is achieved by the virtue of its permutation invariance.\nThe view-unification MLP ($\\rho$) then matches the correct relationship from this enumerated set to the global-view embedding. We ensure that the enumerations produced by the AST are semantically meaningful by matching them to the correct relational proxy of the input image. We have elaborated further on the working principle of our AST in our answer to Reviewer EFjT’s Question 1 on its necessity of being permutation invariant, which we will include in the final version of the paper.\n\n3. **Standardized experimental setup:** We follow the same experimental setup as the standard FGVC literature that we compare our work to [3,11,47,50], which involve ResNet50 as the network backbone and random horizontal flip, along with color jitter as the set of data augmentations. This gives us a fair starting point to quantify the gains achieved specifically by our method.\nAlso, as was suggested by Reviewer 9xUV, we have now evaluated our model with the VGG-16 backbone on the FGVC Aircraft and CUB datasets. We present our results below with error bars, and compare them to existing SotA methods that also provide their accuracies on VGG-16.\n\n||FGVC Aircraft|CUB\n|-|:-:|:-:|\nMaxEnt [5]|78.08|77.02\nMMAL [6]|87.00|83.75\nOurs|**91.20** ± 0.03|**88.13** ± 0.01\n\nAs the numbers show, our method has no backbone specific dependency, providing stable and consistent improvements over SotA across different underlying architectures.\\\n**Standard deviations / Error bars:** We thank the reviewer for this suggestion. We will add the error bars (over 5 independent runs with different random seed initializations) for Table 1 in the final version of the paper. 
For completeness, we also present them below:\n\n|Dataset | Accuracy (mean ± std) |\n|-: | :-: |\n|FGVC Aircraft| 95.25 ± 0.02 |\n|Stanford Cars| 96.30 ± 0.04 |\n|CUB| 92.00 ± 0.01 |\n|NA Birds| 91.20 ± 0.02 |\n|Cotton| 69.81 ± 0.04 |\n|Soy| 51.20 ± 0.02 |\n\nIt can be seen that our method does in fact give consistent and stable performance gains across multiple initializations, with a highest error bound of only ± 0.04% across all 6 datasets.\\\n**Improvement over ablation baseline:** In our Table 2 of ablation studies, the base encoder / relation-agnostic encoder (row 1) is trained with all the state-of-the-art design choices in terms of network backbone, data augmentations, hyperparameters, representation learning mechanism, etc. For that reason, **it can be seen that the relation-agnostic encoder itself performs on par with the SotA methods, often exceeding some of the slightly older approaches**. With this very motivation, we designed the theoretical framework of Relation Agnosticity, along with its corresponding experimental counterpart, which serves both as a unified representative of the state of the art and as a baseline for our method. Thus, our relation-agnostic encoder can also serve as a self-contained FGVC method in and of itself. However, we aim to show that even with an encoder that operates at its full capacity when processing views in isolation, it is possible to provide an extra boost to its performance by modelling the cross-view relationships. The gains are more significant, i.e., exceeding 4%, for the datasets on which the SotA encoders do not perform as well.\n",
" 3. **Decomposing the problem into relation-agnostic encoder and a cross-view relational function:** Similar to the reason mentioned in the answer for Question 2, factorizing the label information in terms of cross-view relationship provides us with a clean framework for analyzing existing literature, precisely identifying the gaps therein, and thus, ways of resolving the same. Identity 1 in Appendix 6.1 proves that given a relation-agnostic encoder (any SotA encoder), the only uncertainty that remains in its representation space stems from the cross-view relational information. Thereafter in Proposition 2, we prove that given a relation-agnostic encoder, there needs to be a distinct sub-model for learning the cross-view relational information in order for a learner to qualify as being sufficient, thus requiring the said problem decomposition.\nWe will incorporate this, along with the discussion in Appendix 6.3 in the main text following Proposition 2.\n\n4. **Value of $k$ for multi-class datasets:** If a multi-class dataset contains class-pairs with differing $k$-values, in our current implementation, we consider a single $k$-value for the entire dataset by choosing the largest $k$ considering all class-pairs in that dataset. Formally, the $k$ value for the entire dataset is given by $max [ k_{ij}; \\forall (i, j) ]$, where $i$ and $j$ are class indices.\nThis is a theoretically valid choice since $k$-distinguishability puts a lower bound on the number of local views, and thus, any value higher than the true $k$ should also work. We empirically validate that this is a functionally correct design via our ablation in Figure 2. The performance can be seen to saturate beyond the maximum $k$ value for the entire dataset.\nWe agree that this is not the choice that provides the most computational efficiency, as for many class pairs, the actual value of $k$ would be lower than the global value for the entire dataset, leading to unnecessary computations over redundant views. This however, is a problem that deserves to be researched on its own. In fact, this is the exact direction we are pursuing as a follow-up to this work via generating explanations for the relational embeddings and pruning out local-views that do not feature in that explanation. So we are glad the reviewer brought this up, as this gives us the approval that this is a sensible next-step to take.\n\n5. **$k$-distinguishability for coarse-grained categories:** For the coarse-grained categories, we need not resort to the idea of $k$-distinguishability, as because of the large inter-class differences, most of them would be separable just via their global views (as has been demonstrated by most SotA classifiers on coarse-grained datasets like ImageNet).\\\n**Choice of $k$:** The choice of $k$ in such a setting would be analogous to the purely fine-grained case, the maximum $k$ value among all the leaf-level / fine-grained class-pairs. The only drawback being a lot of redundant computations to tell apart classes that come from different coarse-grained categories.\\\n**Number of relational proxies:** The number of relational proxies $c$ could be chosen to be the total number of fine-grained classes across all coarse-grained categories. The proxies could be grouped based on their corresponding higher level super-category in the dataset. The coarse grained class of an image would then be the super-category that its fine-grained proxy belongs to.",
" We thank the reviewer for suggesting this qualitative evaluation, as it has allowed us to illustrate our model in a more transparent manner, as well as better understand the situations that might limit its potential. We have now added qualitative classification results, and depictions of the predicted cross-view relationships in Figure 4 and Section 2.3 Qualitative Classification Results of the Supplementary Material.\n\n**How we obtained visual representations for the cross-view relationships:** The cross-view relationships are depicted in Figure 4 via a graph of the local views. The graph represents the manner in which the local views combine to form the overall object. The nodes of the graph represent the local views. The nodes are connected based on the mutual attention scores of their corresponding representations obtained from the final layer of the Attribute Summarization Transformer (AST). The weight of the edge is proportional to the magnitude of attention. For the purpose of simplicity, we depict fewer local views in the visualization, than are actually used for computation.\n\n**Observations:** It can be seen that images that provide a diverse set of local views, and thus, a larger space of possible cross-view relationships are the ones that get classified correctly with full certainty. However, as the number of unique local views get limited (possibly due to occlusion or an incomplete photographing of the object), it reduces the amount of relational information that can be mined. Under situations when even the individual local-views are largely shared between classes, there remains no discriminative premise (neither local/global, nor relational) for telling their instances (with limited depiction of local views) apart. It is under such circumstances that the classifier gets confused.\n\n**Example:** For instance, in the example from the CUB dataset (the top row in Figure 4), the images of the Acadian Flycatcher and Bank Swallow depict sufficient numbers of local views like the head, tail, belly and wings, which provide a large space of potential cross-view relationships that favor classification outcome. On the other hand, the images of the Black-footed Albatross and Laysan Albatross only depict the head and the neck, thus limiting the number of computable relationships that can act as discriminators. Moreover, the head and the neck look largely similar between the two categories, thereby leading to cross-category confusion causing a subsequent misclassification. However, we believe that such a situation can be addressed by learning different distributional priors over the set of local views, which we plan to take up as future work.\n\nWe will include these findings in the final version of the main manuscript.",
" 1. **Missing numbers in Table 1:** Initially, only considering the numbers reported in the original publications, a larger fraction of the accuracy scores for the SotA methods in Table 1 were missing. We tried our best to run their implementations and fill-out as many of the blanks as we could. For the ones that remain vacant, it is either because we were unable to find existing implementations of their method (MaxEnt [11]), or it was difficult to get it running even if one was available (DBTNet [50] and CAP [3]).\n2. **Error Bars:** We thank the reviewer for this suggestion. We will add the error bars (over 5 independent runs with different random seed initializations) for Table 1 in the final version of the paper. For completeness, we also present them below:\n\n|Dataset | Accuracy (mean ± std) |\n| :- | :-: |\n|FGVC Aircraft | 95.25 ± 0.02 |\n|Stanford Cars | 96.30 ± 0.04 |\n|CUB | 92.00 ± 0.01 |\n|NA Birds| 91.20 ± 0.02 |\n|Cotton| 69.81 ± 0.04 |\n|Soy| 51.20 ± 0.02 |\n\nIt can be seen that our method provides stable performance across different initializations, with a highest error bound of only ± 0.04% across all 6 datasets.\\\n**Marginal Performance Gains:** FGVC being a highly challenging problem domain, we observe that most recent SotA are only able to improve marginally over their predecessors. For instance, the differences between TransFG [13] and CAP [3] are just as low (0.2%) for NA Birds or even lower for CUB (0.1%). Among all fine-grained datasets, the Bird datasets are relatively more difficult to categorize due to challenges like shift in the distribution of bird poses in test-set images [4], occlusions, and high intra-class and low inter-class variations. Thus, the SoTA performance on Birds datasets is somewhat lower compared to other datasets, despite having been around for a long time. We understand that these marginal performance gains on the long-studied datasets may not be fully convincing of the efficacy of our method. For that reason, we also considered the Cotton and Soy Cultivar datasets [41]. They have been newly proposed in 2021 and provide a highly challenging novel setting with extremely low inter-class variations for FGVC algorithms to address. We show consistent and stable performance gains of over 4% on both these datasets (Table 1).\n\n3. **Network backbone:** We have now evaluated our model with the VGG-16 backbone on the FGVC Aircraft and CUB datasets. We present our results below with error bars, and compare them to existing SotA methods that also provide their accuracies with VGG-16 backbone.\n\n| |FGVC Aircraft|CUB\n|-|:-:|:-:|\nMaxEnt [5]|78.08|77.02\nMMAL [6]|87.00|83.75\nOurs|**91.20** ± 0.03|**88.13** ± 0.01\n\nAs the numbers show, our method remains stable across backbones, significantly outperforming SotA methods with VGG-16 backbones as well.\n\n4. **Correlation between number of local views and size of local patches:** We sincerely thank the reviewer for suggesting this experiment as we believe that doing such a correlation study is a great way of determining the right computational trade-offs for our method. We trained our model on FGVC Aircraft by varying the number of local views |L| and the size of each local patch to identify their correlations. We present our results in the table below, where rows represent the number of local views |L| and the columns represent the side-length of each local patch. 
So, if the global view has spatial dimensions size x size, each local patch would be of size/t x size/t, where t is the scaling factor that is varied across the columns. In summary, the rows represent increasing the number of local views top-down, and the columns represent increasing the patch size left-to-right.\nThe numbers are expressed as relative deviations from a reference of 95.25%, i.e., the setting corresponding to Row 1, Column 3, whose performance we reported in Table 1 of the main paper.\n\n||size/5|size/4|size/3|size/2\n|:-:|:-:|:-:|:-:|:-:|\n**7**|-0.03|-0.02|0.00|-0.14\n**12**|+0.01|+0.02|0.00|-0.11\n**15**|+0.05|+0.03|+0.01|-0.11\n**18**|+0.05|+0.03|+0.00|-0.10\n\nFrom the table above, we can see that increasing the patch size beyond a certain point has a detrimental effect, as with increasing size, the local views tend to lose their granularity and degenerate into global views. Increasing the number of crops has a stronger improvement effect on performance if the patch size is small. However, decreasing the patch size at the cost of an increased number of local views also has its downsides: the number of attention computations in the attribute summarization step increases quadratically. Thus, |L| and the local patch size need to be determined based on application-specific accuracy requirements and the amount of available computational resources.\nWe will include the results of this experiment and our observations in the final version of the paper.\n",
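A back-of-the-envelope helper for the trade-off just discussed, assuming (for illustration only) a 448 x 448 global view: the patch side length shrinks as size/t, while the attribute-summarization attention cost grows quadratically with |L|.

```python
def local_view_cost(size: int, t: int, num_views: int) -> tuple[int, int]:
    patch_side = size // t       # each local patch is (size/t) x (size/t)
    attn_pairs = num_views ** 2  # pairwise attention computations in the AST
    return patch_side, attn_pairs

for num_views, t in [(7, 3), (12, 4), (18, 5)]:  # rows/columns from the table above
    side, pairs = local_view_cost(448, t, num_views)
    print(f"|L|={num_views}, t={t}: patch {side}x{side}, {pairs} attention pairs")
```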
" 1. **Clarity on model architecture:** We thank the reviewer for this suggestion. In Section 3.4, we will provide a dedicated summary of the entire architecture that gives a simple and clear overview of the full model end-to-end, and directly correlate it with the components depicted in the model diagram (Figure 1).\n2. **Experiments on large datasets:** We follow recent state-of-the-art literature to choose datasets for evaluation [3,11,13,41,47,50]. However, we agree that it is important to evaluate the scalability of our method by considering large-scale datasets. For that purpose, we are currently training and evaluating our model on the iNaturalist 2017 dataset [A], which has a total of 5,089 categories, 675,170 train + val images, and 182,707 test images. We have chosen iNaturalist because (1) it contains large number of fine-grained categories from diverse super classes including Plant, Insect, Bird, Mammal, and so on; (2) it is highly imbalanced with very different number of images per category, which we believe can stress test our proposed model for appropriate validation. Nevertheless, we would also be happy to evaluate our model on any other datasets that the reviewer would like to suggest. We will update this rebuttal via a comment, and also the final version of the paper, with the corresponding results on iNaturalist once they are available.\n3. **Limited performance gains:** Below, we present the error bars (over 5 independent runs with different random seed initializations), which we will also add to Table 1 in the final version of the paper:\n\n|Dataset | Accuracy (mean ± std) |\n| :- | :-: |\n|FGVC Aircraft | 95.25 ± 0.02 |\n|Stanford Cars | 96.30 ± 0.04 |\n|CUB | 92.00 ± 0.01 |\n|NA Birds| 91.20 ± 0.02 |\n|Cotton| 69.81 ± 0.04 |\n|Soy| 51.20 ± 0.02 |\n\nIt can be seen that, although with narrow gains, our method provides stable performance across different initializations, with a highest error bound of only ± 0.04% across all 6 datasets.\n\nFGVC being a highly challenging problem domain, we observe that most recent SotA are only able to improve marginally over their predecessors. For instance, the differences between MMAL [47] and CAP[3] on FGVC Aircraft, TransFG [13] and CAP [3] on CUB and NA Birds are even lower than us, or just as low. We understand that these marginal performance gains on the long-studied datasets may not be fully convincing of the efficacy of our method. For that reason, we also considered the Cotton and Soy Cultivar datasets [41]. They have been newly proposed in 2021 and provide a highly challenging novel setting with extremely low inter-class variations for FGVC algorithms to address. We show consistent and stable performance gains of over 4% on both these datasets (Table 1).\n\n**Additional References:** \\\n[A] Grant Van Horn, Oisin Mac Aodha, Yang Song, Yin Cui, Chen Sun, Alex Shepard, Hartwig Adam, Pietro Perona, Serge Belongie. The iNaturalist Species Classification and Detection Dataset. *In* CVPR, 2018.\n",
" The authors propose Relational Proxies, a novel approach that leverages the relational information between the global and local views of an object for encoding its semantic label. I think the main novelty comes from the introduced Relational Proxies and the corresponding comprehensive theoretical and experimental analysis.\n\nWeaknesses One area of improvement for the paper at hand would be clarity, especially with respect to the exposition of the proposed architecture. It takes multiple read throughs in order to identify the actual architecture proposed.\n\nLack of experiments on larger datasets. In a time of ever-growing dataset sizes it would be good to provide and compare results of suchmodels when trained on larger datasets. This is important for judging the impact asimprovements stemming from architecture engineering typically vanish with growing dataset sizes. Insufficient experiments and limited performance gains. N/A.",
" Summary.\n\nThis paper is dedicated to developing algorithms for fine-grained image recognition. They argue it is not enough to distinguish fine-grained categories only based on partial information. Therefore, they propose relational proxies, which leverage the relational information between the global and local views of an object. They also provide theoretical explanations to support the effectiveness of the proposed methods. Experiments on six fine-grained benchmark datasets offer positive results. Pros.\n\n1. The proposed methods make sense and are well-motivated. Both theoretical and empirical analyses are provided to support the effectiveness.\n\n2. The paper is well written and easy to follow. Figure 1 is informative and illustrative.\n\n\n\nCons.\n\n1. There are missing numbers in Table 1. For a comprehensive comparison, it is necessary to complete it. Minor: the caption of tables should be on the above content.\n\n2. The performance gains are marginal, especially on CUB (0.3%) and NA Birds (0.2%)? Any explanations? It seems the proposed methods are less working for bird images. Meanwhile, the error bar is required for Table1 since the current accuracy margin is too small.\n\n3. More network backbones are needed to support the generalization of proposed methods across architectures.\n\n4. As for the study in Figure 2, the number of local views |L| and the size of the local patch should be correlated. A detailed analysis is needed. Refer to the weakness section. Both limitations and potential negative social impacts are discussed in the submission.",
" In the fine-grained setting, discriminating between different classes requires learning how different local parts combine to form the object of interest. In this work, the authors introduce a theoretical framework and a novel method that decomposes FGVC tasks into relation-agnostic feature extraction and cross-view relation learning. They show the superiority of such method through a set of experiments. The problem this work address is relevant. While I am unable to gauge the relevance of the proposed theoretical framework and its broad usefulness to the community, the proposed approach is solid and of interest.\n\nStrengths:\n1. **The experimental setup is solid.** The authors test their method on multiple datasets and consistently show competitive results across all of them. These results support the idea of modeling the cross-view relation in their proposed architecture. This is further demonstrated through ablation studies, that highlight the need for a cross-view relational function and provide further insights into the method.\n2. The theoretical framework established in this work is clear, and while it builds on a lot of definitions, the proofs are simple and easy to follow. \n\nWeaknesses:\n1. **The permutation invariance property is counter-intuitive** The authors introduce the permutation invariance property as a necessary property for a model to solve FGVC tasks. In section 3.4, they introduce a novel transformer (AST) that does not use positional embeddings and is thus permutation invariant. While the permutation invariance property can provide certain desired properties like potentially better generalization, the authors do not provide theoretical evidence for why it would be necessary, and it is not fully clear how it is motivated either. Certain claims in the introduction would actually suggest otherwise: “differ only in the way the attributes combine to generate the global view of the object” or “[features like] the distance between the head and the body, or the angular orientation of the legs”. This suggest that features like positional encoding would actually be critical. \n2. The term “relation” is not explicitly defined and it is unclear what the authors mean. The \"relation-agnostic representation” is established in Definition 4, and while it is clear what it means in mathematical terms, its relation to FGVC problems is not evident. Providing more clarifications would make the text easier to follow.\n3. **Decomposing the problem into relation-agnostic encoder and a cross-view relational function.** The authors do not argue for why this decomposition is necessary when solving FGVC problems- at least in the main text. A discussion of this can be found in appendix 6.3, and I would argue for including this in the main text as it better explains the idea of the relational gap and seems to at least provide an initial motivation for this decomposition.\n 1. In Definition 1, k-distinguishability is defined with respect to two classes only. What does this imply at the level of the entire dataset. What happens if different pairs of classes have different corresponding k values?\n2. In settings where the datasets includes multiple subgroups of classes (cats, dogs, birds) that would be coarsely separable at the group level, but would require fine-grained visual modeling within each group (in birds: white-faced plover vs. kentish plover). How would this approach change? How is k-distinguishability defined? 
How is the number of relational proxies c selected?\n The authors discuss one of the main limitations, namely the local view generation. \n",
" In this paper, authors propose to address the fine-grained image classification from a novel perspective, namely for the fine-grained classes, with the visually similar local features, more attention should be made on leveraging the relational information between the global and local views of an object for encoding its semantic label. Relational proxies are designed based on the proposed theory and achieve the superior results on multiple benchmark datasets. Strengths:\nA hypothesis is made for fine-grained visual classification, i.e. when two categories possess same local attributes and differ only in the way the attributes combine to generate the global view of the object, relation-agnostic approaches do not capture the full semantic information in an input image. This hypothesis is then proved by theory and validated in the experimental parts, which is the main theoretical contribution of the paper. I do like this novel perspective.\n\nWeakness:\nI do not have major concerns regarding the technical details, however, the visualization analysis of the proposed method is lacking, for example, under what circumstance that the proposed model can significant improves the performances and when does it fail. As noted in the weakness part, I'd like to see some visualized validations of the hypothesis to verify the effectiveness and limitations of the proposed method. As noted by authors, the main limitation is considered as the local views obtained by the proposed method is cropped from the global view, which may not be the best representations of local parts.",
" This paper proposes a method based on relationships between views of an object to perform fine grained visual categorization. The authors hypothesise that not only representing local parts but relating them is pivotal to achieve good performance. The authors then experiment using their proposed approach on different FGVC datasets, improving on current methods. Strengths:\n1. The point the paper makes, that the relationship between object parts is important for learning the classification label makes intuitive sense and seems to be important in their experiments.\n\n2. The paper is reasonably clear as to the motivations and experiments run by the authors.\n\nWeaknesses\n1. The authors, in my opinion, over complicate the mathematical explanation, spending two pages explaining why modelling the relationships between object parts is important. I do not think this adds much to the paper and the space would be better spent giving a high level, clear explanation of the intuition.\n\n2. I am not sure why their AST (how they combine local parts) is so different from an attention layer (e.g. in TransFG). It seems to be performing the same operation, so I do not understand how this is fundamentally different than the TransFG architecture. Is the main difference the global embedding of the object that the local crops can be compared against and using all three embeddings -- the z_g, z_L, r -- when computing distances and the final metric?\n\n3. In general the paper is clear: their main contribution is using the global, local and r information in the metric learning and so the insight is that using all 3 sources of information is the most useful. However, in the experiments there seems to be limited improvement from these properties over the base encoder (Table 2). So I wonder how useful these things really are. Moreover, the improvements are small over standard methods, and there is no standard deviation to explain if these results are significant. I further wonder if the authors carefully made sure that their setup was similar to the underlying setup and that the improvement was actually due to their method or better data augmentation or underlying architectures. Stated above. The authors discuss limitations."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
3,
3
] | [
"dOOs3Y1PdHA",
"Fg0b_2gsZOf",
"fawBt7P2c1U",
"XA6qg4Vuqzt",
"nips_2022_xONqm0NUJc",
"0JHyaaknHpQ",
"zXWKCL1bPNp",
"Rvj9lGjmM88",
"zXWKCL1bPNp",
"l4iGnf1QNRo",
"0y1yic1l98q",
"PpSx7RDZdX-",
"nips_2022_xONqm0NUJc",
"nips_2022_xONqm0NUJc",
"nips_2022_xONqm0NUJc",
"nips_2022_xONqm0NUJc",
"nips_2022_xONqm0NUJc"
] |
nips_2022_vMQ1V_z0TxU | Out-of-Distribution Detection with An Adaptive Likelihood Ratio on Informative Hierarchical VAE | Unsupervised out-of-distribution (OOD) detection is essential for the reliability of machine learning. In the literature, existing work has shown that higher-level semantics captured by hierarchical VAEs can be used to detect OOD instances.
However, we empirically show that the inherent issue of hierarchical VAEs, i.e., ``posterior collapse'', seriously limits their capacity for OOD detection.
Based on a thorough analysis of ``posterior collapse'', we propose a novel informative hierarchical VAE to alleviate this issue by enhancing the connections between the data sample and its multi-layer stochastic latent representations during training.
Furthermore, we propose a novel score function for unsupervised OOD detection, referred to as the Adaptive Likelihood Ratio. With this score function, one can selectively aggregate the semantic information across multiple hidden layers of hierarchical VAEs, leading to strong separability between in-distribution and OOD samples.
Experimental results demonstrate that our method can significantly outperform existing state-of-the-art unsupervised OOD detection approaches. | Accept | This paper studies unsupervised out-of-distribution detection based on hierarchical VAE models. In particular, it (1) investigates the posterior collapse issue, (2) proposes a training procedure that increases the mutual information between the input and latent representations, and (3) proposes an adaptive likelihood ratio score for detecting OOD inputs. Multiple reviewers found the method interesting and technically sound.
Post-rebuttal, all reviewers unanimously supported the paper. The contributions and insights presented in this paper will be valuable to the OOD detection community. The AC recommends acceptance.
Please incorporate the reviewers' requested discussions (e.g., computational footprint) in the final version. Several published papers in the reference section are cited in their arXiv versions; please replace these with proper citations in the camera-ready.
| train | [
"5zJ1dOyPq06",
"P1K4uv4il5w",
"Z1abMj_3vb7",
"hbakHmSQpMK",
"2RuxaQTF0AY",
"g0Wl7jzp8Hm",
"9qC4mhcINEX",
"SxTRoxbxUvr",
"UABDFtlrYar",
"sWLkfM5boXK",
"NCw9AhX13eT",
"eSX-_A9LfPl",
"6jnyZBhiIXk",
"n76THBZ-Xso",
"G9jZPscqAiT",
"2nos73R0EW8",
"zg3XRhFCb27",
"duXswuDelz",
"NfOju3OS4So",
"LMbD-6Fe4ft",
"dtG_FJqqbpr",
"Flv0ymK2q-q",
"a_p5KW5sIR7"
] | [
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer 4FiY:\n\nThanks again for your effort in reviewing our paper and give us a great chance to improve the quality of this paper . \n\nConsidering that the discussion period is coming to an end, we would like to know if you have any other questions about our paper, and we are still glad to have a discussion with you in the limited time.\n\nSorry for disturbing you again and again, we only want to let you know your decision is quite important to us.\n\nSincerely\n\nAuthors",
" Best wishes!\n\nAuthors",
" The score should be updated to 7. Shows on my end. ",
" Thanks for your effort in reviewing our paper. \n\nOne more thing, it seems that you haven't updated the socre, which is quite important to us.\n\nBest wishes\nAuthors",
" This is now clear to me.",
" Thanks for your effort in checking our paper! We agree with your comment that the inequality $I_p(\\mathbf{x}, \\mathbf{z_{>k}}) \\geq H_\\{p,q\\}(\\mathbf{x}|\\mathbf{z_\\{>k\\}}) $ will not hold when the entropy $H_{p}(\\mathbf{x})$ is negative in the special cases, though which is not easy to happen when $d$ is a large value like $28\\times28\\times1$ in MNIST (i.e., $|\\Sigma|< e^{-28\\times28\\times(1+\\log 2\\pi)}$). \n\nGiven the cases where $H_{p}(\\mathbf{x})$ is negative, however, please note that $H_{p}(\\mathbf{x})$ is a constant, which is the expectation only related to the **true** distribution $p(x)$ of the data $x$ and does not change along with the training.\nThus, there still exists $I_p(\\mathbf{x}, \\mathbf{z_\\{>k\\}}) = E_{p(\\mathbf{x})p_\\theta(\\mathbf{z_\\{>k\\}}|\\mathbf{x})}\\log p_\\theta(\\mathbf{x}|\\mathbf{z_\\{>k\\}}) + H_{p}(\\mathbf{x}) \\propto E_{p(\\mathbf{x})p_\\theta(\\mathbf{z_\\{>k\\}}|\\mathbf{x})}\\log p_\\theta(\\mathbf{x}|\\mathbf{z_\\{>k\\}})$ in Eq. (8), and this item can be approximated by $H_\\{p,q\\}(\\mathbf{x}|\\mathbf{z_\\{>k\\}})=E_{p(\\mathbf{x})q_\\phi(\\mathbf{z_\\{>k\\}}|\\mathbf{x})}\\log p_\\theta(\\mathbf{x}|\\mathbf{z_\\{>k\\}})$, which indicates that our approach can still work well even if the entropy $H_{p}(\\mathbf{x})$ is a negative constant. \n\nIn other words, the optimization direction of maximizing $H_{p,q}(\\mathbf{x}|\\mathbf{z_\\{>k\\}})=E_{p(\\mathbf{x})q_\\phi(\\mathbf{z_\\{>k\\}}|\\mathbf{x})}\\log p_\\theta(\\mathbf{x}|\\mathbf{z_\\{>k\\}})$ is consistent with maximizing $I_p(\\mathbf{x}, \\mathbf{z_\\{>k\\}})$ even if the entropy $H_{p}(\\mathbf{x})$ is a negative constant in some extreme data distributions.\n\nWe thank you again for your careful review and have revised our paper in this submission.",
" Thank you for the detailed response and efforts on the revised submission. My concerns have been mostly addressed, and I am willing to increase my score to 7.\n\nI wanted to clarify one point about the entropy $\\mathcal{H}_p(x)$ in Eqn. (8) of the revised paper. Consider for example the case where $x$ follows a standard Gaussian distribution, and for simplicity let it be univariate. In this case, the entropy of $x$ would be $1/2 + 1/2 \\log(2 \\pi \\sigma^2)$. This can be negative when $\\sigma^2 < \\frac{1}{2 \\pi e}$. Similarly if $x$ is multivariate Gaussian, the entropy of $x$ can be negative when the determinant of its covariance satisfies $|\\Sigma| < e^{-d (1 + log(2 \\pi))}$, where $d$ is the dimension. \n\nGiven that the entropy of $x$ can be negative for some distributions, how valid is the inequality $I_p(x, z_{>k}) \\geq H_{p,q}(x | z_{>k})$ (on line 189)? Is this a requirement satisfied in practice when the empirical estimate of the entropy are used?",
" Thanks again for your efforts in reviewing!\n\nAs your awesome suggestion in \"Questions\" that \n**\"Testing the approach on MNIST and CIFAR is great, but what about more natural images\"**, \nwe provide additional comparisons on more dataset pairs in appendix **P**, including Tiny-Imagenet, LFWPeople, Flower102, Places365, and Food101.\n\n",
" Thanks again for your efforts in reviewing! We carefully read the papers you recommended and we give an additional brief summary of them below:\n\n1) For paper [1], it is based on the belief that correctly classified examples tend to have greater maximum softmax classification probabilities than out-of-distribution examples.\n\n2) For paper [2], it is an extension of ODIN, and it still relies on the classification confidence score.\n\n3) For paper [3], it is designed for pre-trained softmax neural classifiers.\n\n4) For paper [4], it replaces the softmax score with the energy score for pre-trained classifiers, but it still needs the label to estimate the energy.\n\n5) For paper [5], it applies rectified activation operation to the penultimate layer of a classifier.\n\nIn short, all these papers for OOD detection still require the category labels of in-distribution data samples, which is not applicable in our setting.\n\n\nWe note that, although the OOD detection methods especially with the help of in-distribution data labels have been well studied, \nthe purely unsupervised OOD detection (no labels and no prior assumption for the OOD data) is still rarely investigated \ndue to the challenging setting. \nHowever, unsupervised OOD detection is suitable and practical for more scenarios, especially in cases where labels are not available.\n\n\nBesides, as your awesome suggestion in \"Weakness\" that \n**\"only two in-distribution datasets are evaluated, while it is common to evaluate on more ID/OOD dataset pairs\"**, \nwe provide addtional comparisons on more dataset pairs in **appendix P**, including Tiny-Imagenet, LFWPeople, Flower102, Places365, and Food101.",
" Dear Reviewer 4FiY,\n\nThanks a lot for your valuable comments to improve this paper! Are there unclear explanations here based on our response? We are willing to further clarify them and have a discussion with you in the following days!\n\nBest regards,\n\nAuthors",
" I have read the response and revised paper, and my concern has been resolved. I would like to increase my score to 7.",
" **For Question 9**\n\nThanks for your suggestions. We have revised Section 3.4 and give a brief explanation here. \n\nFirstly, we need to point out that, for the likelihood-ratio score function $\\mathcal{LLR}^{>k}$ [19], cherry picking the hyperparameter $k$ on testing OOD samples is unreasonable for unsupervised OOD detection, but an inappropriate choice of $k$ will bring negative impact on the performance.\nThus, the intuition of designing $\\mathcal{LLR}^{ada}$ is to move beyond the choose of $k$ but adaptively enhance the importance of some discriminative terms, like $\\mathcal{LLR}^{>2}$, in the overall score function for OOD detection. \nWith $\\mathcal{R}(x, z_{>k})$ to measure the relevance between $x$ and $z_{>k}$, we find that the adaptive weight $\\frac{\\mathcal{R}(x, z_{>k-1})}{\\mathcal{R}(x, z_{>k})}$ will be relatively large when the data information drop rapidly at the current hidden layer, like $k=2$, which naturally meets our requirements for designing $\\mathcal{LLR}^{ada}$.\n\nThrough combining $\\frac{\\mathcal{R}(x, z_{>k-1})}{\\mathcal{R}(x, z_{>k})}$ and $\\mathcal{LLR}^{>k}$, there could be several ways to design the final score function. The reason why we choose $\\mathcal{LLR}^{ada}$ (a weighted difference of log-likelihood ratios) in Eq. (10) is that it can numerically omit some terms that occur ``posterior collapse'', as discussed in Q.4 to review jv8d. \n\nThen, we believe that there will be other more principled or effective ways to redesign the score function through combining $\\frac{\\mathcal{R}(x, z_{>k-1})}{\\mathcal{R}(x, z_{>k})}$ and $\\mathcal{LLR}^{>k}$, and we are working on this. \n\n\n**For Question 10**\n\nThanks for your careful review, we will revise it.\n\n**For Question 11**\n\nThanks for your notification, we have already cited these approaches in Section 4.1 and also added the citations of baselines in all Tables in the revision. For the details of these baselines, please refer to Appendix E named ``Details of the Baselines'' in our first submitted manuscript.\n\n**For Question 12**\n\nThanks for the suggestion! We have added more comparisons under these metrics (AUPRC, AUROC, and FPR80) with non-VAE methods in Appendix K and other methods designed for alleviating \"posterior collapse'' in Appendix J.\n\n**For Question 13**\n\nSome of the results of the baseline shown in Table 1 are directly cited from their original papers, and we only report the results under the same experimental setting as ours. We have also provided additional comparisons of the baselines in Appendix K and J.\n\n**For Question 14**\n\nSorry, this is a typo and we have fixed it in the revision. \n\n**For Question 15**\n\nThanks for your suggestion. We have reported the area under the ROC curves in Fig. 3 of our revision.;",
" Thanks for your effort in reviewing the paper!\n\n**For weakness 1**\n\nThanks, we have revised our paper and provided more comments so that our method can be intuitively understood. We have given an intuitive explanation of the likelihood-ratio score in our revision (Section 3.2) and left the derivation details in Appendix B. We have also added more comments to illustrate Eq. (8) and Eq. (9) in the revision.\n\n**For weakness 2**\n\nThanks for your constructive suggestion. \nWe have discussed these baselines in Section 4.1 and left more details to Appendix E named \"Details of the Baselines''. \nWe promise that we will try to move the definitions of these baselines in future revision.\n\n**For weakness 3**\n\nThanks for your constructive suggestion. \nAdditionally, we have provided more experimental results in the revised Appendix, including \n1) comparison with more methods designed for alleviating posterior collapse like Oversmoothing VAE, Warm-up in Appendix J;\n2) comparison with more non-VAE baselines for unsupervised OOD detection like flow-based model in Appendix K; \n3) comparison with different score methods in Appendix L and M; \n4) t-sne visualization of hierarchical latent representations in Appendix N;\n5) measure of reconstruction quality with partial generative models $p_\\theta(x|z_\\{>k\\})$ and visualization of data samples generated from the prior distribution in Appendix O.\n\n**For weakness 4**\n\nThanks, we have fixed the typos and grammatical issues you mentioned, and will further improve the quality of the paper.\n\n**For Question 1**\n\nThanks for your awesome suggestion for utilizing the class label into the VAE for OOD detection. A very straightforward idea is that the label could be used to guide the learning of latent space, e.g., we could model the posterior distribution $q(z|x)$ as a Gaussian-Mixture-Model (GMM) $q(z|x) = \\sum_{i=1}^{\\operatorname{C}} w_i \\mathcal{N}(z_i|\\mu_i(x), \\sigma_i(x))$, where $C$ is the number of classes and the label could be used to guide the learning of the coefficient $w_i$ of the GMM, which indicates the probability for each classes conditioned on $x$. We think this idea could further improve the performance of the OOD detection with the help of class labels during training. We are trying to work in this direction.\n\n**For Question 2**\n\nThanks for your bringing these excellent works to our eyes. Due to the time limitation, we can only capture the main idea of these works and find them all belong to supervised OOD detection methods. Thus, we choose to only discuss these work in the first paragraph of the introduction in this revision, and will carefully study them in the future.\n\n**For Question 3**\n\nYes, it is correct that the mentioned phenomenon will not always happen. The point we want to highlight is that our method can successfully deal with these cases, and also still work well on daily scenarios (the mentioned phenomenon does not happen).\n\n**For Question 4**\n\nThanks for your careful review! We have clarified $\\mathbf{z}_{L+1}:=\\mathbf{x}$ in our revision.\n\n**For Question 5**\n\nThanks, we have given an intuitive explanation of the likelihood-ratio score in our revision (Section 3.2), and left the derivation details in Appendix B.\n\n**For Question 6**\n\nThanks, we have fixed this typo!\n\n**For Question 7**\n\nThanks for your notification. 
Actually, $H_p(x)$ is the entropy conditioned on the **true** distribution $p(x)$ of data $x$, and a reasonable prior assumption for it is that it should obviously not be the extreme case like the impulse function $\\delta(x)$ but a smoother one, where the value for it could be bounded into 0~1 in our setting. Thus, in this setting, the entropy $H_p(x)$ is a non-negative value.\n\n**For Question 8**\n\nThanks, we have added more comments to illustrate Eq. (8) and Eq. (9) in the revision.\n\n\n",
" Thanks for your interest in our work!\n\n**For Limitations 1**\n\nThanks for your suggestions. We have given an intuitive explanation of [1] in our revision (Section 3.2), and left the derivation details to Appendix B. \n\n**For Limitations 2**\n\nThanks for your suggestions. We have revised Section 3.4 and give a brief explanation here. \n\nFirstly, we need to point out that, for the likelihood-ratio score function $\\mathcal{LLR}^{>k}$ [19], cherry picking the hyperparameter $k$ on testing OOD samples is unreasonable for unsupervised OOD detection, but an inappropriate choice of $k$ will bring negative impact on the performance.\nThus, the intuition of designing $\\mathcal{LLR}^{ada}$ is to move beyond the choose of $k$ but adaptively enhance the importance of some discriminative terms, like $\\mathcal{LLR}^{>2}$, in the overall score function for OOD detection. \nWith $\\mathcal{R}(x, z_\\{>k\\})$ to measure the relevance between $x$ and $z_\\{>k\\}$, we find that the adaptive weight $\\frac{\\mathcal{R}(x, z_\\{>k-1\\})}{\\mathcal{R}(x, z_\\{>k\\})}$ will be relatively large when the data information drop rapidly at the current hidden layer, like $k=2$, which naturally meets our requirements for designing $\\mathcal{LLR}^{ada}$.\n\nThrough combining $\\frac{\\mathcal{R}(x, z_\\{>k-1\\})}{\\mathcal{R}(x, z_\\{>k\\})}$ and $\\mathcal{LLR}^{>k}$, there could be several ways to design the final score function. The reason why we choose $\\mathcal{LLR}^{ada}$ in Eq. (10) is that it can numerically omit some terms that occur ``posterior collapse'', as discussed in item 4 to review jV8d.\n\n**For Questions 1**\n\nThanks for your suggestions. We absolutely agree that any VAE-based extension should inherit the original model properties of VAE. In such consideration, we have included more VAE-related experiments in Appendix N and O, including \n1) measure of reconstruction quality with partial generative models $p_\\theta(\\mathbf{x}|\\mathbf{z}_{>k})$ (Table 15 in Appendix O);\n2) t-sne visualization of hierarchical latent representations (Fig. 7 in Appendix N);\n3) {\\color{cyan}visualization of the data samples generated from the prior distribution} (Fig. 9 in Appendix O).\nFrom these results, we can find that our method can perverse the versatility of VAE.\n\n**For Questions 2**\n\nThanks. We have provided additional comparisons of various methods to alleviate \"posterior collapse'' of HVAEs in Table 6 and Table 7 of Appendix J.\nAs shown in the results, with the same score function for OOD detection, HVAEs trained with the warm-up scheme can outperform the vanilla VAE without any modification on ELBO.\n\n\n**For Questions 3**\n\nThanks for your awesome suggestion. As discussed in Q.2, there could be several ways to design the final score function through combining $\\frac{\\mathcal{R}(\\mathbf{x}, \\mathbf{z}_\\{>k-1\\})}{\\mathcal{R}(\\mathbf{x}, \\mathbf{z}_\\{>k\\})}$ and $\\mathcal{LLR}^\\{>k\\}$, where automatically picking $k$ with the largest $R$-ratio could also be an interesting and effective way. \nWe note that we have included the method you suggested as a score function baseline, termed $\\mathcal{LLR}^{opt_k}$, in our experiments, as shown in Table 10, Table 11, and Table 12 of Appendix.\nFrom the results, we can find that $\\mathcal{LLR}^{ada}$ can stably outperform $\\mathcal{LLR}^{opt_k}$. 
The potential reason could be that the optimal choices of $k$ for data samples are quite different, and the non-unified scales of scoring functions will cause confusion for OOD detection (the scales of $\\mathcal{LLR}^{>1}$ and $\\mathcal{LLR}^{>2}$ are different). \nThus, $\\mathcal{LLR}^{ada}$ could be a more stable and soft choice when compared to $\\mathcal{LLR}^{opt_k}$.\n\n**For Questions 4**\n\nWe appreciate it so much for your careful reviewing our paper! \nThe reason is that we only exhibit the reconstructed samples with the highest probability in Fig. 2. \nSpecifically, for each hidden layer $l$, we deterministically estimate $z_l$ with the mean vector of Gaussian-distributed $p_{\\theta}(z_l|z_{>l})$ without noise sampling, resulting in that the data samples generated from HAVE with $p_\\theta(x|z_{>3})$ and $p_\\theta(x|z_{>4})$ are exactly same when the estimated posterior $q_\\phi(z_4|z_5, x)$ collapses to its prior $p_\\theta(z_4|z_5)$. As shown in Table 14 of Appendix N, the KL-divergence scores of the 4-th and 5-th hidden layers are almost close to zero, which indicates the appearing of `\"posterior collapse''.\n\n\nTo intuitively demonstrate that the posterior does not collapse to a single point, we visualize the data samples generated from $p_\\theta(x|z_{>k})$ by taking the latent variables $z_k$ sampled from the posterior $q_\\phi(z_{k}|z_{>k}, x)$ as input, where $x$ is a fixed data point. As shown in Fig. 8 of Appendix~N, the diversity of the generated samples demonstrate that the posterior $q_\\phi(z_{k}|z_{>k}, x)$ collapses to its prior distribution $p_\\theta(z_k|z_{>k})$ rather than a single point.",
" **For Q3: \"How does the computational footprint change\"**\n\nTake the vanilla VAE equipped with Likelihood Ratio as the baseline. For the space complexity, our method doesn't introduce any additional model parameters or memory cost. For the time complexity, compared to the baseline, our method requires additional $L-1$ times computation cost to calculate those expected log-likelihood terms in the loss function, specifically $\\\\frac{1}{L} \\\\sum\\\\nolimits\\_{k=0}^{L-1} \\\\mathbb{E}\\_{p\\_\\\\theta(\\mathbf{z}\\_{\\\\leq k}\\|\\mathbf{z}\\_{> k})q\\_\\\\phi(\\mathbf{z}\\_{> k}\\|\\mathbf{x})} \\\\left\\[ \\\\log p\\_\\\\theta(\\mathbf{x}\\|\\mathbf{z}\\_{\\\\leq k}) \\\\right\\]$, where $L$ denotes the number of layers and will be a relative small number in practice.\n\n**For limitations \"The limitations in Sec. H are a bit unspecific (\"additional computational burden\"). The broader impact in Sec. I is perhaps a bit more technical than it should be.\"**\n\nThanks for your suggestion, we have revised these in the revision.",
" **4. For \"The idea behind the proposed score seems to be mainly intuition-based and lacks theoretical backing. The explanation in Appendix C does not provide further theoretical insights.\"**\n\nThanks for your suggestions! The design of the adaptive score function is mainly inspired by the insight that the adaptive weight $\\frac{R(x, z_\\{>k-1\\})}{R(x, z_\\{>k\\})}$ will be relatively large when the data information drop rapidly, and can be used to adaptively enhance the importance of some discriminative terms, like $LLR^\\{>2\\}$, in the overall score function for OOD detection. Compared to $LLR^\\{>k\\}$'s unreasonably cherry picking $k$ on the whole testing set, the developed $LLR^\\{ada\\}$ does move beyond the choice of $k$ and still achieve competitive OOD detection performance in an unsupervised manner.\n\nFor theoretical analysis, we note that the $LLR^\\{ada\\}$ in Eq. (10) can be rewritten as \n$$\\\\begin{aligned}\nLL{R^{ada}} = \\\\frac{{R(x,{z\\_{ > L - 2}})}}{{R(x,{z\\_{ > L - 1}})}}LL{R^{ > L - 1}} + \\\\sum\\\\nolimits\\_{k = 0}^{L - 2} {(\\\\frac{{R(x,{z\\_{ > k}})}}{{R(x,{z\\_{ > k + 1}})}} - \\\\frac{{R(x,{z\\_{ > k - 1}})}}{{R(x,{z\\_{ > k}})}})LL{R^{ > k}}} - LL{R^{ >- 1}}.\\\\end{aligned}$$\n\nRecall to the visualization exhibited in Fig. 2, the weight score of $LLR^{>k}$ will be close to zero when ``posterior collapse'' occurs, like $k=3$, because no information decay will cause $\\\\frac{{R(x,{z\\_{ > k}})}}{{R(x,{z\\_{ > k + 1}})}} = \\\\frac{{R(x,{z\\_{ > k - 1}})}}{{R(x,{z\\_{ > k}})}} = 1$;\nwhen the data information suddenly drop rapidly, like $k=2$, the weight score will be relatively large, leading to $\\frac{{R(x,{z_{ > k}})}}{{R(x,{z_{ > k + 1}})}} \t\\gg \\frac{{R(x,{z_{ > k - 1}})}}{{R(x,{z_{ > k}})}}$; on the contrary, if the data information drop slowly, the weight score will be relatively small, because $\\frac{{R(x,{z_{ > k}})}}{{R(x,{z_{ > k + 1}})}} \\approx \\frac{{R(x,{z_{ > k - 1}})}}{{R(x,{z_{ > k}})}}$, like $k=1$ or $k=0$. \nThus, $\\mathcal{LLR}^{ada}$ can finally achieve the goal of adaptively enhancing the importance of some discriminative terms, like $\\mathcal{LLR}^{>2}$, in the overall score function for OOD detection. \n\nIt would help your understanding of different score methods with a numerical example in Table 12 and 13 in Appendix M.\n\n**5. Typos**\n\nThanks for your notification, we have fixed these typos in the revision.\n\n**6. For \"Tab. 2 left should probably read “FashionMNIST(in)/MNIST(out)\"**\n\nThanks, we have fixed it in the revision.\n\n**7. For \"The exposition could be improved (e.g. l. 95 “at the cost of bringing heavy burdens”). It is not always clear in notation, if it is a definition or an implied equality (e.g. l. 
120,188).\"**\n\nThanks, we have explained why Likelihood Regret will bring heavy computation burdens in the previous response to Q.~2, and modified the corresponding sentence as follows:\n\n\"A pioneering VAE-based OOD detection method is Likelihood Regret (LRe) [23] calculated by iteratively fine-tuning the decoder parameters of VAE, which is time-consuming but achieves competitive performance in an unsupervised manner.''\n\nThanks, the equations in Line 120 and 188 are both implied equations, such as $p\\_\\\\theta(x\\|z\\_{>k}) = \\\\int p\\_\\\\theta(x, z\\_{\\\\leq k}\\|z\\_{>k}) dz\\_{\\\\leq k}\n= \\\\int p\\_\\\\theta(x\\| z\\_{\\\\leq k}) p\\_\\\\theta(z\\_{\\\\leq k} \\|z\\_{>k}) dz\\_{\\\\leq k} = E\\_{p\\_\\\\theta(z\\_{\\\\leq k}\\|z\\_{> k})}\\\\left\\[ p\\_\\\\theta(x\\|z\\_{\\\\leq k}) \\\\right\\].$\n\n**8. For \"The acronyms used in Tab. 1 are not all self-explanatory, nor is it very clear which previous works specifically they come from. Please make it clearer in the revision\" **\n\nThanks for your notification, we have already cited these approaches in Section 4.1 and also added the citations of baselines in all Tables in the revision. For the details of these baselines, please refer to\nthe Appendix E named “Details of the Baselines” in our first submitted manuscript.\n\n**For Q1: \"What is the OOD accuracy with the vanilla VAE loss (i.e. without the informative loss), but with the adaptive criterion\"**\n\nThanks for your suggestions, we have provided the additional experimental results of vanilla VAE with adaptive criterion as shown in Table 6, Table 7, and Table 12 in Appendix.\nFrom the results, we can see that the developed Adaptive Likelihood Ratio can still outperform other OOD score functions on the vanilla VAE. Moreover, we provide an experimental analysis of the effect of different score methods on vanilla HVAE without informative loss in Appendix M.\n\n**For Q2: \"Testing the approach on MNIST and CIFAR is great, but what about more natural images\"**\n\nThanks for your suggestions, due to limited time, we cannot provide experimental results on more natural images in the response. \nBut we promise that we will try to include these experiments during the discussion period if we can, and even evaluate our method on large-scale datasets in future work.\n\n",
" Thanks for your interest in our work!\n\n**1. For \"While the OOD is the focus of this work, the approach leads to increased in-domain likelihood overall, in some cases rather substantially so. (c.f. Tab. 4 in Appendix)\"**\n\nThanks for your careful review.\nFirstly, we need to highlight that the metric $L_x$ in Table. 4 is the ELBO ($\\log p(\\mathbf{x})$ rather than the reconstruction log-likelihood ($\\log p(\\mathbf{x}|\\mathbf{z})$). Then, what we want to highlight in Table 4 is that our method can lessen the gap between $L_x$ and $L_x^{>4}$ (a smaller $LLR^{>4}$) for in-distribution testing data samples, illustrating why our method can outperform the other baselines.\n\nTo better understand why our method will lead to an increased in-domain $L_x$, which consists of an expected likelihood term and several KL divergence terms, we provide additional comparisons of layer-wise log-likelihood and KL divergence between HVAE and our method, as shown in Table 15 and Table 14 of Appendix respectively.\nAs the log-likelihood results shown in Table 15, our method can achieve comparable performance with HVAE at the first-layer likelihood $p_\\theta(\\mathbf{x}|\\mathbf{z}_{>0})$, and significantly outperform it at higher-level likelihoods $p_\\theta(\\mathbf{x}|\\mathbf{z}_\\{>k\\})$ for $k>0$. From the KL-divergence results shown in Table 14, we can find that the KL-divergence scores of our method will be larger than those of HVAE at higher layers, indicating that our method can effectively alleviate ``posterior collapse'' at higher layers.\nThus, a comparable first-layer likelihood $p_\\theta{(\\mathbf{x}|\\mathbf{z}_\\{>0\\})}$ and a larger summation of KL-divergence terms together lead to a decreased in-domain $L_x$ for our method, which leads to a higher Average bits per dim.\n\nTo make a comprehensive comparison, we also measure the reconstruction quality with partial generative models $p_\\theta(\\mathbf{x}|\\mathbf{z}_\\{>k\\})$ and visualize the data samples generated from the prior distribution in Appendix O.\n\n**2. For \"Both the proposed training and the adaptive OOD score may be a suboptimal choice. For example, on FashionMNIST->MNIST the likelihood regret approach has a somewhat superior OOD accuracy.\"**\n\nFor the training scheme in informative HVAE, it has been proven to be effective to alleviate ``posterior collaspe'' and further improve the performance of unsupervised OOD detection in all our experiments. \nFor $LLR^{ada}$, in some cases, we admit that it will be a suboptimal choice compared to Likelihood Regret (only 0.8\\% smaller in AUROC when detecting MNIST as OOD). However, we need to point out that the calculation of Likelihood Regret is extremely time-consuming. For **each** testing sample, after fixing the parameters in the encoder network of VAE obtained by pretraining, it requires the model to iteratively \nfinetune the parameters of the decoder network blue **only for one data sample** until convergence to calculate Likelihood Regret score for it, which can hardly achieve fast in out-of-sample prediction and be applied in real-world applications (not all machines support the finetune). On the contrary, the calculation of $LLR^{ada}$ is straightforward and is more suitable for real-time prediction.\n\n**3. For \"Comparisons could have been a bit more extensive (only 2 training sets). Not compared to previous non-VAE and more recent works (e.g. [A])\"**\n\nThanks for your recommending the paper [A]. 
\nTo make a comprehensive comparison with non-VAE deep likelihood-based }models on unsupervised OOD detection, we provided additional experimental results in Table 8 and Table 9 of Appendix K, including Flow+Group [A], Glow [1], and PixcelCNN++ [2].\nSpecifically, Flow+Group [A] is an SOTA flow-based group OOD detection method to justify whether a batch of samples \\{${x_1, x_2, ..., x_n}$\\} $(n>1)$ is an OOD batch, rather than a sample-level OOD detection method ($n=1$}). \nLuckily, the authors have extended their method [A] via data augmentation to a sample-level OOD detection situation, i.e., $n=1$, whose setting is the same as us, and therefore we directly cite their OOD detection results reported in Appendix F;\nFor Glow [1], which thoroughly shows that flow-based models tend to assign higher likelihood scores to OOD samples, we report the results with their released code for OOD detection; \nFor PixcelCNN++ [2], which proposes to use an auto-regressive model for OOD detection with the help of additional OOD datasets, like NotMNIST dataset, we report the results of PixcelCNN++ under our purely unsupervised setting (no additional datasets).\n\nFrom the results shown in Table 8 and Tabl 9 of Appendix K, we can find that Flow+Group and our method can significantly outperform the other non-VAE methods, while our method is still better than Flow+Group [A].\n\n[1] Nalisnick et al. \"Do deep generative models know what they don't know?\". \n\n[2] Ren et al. \"Likelihood Ratios for Out-of-Distribution Detection\". ",
" **For Q3: \"Experiments are limited, Authors are encouraged to compare the results against the more recent state-of-the-art methods''**\n\nThanks for your recommending these excellent works, and we have carefully read these papers. \nWe have also briefly summarized these papers as follows:\n1) Paper (a) develops an energy-based OOD method by replacing the softmax score with an energy one, but still utilizes the groundtruth class labels to estimate the corresponding energy;\n2) Paper (b) exploits the property of backpropagation gradients derived from the KL-divergence between the softmax output and a uniform distribution;\n3) Paper (c) is still a classifier-based method (labels are needed), which is developed based on the insight that DNN should be invariant to the transformation like data augmentation.\n\nIn short, all of these methods are developed for supervised OOD detection and cannot be applied to unsupervised scenarios. We have discussed these supervised methods in the first paragraph of the Introduction).\n\nWe emphasize that our work focuses on investigating purely unsupervised OOD detection methods, where the in-distribution data's class labels are not available and no prior knowledge of OOD data is allowed (no additional OOD datasets to help training and no assumption about the OOD data type), and have compared it with SOTA unsupervised OOD methods in the experiments, including HVK and a series of HVAE baselines.\nMoreover, we have also included recent popular supervised methods termed \"Label\" and \"Prior\" as compared baselines in Table~1.\nConsidering that the aforementioned three papers [a] [b] [c] belong to the method category \"Label\", we have also included their OOD detection results in Table 1, such as paper [a] (termed as \"EN\") and paper [c] (termed as \"iDE\").\n\nAdditionally, we have provided more experimental results in the revised Appendix, including \n1) comparison with more methods designed for alleviating posterior collapse like Oversmoothing VAE, Warm-up in Appendix J;\n2) comparison with more non-VAE baselines for unsupervised OOD detection like flow-based models in Appendix K; \n3) comparison with different score methods in Appendix L and M; \n4) t-sne visualization of hierarchical latent representations in Appendix N;\n5) measure of reconstruction quality with partial generative models $p_\\theta(\\mathbf{x}|\\mathbf{z}_{>k})$ and visualization of data samples generated from the prior distribution in Appendix O.\n--------------------------------------------------------------------------------------\n6) **(update on 8 August) we add an additional comparison on more dataset pairs in appendix P, including Tiny-Imagenet, LFWPeople, Flower102, Places365, and Food101.**",
" Thanks for your effort in reviewing our paper! \n\n**For Q1: \"Comparison and the fundamental difference with BIVA and Oversmoothing VAE.''**\n\nFor BIVA [20], as discussed in the first paragraph of Section 3.3, it focuses on alleviating \"posterior collapse'' by modifying the generative network structure of HVAE. \nSpecifically, BIVA is characterized by a skip-connected generative model and an inference network formed by a bidirectional stochastic inference path, whose generative process forces the concatenation of latent variables $\\\\{\\mathbf{z}\\_k\\\\}\\_{k=1}^{L}$ to be **physically** linked to the generated samples, potentially hurting the hierarchy of multiple latent representations. \nWithout modifying the generative network structure of HVAE, the developed informative HVAE tries to introduce **virtual** skip-connection-liked structures into the objective function for training VAEs, specifically $E\\_{q\\_\\\\phi(\\mathbf{z}\\_{> k}\\|\\mathbf{x})}\\\\left\\[ \\\\log p\\_\\\\theta(\\mathbf{x}\\|\\mathbf{z}\\_{> k}) \\\\right\\]$ terms to build straightforward connections between the observation $\\mathbf{x}$ and latent variables $\\mathbf{z}\\_{>k}$ at higher layers,\nand its main idea can be applied to any existing hierarchical VAE, which is one of the main contributions of our work.\n\nFor Oversmoothing VAE [1], its main idea is that an inappropriate variance $\\sigma\\_{\\mathbf{x}}$ will cause the oversmoothness of the decoder and lead to ``posterior collapse'', where the $\\sigma_{\\mathbf{x}}$ is the variance parameter in the likelihood function $p_{\\theta}(\\mathbf{x}|\\mathbf{z})=\\mathcal{N}(\\mathbf{x}|\\mu_{\\mathbf{x}}(\\mathbf{z}),\\sigma_{\\mathbf{x}}^{2}\\mathbf{I})$. Please note that, in Oversmoothing VAE, $\\sigma_{\\mathbf{x}}$ is not parameterized by networks, which is directly updated with a 1-dimensional value related to the training objective during the learning process instead. Thus, Oversmoothing VAE is developed to alleviate collapse specifically for the VAEs whose variances $\\sigma_{\\mathbf{x}}$ is fixed as a 1-dimensional constant parameter rather than being parameterized by networks like conventional VAEs, such as $\\sigma_{\\mathbf{x}}(\\mathbf{z})$. However, in our paper, we adopt the most original settings for VAEs [2], where the variances $\\sigma_{\\mathbf{x}}$ is parameterized by fully-connected networks and has the same dimension as $\\mathbf{x}$, where the likelihood function is $p_{\\theta}(\\mathbf{x}|\\mathbf{z})=N(\\mathbf{x}|\\mu_{\\mathbf{x}}(\\mathbf{z}),\\sigma_{\\mathbf{x}}(\\mathbf{z}))$. \nWe note that the developed informative HVAE and Oversmoothing VAE alleviate \"posterior collapse'' from exactly different perspectives, and we have also compared their effectiveness in the following experiments.\n\nThanks for your bringing Oversmoothing VAE [1] to our eyes, and we have provided an additional comparison to demonstrate our method can beat it on unsupervised OOD detection, as the experimental results shown in Table 6 and Table 7 of Appendix J. For BIVA [20], we have already treated it as an important baseline in our experiments, and the comparison results can be found in the right part of Table 2 (termed as \"HVK\", since the HVK choose BIVA as their backbone in detecting SVHN as OOD), Table 3, Figure 3 (a~b) and Figure 4} of our first submitted manuscript. 
Besides, we also add an additional comparison between BIVA and other methods in Table 6 and Table 7 of Appendix J.\n\n[1] Takida et al., \"Preventing Posterior Collapse Induced by Oversmoothing in Gaussian VAE''.\n\n[2] Diederik P Kingma and Max Welling. ``Auto-encoding variational bayes''.\n\n\n**For Q2: \"In table 1, the approaches are not cited.''**\n\nThanks for your notification, we have already cited these approaches in Section 4.1 and also added the citations of baselines in all Tables in the revision. For the details of these baselines, please refer to the Appendix E named \"Details of the Baselines'' in our first submitted manuscript.\n\n\n\n",
" The paper investigates the problem on \"posterior collapse\" in hierarchical variational autoencoders (HVAE), and provides a theoretical explanation for why this occurs during training based on the ELBO lower bound. It further discusses why posterior collapse can affect the OOD detection performance of the HVAE model. Based on these insights, the paper proposes to enhance the connection (dependence) between the input an its multilayer stochastic latent representations based on an informative HVAE training objective. It also proposes an adaptive likelihood ratio score for detecting OOD inputs, which enhances the separation in in-distribution and OOD inputs, and does not depend on the specific choice of higher-level latent layer representations used. Strengths:\n\nThe paper addresses an important problem of posterior collapse observed in hierarchical VAEs, which significantly limits the ability of the model to be used for OOD detection. The motivation and background on posterior collapse based on mutual information is interesting, and there is adequate discussion of related work on the problem. Building on the insights, the paper proposes a novel training objective (informative HVAE) which alleviates the issue of posterior collapse by enhancing the dependence between the input and the higher level latent variables. The experiments are fairly extensive and compare the OOD detection performance of the proposed method with a number of baselines from different categories of OOD detection. \n\nWeaknesses:\n\n- The technical details are hard to follow in some places and lacks enough discussion. For instance, the likelihood-ratio score in Eq. (6) and its approximation are not discussed clearly. Same comment for the informative HVAE loss in Eqs. (8) and (9). \n\n- The paper compares with several baselines, but they are not clearly defined in the main paper (please questions on the experiments). \n\n- In the experiments, only two in-distribution datasets are evaluated, while it is common to evaluate on more ID/OOD dataset pairs. Some additional results, including an ablation study on the adaptive log-likelihood score, could be provided in the appendix to make the evaluation stronger.\n\n- A number of typos and grammatical issues, which could be easily fixed by proof-reading. \n While the paper is technically strong, it is a bit hard to follow and there are some logical jumps that are not obvious. Overall, the presentation of ideas could be improved.\n\n1. It is mentioned that the proposed method is completely unsupervised, i.e., it does not require labels for the in-distribution data, nor does it require auxiliary OOD data for the detection algorithm. The proposed method is different from many conventional OOD detection methods in that it does not depend on a classification model (DNN) for its scoring. In this sense, it is more like an anomaly detection method.\nCan the authors comment on whether the proposed method can be improved by utilizing the class labels (if available) by modeling the class-conditional distributions of $x$? \n\n1. The paper does not discuss a number of prior works on OOD detection in section 2. These include methods not based on deep generative models such as maximum softmax probability [1], Generalized ODIN [2], Deep Mahalanobis [3], Energy-based [4], ReAct [5] etc.\n\n1. On line 89, it is mentioned that likelihood methods based on generative models always assign a higher likelihood to OOD inputs compared to ID inputs. 
While this happens in some cases, it is not always true.\n\n1. In equation (3), it should be clarified that $z_{L+1} := x$ as a special case. Otherwise the dependence on $x$ is not clear in the second term. \n\n1. Please provide a discussion on the log-likelihood ratio score in Eqn. (5).\n\n1. On line 161, it should be $q_\\phi$ and not $q_\\theta$.\n\n1. On line 185, it is mentioned that the entropy of the marginal distribution on $x$ is a positive constant. This does not have to be the case for continuous $x \\in \\mathbb{R}^d$. If so, is the lower bound on the mutual information correct?\n\n1. The informative loss proposed in Eqn. (8) and its lower bound could be discussed in more detail. \n\n1. It is not obvious how the authors arrive at the proposed adaptive log-likelihood score. Please provide some intuition about why this is formulated as a weighted difference of log-likelihood ratios. Could this be arrived at in a more principled way? Why does it enhance the separation between ID and OOD inputs?\n\n1. Minor: On lines 241 - 243, it is mentioned that the metrics are threshold independent, but the `FPR80` metric does depend on the threshold.\n\n1. The paper compares with different categories of OOD detection methods, but some of them are not defined in the main paper. This makes it a big vague to read the tables and figures. \n\n1. In Table 1, why is the AUPRC not reported? This metric is sometimes more effective is capturing the separation between the OOD and in-distribution scores, especially at low FPR. \n\n1. In Table 1, it is not clear why some of the baseline methods are missing for the CIFAR10 / SVHN datasets. The format is a bit confusing to follow. \n\n1. In Table 2, why is SVHN used as the OOD dataset for FashionMNIST, whereas MNIST is used as the OOD dataset in Table 1? More complete results could be provided in the Appendix. \n\n1. In Figure 3, could the authors also report the area under the ROC curves, maybe as part of the legend. \n\n\n### References\n\n[1] Hendrycks, Dan, and Kevin Gimpel. \"A baseline for detecting misclassified and out-of-distribution examples in neural networks.\" arXiv preprint arXiv:1610.02136 (2016).\n\n[2] Hsu, Yen-Chang, et al. \"Generalized odin: Detecting out-of-distribution image without learning from out-of-distribution data.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.\n\n[3] Lee, Kimin, et al. \"A simple unified framework for detecting out-of-distribution samples and adversarial attacks.\" Advances in neural information processing systems 31 (2018).\n\n[4] Liu, Weitang, et al. \"Energy-based out-of-distribution detection.\" Advances in Neural Information Processing Systems 33 (2020): 21464-21475.\n\n[5] Sun, Yiyou, Chuan Guo, and Yixuan Li. \"React: Out-of-distribution detection with rectified activations.\" Advances in Neural Information Processing Systems 34 (2021): 144-157.\n\n The discussion of limitations and broader impact in Appendix H and I is adequate. ",
" This paper presents a hierarchical VAE (HVAE) for out-of-distribution (OOD) detection. Authors investigate the 'posterior collapse' problem with HVAE models and propose a solution to mitigate the by increasing the mutual information between the input and latent representations. Finally, an adaptive likelihood ratio-based measure is proposed for the HVAE models to detect OOD samples. The proposed approach is evaluated on benchmark datasets and the proposed approach outperforms related variants. Strengths:\n\n1. Authors systematically explored the 'posterior collapse' problem with HVAE models. An interesting mitigation strategy is proposed by increasing the mutual information between the input and latent representations. \n\n2. An adaptive likelihood ratio-based measure is proposed to distinguish the OOD sample using all layers of HVAE.\n\n3. The draft is clearly written and easy to follow.\n\n4. Authors experimentally evaluated various components of the approach justifying their contribution toward the final performance. \n\n\nWeaknesses:\n\n1. There are other approaches to prevent posterior collapse such as bidirectional inference [20] or oversmoothing VAE loss function. How does the proposed approach compare with these approaches and what is the fundamental difference? [20] also considers skip connections. \nTakida et al., \"Preventing Posterior Collapse Induced by Oversmoothing in Gaussian VAE\" \n\n2. In table 1, the approaches are not cited. \n\n3. Experiments are limited. Authors are encouraged to compare the results against the more recent state of the art such as \n\na. Weitang Liu, Xiaoyun Wang, John Owens, and Yixuan Li. Energy-based out-of-distribution detection. Advances in Neural Information Processing Systems, 2020.\n\nb. Rui Huang, Andrew Geng, and Yixuan Li. On the importance of gradients for detecting distributional shifts in the wild, ArXiv, abs/2110.00218, 2021\n\nc. Kaur, Ramneet, Susmit Jha, Anirban Roy, Sangdon Park, Edgar Dobriban, Oleg Sokolsky, and Insup Lee. \"iDECODe: In-distribution equivariance for conformal out-of-distribution detection.\" AAAI (2022).\n Please address the comments in the weaknesses section, especially the novelty of the proposed approach with respect to existing approaches to preventing posterior collapse. No",
" The work studies OOD in hierarchical VAEs (HVAE). It connects the earlier observations that HVAEs may yield higher likelihood for OOD samples to the so-called “posterior collapse”, where higher-level latent variables degenerate to the (conditional) prior and hence becomes uninformative w.r.t. the input sample. To alleviate this issue, the work promotes increased mutual information between the input and the higher-level latents. It further develops an OOD score based on a weighted difference of the log-likelihood ratio between subsequent slices in the hierarchy of the latent variables.\nExperiments demonstrate improved OOD accuracy on two benchmarks. The proposed OOD criterion is shown to be more stable than the layer-specific scores (which implicitly require the index of the layer as a hyperparamter).\n\n**A post-rebuttal note.**\nI thank the authors for their elaboration and I appreciate the effort. I increase my score, since my main concerns have been resolved. Nevertheless, I encourage the authors to improve clarity in the main text, as well as to include the computational considerations entailed by the approach (as provided in the response below) in the final revision. **Pros:**\n- the narrative that OOD accuracy in hierarchical VAEs is connected to the posterior collapse issue is compelling and interesting to read.\n- the work appears technically sound and the arguments are appropriately formalised. \n- the proposed OOD score does not have any hyperparameters, but nevertheless appears competitive w.r.t. parametric alternatives.\n- the empirical results are strong and support the main claims.\n\n\n**Cons:**\n\nWhile the OOD is the focus of this work, the approach leads to increased in-domain likelihood overall, in some cases rather substantially so (c.f. Tab. 4 in Appendix);\n\nBoth the proposed training and the adaptive OOD score may be a suboptimal choice. For example, on FashionMNIST->MNIST the likelihood regret approach has a somewhat superior OOD accuracy.\n\nComparisons could have been a bit more extensive (only 2 training sets). Tab. 3 and 4 in the appendix provide more results, but those are not compared to previous non-VAE and more recent works (e.g. [A])\n\nThe idea behind the proposed score seems to be mainly intuition-based and lacks theoretical backing. The explanation in Appendix C does not provide further theoretical insights.\n\n[A] Revisiting Flow Generative Models for Group-wise Out-of-Distribution Detection\n\n**Typos:**\n- l. 70 “inference”\n- l. 100 “PixelCNN”\n- l. 206,213 “ration”\n- l. 310 “still” redundant\n- l. 311 “preserve”\n\n**Other comments:**\n- Tab. 2 left should probably read “FashionMNIST(in)/MNIST(out)”\n- The exposition could be improved (e.g. l. 95 “at the cost of bringing heavy burdens”). It is not always clear in notation, if it is a definition or an implied equality (e.g. l. 120,188).\n- The acronyms used in Tab. 1 are not all self-explanatory, nor is it very clear which previous works specifically they come from. Please make it clearer in the revision.\n - What is the OOD accuracy with the vanilla VAE loss (i.e. without the informative loss), but with the adaptive criterion? \n- Testing the approach on MNIST and CIFAR is great, but what about more natural images (e.g. ImageNet)?\n- How does the computational footprint change specifically w.r.t. the baseline?\n The limitations in Sec. H are a bit unspecific (\"additional computational burden\"). The broader impact in Sec. I is perhaps a bit more technical than it should be. ",
" The paper aims to detect OOD samples (unsupervisedly) with hierarchical VAEs. The main idea is based on the likelihood ratio in [1], with some interesting modifications. Firstly, the paper demonstrates that alleviating posterior collapse in hierarchical VAEs can help the performance of OOD detection using likelihood ratio. Then the paper proposes a modified training objective that upweights the mutual information between data $x$ and higher level latent variables, which enforces higher level latent variables to contain information about $x$ and hence prevents higher level latent variables to collapse to prior. The paper also proposes a new score that eliminates the need to tune the hyper-parameter $k$ in previous likelihood ratio method. \n\n[1] Hierarchical VAEs Know What They Don't Know Strengths: \n\n1.One thing I particularly like about the paper is that the proposed method is based on solid motivations. The paper clearly demonstrate the impact of posterior collapse to OOD detection in hierarchical VAEs, and then proposes method to alleviate posterior collapse. This makes the main structure of the paper easy to follow. \n\n2. The proposed method is simple yet effective. Upweighting the MI between $x$ and $z$ can certainly alleviate posterior collapse, but the magic only happens when it is combined with the analysis on the relationship between posterior collapse and OOD detection.\n\n3. Comprehensive and strong experimental results.\n\nLimitations:\n\n1. Since the method is based on previous likelihood ratio idea, in order to make the paper self-contained, more details of the previous method should be given. In particular, section 3.2 should give a more detailed introduction to [1]. The motivation and intuitive explanation should be provided.\n\n2. Section 3.4 is not very clear to me. Some explanations in the appendix should be moved to the main text.\n\n[1] Hierarchical VAEs Know What They Don't Know 1. How does the modification in training objective affect the performance of VAE, e.g., reconstruction, test likelihood, sample quality? We want the VAE to be versatile, not just a tool to detect OOD samples.\n\n2. If we alleviate the posterior collapse with simpler approaches, such as simply downweighting the KL term or using some warm-up scheme, can we obtain hierarchical VAEs that do better in OOD detection?\n\n3. In section 3.4, as well as appendix C, if we can use the ration of $R$, where $R$ is the partial generative model's log likelihood, why don't we just automatically pick k with the largest $R$-ratio?\n\n4. In Figure 2, I don't quite understand why the partial reconstructions for $p_{\\theta}(x|z_{>3})$ and $p_{\\theta}(x|z_{>4})$ have absolutely no variation. Assuming $z_{>3}$ has been collapsed into prior, then $q(z_{>3}|x) \\approx p(z_{>3})$. As a result, sampling $z_{>3}$ from posterior $q(z_{>3}|x)$ and $z_{<3}$ from prior $p(z_{<3}|z_{>3})$ (which is what a partial generative model does) is essentially sampling the whole $z$ from prior. Then the resulting reconstruction should just be some generated samples from the VAE, which should be diverse. But here they are the same. Does the posterior collapse to a single point? The authors have adequately addressed the limitations and potential negative societal impact of their work"
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
3,
4
] | [
"dtG_FJqqbpr",
"Z1abMj_3vb7",
"hbakHmSQpMK",
"2RuxaQTF0AY",
"g0Wl7jzp8Hm",
"9qC4mhcINEX",
"UABDFtlrYar",
"Flv0ymK2q-q",
"LMbD-6Fe4ft",
"dtG_FJqqbpr",
"n76THBZ-Xso",
"6jnyZBhiIXk",
"LMbD-6Fe4ft",
"a_p5KW5sIR7",
"2nos73R0EW8",
"zg3XRhFCb27",
"Flv0ymK2q-q",
"NfOju3OS4So",
"dtG_FJqqbpr",
"nips_2022_vMQ1V_z0TxU",
"nips_2022_vMQ1V_z0TxU",
"nips_2022_vMQ1V_z0TxU",
"nips_2022_vMQ1V_z0TxU"
] |
nips_2022_o762mMj4XK | Towards Reliable Simulation-Based Inference with Balanced Neural Ratio Estimation | Modern approaches for simulation-based inference build upon deep learning surrogates to enable approximate Bayesian inference with computer simulators. In practice, the estimated posteriors' computational faithfulness is, however, rarely guaranteed. For example, Hermans et al., 2021 have shown that current simulation-based inference algorithms can produce posteriors that are overconfident, hence risking false inferences. In this work, we introduce Balanced Neural Ratio Estimation (BNRE), a variation of the NRE algorithm designed to produce posterior approximations that tend to be more conservative, hence improving their reliability, while sharing the same Bayes optimal solution. We achieve this by enforcing a balancing condition that increases the quantified uncertainty in low simulation budget regimes while still converging to the exact posterior as the budget increases. We provide theoretical arguments showing that BNRE tends to produce posterior surrogates that are more conservative than NRE's. We evaluate BNRE on a wide variety of tasks and show that it produces conservative posterior surrogates on all tested benchmarks and simulation budgets. Finally, we emphasize that BNRE is straightforward to implement over NRE and does not introduce any computational overhead. | Accept | The paper proposes a modification to the neural ratio estimation algorithm in the context of SBI (simulation-based inference) that tends to avoid overconfident posteriors. This is important for applications (for example in scientific discovery) where excluding plausible inferences can be more detrimental than including implausible ones.
The reviewers found the paper to be well written, technically solid, and a useful contribution to the SBI literature. Most concerns were addressed during the discussion period, with the paper strengthening its discussion of limitations as a result. In the end, the reviewers unanimously awarded the paper a score of 6 (weak accept). Therefore, I'm happy to recommend this paper for acceptance. | train | [
"WrgBz6ddS2m",
"U3ky3JJojRV",
"AsEt7fj5s0b",
"-ZEXJTjzPw3",
"hq7dhKU8Rqq",
"3aJJ6UeOpc",
"KVt5VMOmqQ",
"1MvK46KhP0l",
"pzyto-H0izC",
"_IP_aiPyt99",
"c-vPJs_G0in",
"NLbqLe_qwHM",
"JBtRJeZvjloL",
"glaLONgO91j",
"ytKjy_lDwGE",
"G4pnS9EGS8",
"d71VQgqr_yi",
"SydkzNUhyZ7",
"PFshcbP6ec9",
"2oJEZOpRDYy"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We have now updated the limitations in Section 6 to reflect this:\n\n> Third, the benefits of BNRE remain to be assessed in high-dimensional parameter spaces. In particular, the posterior density must be evaluated on a discretized grid over the parameter space to compute credibility regions, which currently prohibits the accurate computation of expected coverage in the high-dimensional setting.\n\nFinally, we also want to thank you for your positive feedback and the constructive discussion that helped us improve the paper. \n\nThe latest revision of the manuscript now integrates changes requested and discussed with all four reviewers, all within the 9-page limit.",
" Thank you for the pointers to the low-dimensional SBI problems---I agree, there are many interesting SBI problems in this regime. Given the overall rebuttal of your submission and your revised manuscript (*) I now see how BNRE can be useful in this regime of low-dimensional SBI problems, in order to obtain more conservative posteriors in the low simulation-budget regime. I am willing to adapt my score accordingly. \nHowever, I would ask you to add this limitation of BNRE--that it was developed for and tested in the low-dimensional SBI regime, i.e., for <= 3D benchmarks for which (B)NRE-posteriors can be obtained by evaluating a grid--should be added to this discussion too. \n\n(*) the revised version breaks the 9-page limit",
" Thanks for acknowledging that all the points raised but one have successfully been addressed. Regarding this last point, we disagree with the following statement:\n\n> In practice, SBI problems will likely have higher dimensionality than the benchmarks presented here, thus, I find it highly problematic to present BNRE as a valid alternative to other SBI methods, without knowing how it behaves in such a scenario.\n\nMany use cases exist for SBI in the low parameter dimensionality setting, either because the problems are low dimensional or because scientists are interested in the marginal posterior density of a small subset of parameters. All the benchmarks in this paper with the exception of SLCP are representative of actual scientific use cases. For example, the gravitational wave benchmark has led to many works applying SBI to infer the marginals over a few parameters of interest, see for example [A, B, C, D]. Other examples include studies on the dark matter aiming to infer its mass [E, F], inferring a subset of lens parameters from strong lensing [G], the Hubble constant [H], the matter density and its fluctuation in the Early Universe from weak-lensing [I]. The method presented in this manuscript is hence already useful for many real-world problems. \n\n[A] Green, S. R., Simpson, C., & Gair, J. (2020). Gravitational-wave parameter estimation with autoregressive neural network flows. Physical Review D, 102(10), 104057.\n\n[B] Dax, M., Green, S. R., Gair, J., Macke, J. H., Buonanno, A., & Schölkopf, B. (2021). Real-time gravitational wave science with neural posterior estimation. Physical review letters, 127(24), 241103.\n\n[C] Gabbard, H., Messenger, C., Heng, I. S., Tonolini, F., & Murray-Smith, R. (2022). Bayesian parameter estimation using conditional variational autoencoders for gravitational-wave astronomy. Nature Physics, 18(1), 112-117.\n\n[D] Delaunoy, A., Wehenkel, A., Hinderer, T., Nissanke, S., Weniger, C., Williamson, A. R., & Louppe, G. (2020). Lightning-fast gravitational wave parameter inference through neural amortization. arXiv preprint arXiv:2010.12931.\n\n[E] Montel, N. A., Coogan, A., Correa, C., Karchev, K., & Weniger, C. (2022). Estimating the warm dark matter mass from strong lensing images with truncated marginal neural ratio estimation. arXiv preprint arXiv:2205.09126.\n\n[F] Hermans, J., Banik, N., Weniger, C., Bertone, G., & Louppe, G. (2021). Towards constraining warm dark matter with stellar streams through neural simulation-based inference. Monthly Notices of the Royal Astronomical Society, 507(2), 1999-2011.\n\n[G]Chianese, M., Coogan, A., Hofma, P., Otten, S., & Weniger, C. (2020). Differentiable strong lensing: uniting gravity and neural nets through differentiable probabilistic programming. Monthly Notices of the Royal Astronomical Society, 496(1), 381-393.\n\n[H] Gerardi, F., Feeney, S. M., & Alsing, J. (2021). Unbiased likelihood-free inference of the Hubble constant from light standard sirens. Physical Review D, 104(8), 083531.\n\n[I] Kilbinger, M., Ishida, E. E., & Cisewski-Kehe, J. (2021). Sidestepping the inversion of the weak-lensing covariance matrix with Approximate Bayesian Computation. arXiv preprint arXiv:2112.03148.",
" Thanks for those precisions.\n\nWe understand your point and indeed model misspecification and computational unfaithfulness lead to the similar consequences and are hence connected. However, we still believe that those issues should still be viewed as separate and that the distinction between the two should be made as the causes underpinning those issues are different. The tools used for addressing them are hence also different.\n\n> Replace GBI with LFI?\n\nWe have now replaced it with SBI to be consistent with the rest of the paper.",
" Thank you very much, I appreciate the changed. However, there are still many odd bits:\n\n> In this work, we make the assumption that the simulator is well-specified [...]\n\nI understand that you're in the well-specified setting, but the point I am trying to make is that the likelihood-to-evidence ratio is misspecified wrt to the simulator. In short, model misspecification occurs when $x \\sim p^*$, but we evaluate it in $p(x | \\theta) \\neq p^*(x)$. In your case, $x \\sim p(x | \\theta)$, but you evaluate $\\hat{d}(\\theta, x)/(1- \\hat{d}(\\theta, x)) \\neq p(x \\mid \\theta)/p(x)$. Hence, the approaches suggested are legitimate alternatives to obtain more conservative posteriors. \nCurrently, it reads as all this work has relatively little relevance to this paper. \n\n> Recently, Dellaporta et al. [30] applied Bayesian non-parametric learning to GBI, \n\nReplace GBI with LFI? ",
" I thank the authors for taking the time to answer my remarks and questions in detail! I consider most of the minor points addressed, however, one major concern remains. \nRegarding the benchmarks: thank you for the clarification, I indeed missed the point that all benchmarks are intractable, and by design so low-dimensional that evaluation of the posterior on a grid is feasible. However, this does not alleviate my concerns: I agree that the theoretical argument for BNRE made in this paper is valuable, but I find the empirical evidence that it works in practice too thin. \nAgain, the results show that, as intended, BNRE results in broader / more conservative posteriors in the low simulation budget-regime. However, the results in section 4 showed as well that BNRE comes with a bias in the posterior estimate. In practice, SBI problems will likely have higher dimensionality than the benchmarks presented here, thus, I find it highly problematic to present BNRE as a valid alternative to other SBI methods, without knowing how it behaves in such a scenario. I therefore think that it would be essential to evaluate BNRE with additional experiments, e.g., show on a fully tractable example how NRE and BNRE compare in terms of bias, variance and coverage as one increases the simulation budget and the dimensionality of the inference problem. ",
" Thanks for the additional references that helped us get a broader view and understand the relevance of this literature. We will update the paragraph related to model misspecification as follows to both include a discussion on power likelihood/posteriors and clarify the contributions of [30]. Please let us know if you find any mistakes in this paragraph as we are not familiar with this literature.\n\n> In this work, we make the assumption that the simulator is well-specified, in the sense that it accurately models the real data generation process. However, this assumption is often violated. To overcome this issue, Generalized Bayesian inference (GBI) extends Bayesian inference by replacing the likelihood term with an arbitrary loss function [23]. Those loss functions can be designed to mitigate specific types of misspecifications and enable robust inference, even with intractable likelihoods [24– 26]. Power likelihood losses have also been shown to increase robustness to model misspecification [27]. It consists in raising the likelihood to a power to control the impact it has over the prior. The lower the power of likelihood, the lower the importance given to the data and the higher the uncertainty of the posterior. It can either be set based on practitioner knowledge or derived from observed data [28]. Following the same objective, Miller and Dunson [29] introduce coarsened posteriors that condition on a neighborhood of the empirical data distribution rather than on the data itself. This neighborhood is derived from a distance function that, when set to the relative entropy, allows the approximation of coarsened posteriors by a power posterior. Recently, Dellaporta et al. [30] applied Bayesian non-parametric learning to GBI, making inference with misspecified simulator models both robust and computationally efficient.\n\n[23] Pier Giovanni Bissiri, Chris C Holmes, and Stephen G Walker. A general framework for\nupdating belief distributions. Journal of the Royal Statistical Society: Series B (Statistical\nMethodology), 78(5):1103–1130, 2016\n\n[24] Sebastian M Schmon, Patrick W Cannon, and Jeremias Knoblauch. Generalized posteriors in\napproximate bayesian computation. arXiv preprint arXiv:2011.08644, 2020.\n\n[25] Takuo Matsubara, Jeremias Knoblauch, François-Xavier Briol, Chris Oates, et al. Robust\ngeneralised bayesian inference for intractable likelihoods. arXiv preprint arXiv:2104.07359,\n2021.\n\n[26] Lorenzo Pacchiardi and Ritabrata Dutta. Score matched neural exponential families for\nlikelihood-free inference. Journal of Machine Learning Research, 23(38):1–71, 2022.\n\n[27] Peter Grünwald and Thijs Van Ommen. Inconsistency of bayesian inference for misspecified\nlinear models, and a proposal for repairing it. Bayesian Analysis, 12(4):1069–1103, 2017.\n\n[28] Chris C Holmes and Stephen G Walker. Assigning a value to a power likelihood in a general\nbayesian model. Biometrika, 104(2):497–503, 2017.\n\n[29] Jeffrey W Miller and David B Dunson. Robust bayesian inference via coarsening. Journal of\nthe American Statistical Association, 2018.\n\n[30] Charita Dellaporta, Jeremias Knoblauch, Theodoros Damoulas, and François-Xavier Briol.\nRobust bayesian inference for simulator-based models via the mmd posterior bootstrap. In\nInternational Conference on Artificial Intelligence and Statistics, pages 943–970. PMLR, 2022.",
" Thank you very much for the changes. While some of the technical discussion has been improved, I have the impression that other bits have been unfairly dismissed as relatively minor. \n\n> Thank you for these references, we have updated the manuscript. It should be noted however that our work is not immediately concerned with the problem of model (simulator) misspecification. However, we acknowledge the importance of the problem.\n\nIt appears that the suggested discussion is missing and I assume this is because the authors think those references are not immediately relevant, as this paper is not considering model (simulator) misspecification. However, what is considered here is that the estimate of the likelihood-to-evidence ratio is misspecified wrt to the posterior of the simulator, i.e. it is not a perfect estimate / approximation. Hence, I still believe that the relevant literature on power posteriors and coarsening [F] is highly relevant and should be discussed. After all, the paper is about finding more conservative posteriors, and there exists a literature on finding more conservative posteriors. Please do not shy away from investigating the references in those papers as well. \n\n> Thanks for the clarification, we replaced further improved by extended.\n\nI am sorry to be nitpicking, but I do not understand in what sense the method is an \"extension\" of GBI. Instead, it is an application of the related nonparametric learning framework [G]. (This is not a criticism of the cited work, rather it's pointing out the classification that the authors of the cited work are using. I do think it is important not to confuse potential readers by adopting a different language.)\n\n> Generalized Bayesian inference (GBI) extends Bayesian inference by replacing the likelihood term by an arbitrary loss function [23].\n\n[23] is the wrong reference for GBI, as the paper says, it is about finding the scaling parameter for a power posterior. This is relevant for this paper, but not the paper that fits this sentence. The paper that does discuss GBI as a coherent way for obtaining belief updates using general losses is [H]. \n\n> If the revised version successfully addressed all your concerns, we would like to kindly ask you to reconsider your score in light of the modifications made.\n\nMy initial tendency to accept the paper is based (in addition to the generally interesting idea) my belief that the weaknesses can be easily remedied by adjusting the discussed literature during the rebuttal period. I think it is essential, although remain open to be convinced otherwise, if there is something I have misunderstood. \n\n[F] Miller, J. W., & Dunson, D. B. (2018). Robust Bayesian inference via coarsening. Journal of the American Statistical Association.\n\n[G] Lyddon, S., Walker, S., & Holmes, C. C. (2018). Nonparametric learning from Bayesian models with randomized objective functions. Advances in neural information processing systems, 31.\n\n[23] Holmes, C. C., & Walker, S. G. (2017). Assigning a value to a power likelihood in a general Bayesian model. Biometrika, 104(2), 497-503.\n\n[H] Bissiri, P. G., Holmes, C. C., & Walker, S. G. (2016). A general framework for updating belief distributions. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 78(5), 1103-1130.",
" > The discussion around equation (6) is confusing. You say that $\\hat{p}(\\theta|x)<p(\\theta|x)$ whenever the $\\hat{d}(\\theta,x)<d(\\theta,x)$, but clearly $\\hat{p}(\\theta|x)<p(\\theta|x)$ can't hold for all $\\theta$ since we are considering densities. Hence, the crucial aspects of whether the density is more conservative is for which $\\theta$ one has $\\hat{p}(\\theta|x)<p(\\theta|x)$ and that seems to be missing.\n\nThank you for pointing this out. We realize our discussion should have been more thought through. We now propose the following and more complete explanation:\n\nThm 1 states that ${\\mathbb{E}}\\_{p(x,\\theta)} [d / \\hat{d}] \\geq 1$. This can be rewritten as ${\\mathbb{E}}\\_{p(x)} \\left[ {E}\\_{p(\\theta|x)} [d / \\hat{d}] \\right] \\geq 1$. Ideally, if we had ${\\mathbb{E}}\\_{p(\\theta|x)} [d / \\hat{d}] \\geq 1$ for all $x$, then we would have $d / \\hat{d} \\geq 1$ in regions of high posterior density, which would result in $\\hat{d} / (1-\\hat{d}) \\leq d / (1-d) \\Leftrightarrow \\hat{r}(x \\vert \\theta) < r(x \\vert \\theta)$, hence $\\hat{p}(\\theta \\vert x) < p(\\theta \\vert x)$.\n\nSimilarly, Thm 2 states that ${\\mathbb{E}}\\_{p(x)p(\\theta)} [(1-d) / (1-\\hat{d})] \\geq 1$. This can be rewritten as ${\\mathbb{E}}\\_{p(x)} \\left[ {\\mathbb{E}}\\_{p(\\theta)} [(1-d) / (1-\\hat{d})] \\right] \\geq 1$. Ideally, if we had ${\\mathbb{E}}\\_{p(\\theta)} [(1-d) / (1-\\hat{d})] \\geq 1$ for all $x$, then we would have $(1-d)/(1-\\hat{d}) \\geq 1$ in regions of high prior density, which would result in $\\hat{p}(\\theta \\vert x) > p(\\theta \\vert x)$. \n\nTherefore enforcing the balancing condition will try to impose these two antagonistic objectives (lowering $\\hat{p}(\\theta \\vert x)$ such that $\\hat{p}(\\theta \\vert x) < p(\\theta \\vert x)$ and increasing $\\hat{p}(\\theta \\vert x)$ such that $\\hat{p}(\\theta \\vert x) > p(\\theta \\vert x)$) at the same time. Which constraint dominates will depend on whether $p(\\theta|x) > p(\\theta)$. If $p(\\theta|x) > p(\\theta)$, then the effect of Thm 1 dominates, which results in $\\hat{p}(\\theta \\vert x) < p(\\theta \\vert x)$. If $p(\\theta) > p(\\theta|x)$ then the effect of Thm 2 dominates, which results in $\\hat{p}(\\theta \\vert x) > p(\\theta \\vert x)$. \n\nWe have now updated L124:138 to include this discussion.\n\n> Generally the model details are a bit short. For example, you reference Lotka and Volterra indicating that the underlying model is a deterministic ODE following the Lotka-Volterra equations. However, commonly people in SBI work with the stochastic version, describing a Markov jump process. I would be nice to have more information here.\n\nWe use the stochastic version that is described by a Markov jump process; it is the same one as defined in Papamakarios et al. [4]. The reference was indeed missing and has now been added. Thank you for spotting this!\n\n> I am not so sure about the expected coverage as a special case of SBC. The internal consistency of the joint distribution that you're relying on is something that is remarked even in the SBC paper to be known before. The devil is in the detail and from my point of view SBC is about an actionable algorithm, which is more than just the expectation that you allude to in Appendix A. 
In particular, the actual computations (checking whether rank statistics are uniformly distributed vs comparing coverage) are quite different in both cases.\n\nIndeed, the way we compute coverage is quite different from the computation of rank statistics. However, the expected coverage at credibility level $1 - \\alpha$ \n\n$$ {\\mathbb{E}}\\_{p(\\theta^\\*, x^\\*)} \\left[ \\mathbb{I} [ \\theta^\\* \\in {\\Theta}\\_{\\hat{p}(\\theta | x^\\*)}(1 - \\alpha)]\\right] $$\n\nis the probability of rank statistics\n\n$$ \\hat{r}(\\theta^*) = \\mathbb{E}_{\\hat{p}(\\theta | x^*)} \\big[ \\mathbb{I} [ \\hat{p}(\\theta | x^\\*) \\leq \\hat{p}(\\theta^\\* | x^\\*) ] \\big] $$\n\nto be above $\\alpha$. Therefore, $1$ minus the expected coverage is the cumulative distribution function $P(\\hat{r}(\\theta^*) \\leq \\alpha)$ of the rank statistics over the joint distribution $p(\\theta^*, x^*)$. Hence, we can easily recover the empirical CDF from our expected coverage plots and, thereby, the histograms (empirical PDF) of SBC. Given that the histograms of SBC can be recovered from the empirical coverage, it benefits from the interpretations linked to the SBC diagnostic, making it an actionable algorithm as well.\n\nThat being said, we would like to insist on the fact that this link between SBC and expected coverage has been added to strengthen the use of expected coverage as a diagnostic. Never do we claim that this is a novel contribution. \n\nIf the revised version successfully addressed all your concerns, we would like to kindly ask you to reconsider your score in light of the modifications made.",
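To make the correspondence above concrete, here is a minimal Monte Carlo sketch of both quantities. All names are illustrative, not the authors' implementation: `log_prob` stands for any handle to the approximate posterior log-density, and `posterior_samples` for draws from $\hat{p}(\theta \mid x^*)$.

```python
import numpy as np

def rank_statistic(log_prob, theta_star, x_star, posterior_samples):
    # hat_r(theta*) = E_{theta ~ p_hat(.|x*)}[ I[ p_hat(theta|x*) <= p_hat(theta*|x*) ] ],
    # estimated as the fraction of posterior samples with density below theta*'s.
    lp_star = log_prob(theta_star, x_star)
    lp_samples = np.array([log_prob(t, x_star) for t in posterior_samples])
    return float(np.mean(lp_samples <= lp_star))

def expected_coverage(ranks, alpha):
    # Expected coverage at credibility level 1 - alpha equals P(rank > alpha),
    # i.e. one minus the empirical CDF of the rank statistics at alpha.
    return float(np.mean(np.asarray(ranks) > alpha))
```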
" Thank you for your positive review highlighting the importance, novelty, and significance of BNRE. We also appreciate your kind words regarding the presentation of our work.\n\n> The penalty term does result in a worse calibration in most examples. While the posterior is more conservative the calibration is often worse, in particular when the number of samples is low. Especially for models where simulation is costly this might be an issue.\n\n> While I agree that conservative posteriors are preferable, calibration is not a binary issue. Overly conservative posteriors might be unable to identify a parameter to a reasonable degree. In the Weinberg, SIR and Lotka Volterra models, a relatively accurate posterior is traded in for a mis-calibrated conservative posterior. I think the trade-off is important.\n\nWe agree, the trade-off is important and can actually be adjusted by tuning the $\\lambda$ parameter depending on the use case. \n\nWe now better highlight and nuance the importance of the trade-off in our discussion of Figure 3. We have replaced the sentence `However, the loss in statistical performance is eventually recovered by increasing the simulation budget. In fact, practitioners might be inclined to favor reliability over statistical performance [1] and would therefore be willing to cover this cost.` by `However, the loss in statistical performance is eventually recovered by increasing the simulation budget. In fact, practitioners might be inclined to favor reliability over statistical performance [1], although it is always a trade-off that depends on the precise use case.`\n\n> There is a broad literature in statistics considering model (over-)confidence [A - C] which I think should be mentioned. In particular, power posteriors are quite popular to adjust the posterior in order to account for discrepancies between a model and real data. Since NRE does not estimate the likelihood directly, it might be necessary to use the presented approach instead of power posteriors, but a discussion should be added.\n\nThank you for these references, we have updated the manuscript. It should be noted however that our work is not immediately concerned with the problem of model (simulator) misspecification. However, we acknowledge the importance of the problem.\n\n> “Recently, Dellaporta et al. [26] further improved GBI by combining [...]” The cited work [26] has advantages and disadvantaged but it's not an \"improved\" version of GBI, it's a variant. In particular, it best applies to IID data. When using time-series data as many of the models in this paper [D] would be a more fitting GBI procedure.\n\nThanks for the clarification, we replaced `further improved` by `extended`. \n\n> “M/G/1, originally introduced by Papamakarios et al. [4], [...]” Papamakarios et al. however reference [E] for the M/G/1 model. This should be amended.\n\nThank you for spotting this, the reference was indeed missing and has been updated.\n\n> Could you please write down the definition of a classifier? I assume you mean something like $d:X \\rightarrow [0,1]$\n\nThe classifier actually takes both $x$ and $\\theta$ as input. The label is $1$ if the pair $(\\theta, x)$ pair is sampled from the joint distribution and $0$ if it is sampled from the product of the marginals. This is now defined mathematically in the latest revision.\n",
" > However, the experiments and results presented here appear to me as preliminary with no direct insights or consequences for other researchers or practitioners (yet). In theory, the BNRE approach makes sense, however, in practice (as indicated by the results presented here) I do not see an advantage over NRE. [...] it would not makes sense to use BNRE as of yet.\n\nThis opinion completely ignores the theoretical arguments and the empirical evidence presented in the paper. BNRE is an algorithm that enables conservative approximations for every simulation budget essentially for free in terms of computational cost and tuning, while retaining the same global minimum as NRE! \n\nIn addition, although not the main focus of our paper, the core theoretical arguments that are the foundation to BNRE have an impact on the broader AI community, because they relate to any binary classifier and can therefore be applied to any high-risk classification problem while again, sharing the same global minimum as the Bayes optimal classifier.\n\n—\n\n\nWe hope the responses put forward here are convincing and answer your main concern regarding the validity of our theoretical arguments and empirical results. For this reason we kindly ask you to reconsider your score.\n",
" > Importantly, SBC is able to detect not only miscalibrated uncertainties like over- or underdispersion, but also positive and negative biases of the posterior estimates.\n\nIndeed, but we already discuss the approximation bias and variance in Section 4.2 and in Appendix G. Therefore, we believe that the added value of SBC would be minor. \n\nIn addition, we have shown in Appendix A that the expected coverage closely relates to SBC. The expected coverage at credibility level $1 - \\alpha$\n\n$$ {\\mathbb{E}}\\_{p(\\theta^{\\*}, x^{\\*})} \\left[ \\mathbb{I} [\\theta^{\\*} \\in {\\Theta}\\_{\\hat{p}(\\theta | x^*)} (1 - \\alpha) ] \\right] $$\n\nis the probability of rank statistics\n\n$$ \\hat{r}(\\theta^*) = \\mathbb{E}_{\\hat{p}(\\theta | x^*)} \\big[ \\mathbb{I} [ \\hat{p}(\\theta | x^*) \\leq \\hat{p}(\\theta^* | x^*) ] \\big] $$\n\nto be above $\\alpha$. Therefore, $1$ minus the expected coverage is the cumulative distribution function $P(\\hat{r}(\\theta^*) \\leq \\alpha)$ of the rank statistics over the joint distribution $p(\\theta^*, x^*)$. Hence, we can easily recover the empirical CDF from our coverage plots and, thereby, the histograms (empirical PDF) of SBC.\n\nThis implies that the expected coverage curves can also be used to probe for negative and positive biases, further demonstrating the equivalence between both diagnostics. That being said, we would like to insist on the fact that this link between SBC and expected coverage has been added to strengthen the use of expected coverage as a diagnostic.\n\n> For the tractable tasks with known posteriors it would additionally be useful to calculate actual biases, and dispersion with respect to the reference posteriors.\n\nWe agree that it could be a nice addition. We did not compute those against reference posteriors because they are unavailable in settings where the likelihood is intractable (which is the case for all our benchmarks). However, if we consider NRE with the highest simulation budget as a reference posterior, Figure 4 gives us insights by comparing the red curve (BNRE) with the last value for the blue curve (the chosen reference posterior). Moreover, we do compute and report approximation biases and variances in Appendix G. To highlight this we added, close to Figure 4, the following sentence \n\n*The bias gets close to $0$ for high simulation budgets, showing that the bias induced by BNRE vanishes as the simulation-budget increases.*\n\n> For the real-world examples additional checks, e.g., prior and posterior predictive checks would be instructive to show the practical effect of conservative posteriors and difference of BNRE to NRE.\n\nWe agree that these might be instructive to show the practical effect and leave it as future work. However, we would like to point out that in the current version, the metrics considered are sufficient to demonstrate our claims regarding the behavior of BNRE in contrast to NRE.\n\n> Finally, a comparison to other established SBI methods like NPE and NLE would be illustrative. They are readily available in open-source software packages so that adding them to the benchmark would not result in large algorithmic or implementation overhead.\n\n[1] already demonstrates that NPE, NRE and SNL can all be overconfident. Meaning, showing that BNRE leads to conservative approximate posterior for all the benchmarks with respect to NRE is therefore a sufficient comparison. 
Comparing BNRE to NPE and NLE in terms of bias, variance, and log densities is, in our opinion, irrelevant for the claims made in the paper because the balancing condition is (currently) designed for SBI techniques that employ classifiers as neural surrogates for the statistical quantity of interest.\n\n> For Figure 2 it would be good to show standard error of the mean as error bars, given that 5 repetitions were performed\n \nStandard deviations (albeit not standard errors) are shown in Appendix F for the coverage AUCs. We made separate plots to avoid cluttering Figure 2. \n\n> For Figure 3 it would better to show standard error of the mean instead of the standard deviation (minor)\n\nWe always consider single models and not ensembles. We do not understand the motivation for considering the standard error of the mean over 5 models instead of the standard error of the statistic computed on one model, could you elaborate further on this particular point?\n",
" We thank you for taking the time to review our work. We answer your main concerns below:\n\n> Like NRE, BNRE trains a classifier in order to approximate the likelihood-to-evidence ratio, to then perform MCMC.\n\nAlthough NRE can be used in combination with MCMC, we use here a variant of NRE that avoids the use of MCMC. We directly evaluate the approximate posterior density as $p(\\vartheta)\\hat{r}(\\vartheta\\vert x)$ over a discrete grid of parameter values.\n\n> They perform experiments with tractable benchmarking problems. [...] most of which (all? […]) are tractable [...]\n\nThis is untrue, all our benchmarks have an intractable likelihood:\n\n* SLCP: As stated in the text, the likelihood is intractable due to the marginalization to infer the posterior density over 2 parameters defining the mean.\n\n* Weinberg: Simulates an experiment that measures the scattering angle of electron positron collisions. Internally, this involves a stochastic rejection loop of particle collisions based on the (random) collision cross section and beam energy.\n\n* Spatial SIR: The observable result from a sequence of random infections of individuals. This sequence of random actions makes the likelihood intractable.\n\n* M/G/1: The observable results from a sequence of random times between the arrival of customers and random times it takes to serve a customer. This sequence of random actions makes the likelihood intractable.\n\n* Lotka-Volterra: The likelihood is intractable because the population dynamics are modeled as a time series that is described by a Markov Jump Process.\n\n* Gravitational waves: The likelihood is intractable due to the marginalization over all nuisance parameters to obtain the posterior over the two masses.\n\n> The set of benchmark tasks is limited, it contains rather low-dimensional tasks.\n\nWe agree that our benchmark tasks only contain low-dimensional tasks. The main reason is the coverage diagnostic becomes intractable for high-dimensional tasks and hence the benefit of BNRE cannot be empirically checked on such tasks. That being said, high-dimensional posterior inference remains a challenging issue for all SBI algorithms. We do not claim to solve this issue in our paper.\n\n> It would be more instructive to show one or two tractable examples with reference posteriors (obtained via MCMC or analytically) to study the properties of BNRE in depth, and to then show one or two additional “real-world” examples to demonstrate the use of BNRE in practice and to show how it compares to NRE in practice\n\nThanks for the suggestion, we agree with this view. Actually, we adopt the same template but modify the order of appearance. We first start by showcasing that the method works on real-world examples in Section 4.1 and subsequently study the properties in-depth in Section 4.2. \n\nFor the reference posteriors, they cannot be obtained via MCMC or analytically due to the intractable likelihood of the benchmark. However, we suggest discussing results of Section 4.2 with respect to a reference posterior obtained via NRE with a high simulation budget (e.g., the run using $2^{17}=131\\text{k}$ simulations). Figure 4 is already ready as is and we will add the reference posterior on the two leftmost plots of Figure 5 by the end of the discussion period.\n\n> I see the current results merely as a first study showing preliminary results.\n\nWe respectfully disagree with this statement. 
As stated by other reviewers, our claims are backed by theoretical arguments supported with empirical evidence. We provide evidence that BNRE leads to more conservative posteriors on a wide range of benchmarks that are representative of real world scientific applications. Moreover, we do provide insights about the behavior of the method by computing the bias and variance of the approximate posteriors and through a study of the effect of the $\\lambda$ parameter. The full set of results is provided in the supplementary materials. \n\n> Choice of metrics: The results of the paper are mainly based on one single metric measuring the “confidence” of the posterior—the expected posterior coverage.\n\nWe evaluate our results across 5 metrics: expected coverage, coverage AUC, expected posterior density, as well as in terms of approximation bias and variance on all benchmarks. The full set of results is provided in the supplementary materials. We frame our discussion and impact of our method based on the evaluation of all those metrics.\n\n> there is a new method for performing a local coverage test for specific observation (Zhao et al. https://proceedings.mlr.press/v161/zhao21b.html), which would be useful to add as a metric as well, e.g., in the “real-world” example.\n\nOur claims are all about the expected coverage and not the local coverage. \n",
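As an aside, the grid-based evaluation mentioned at the top of this reply admits a very short sketch. The helper names (`log_ratio`, `log_prior`) are hypothetical stand-ins for a trained ratio estimator and the prior density, and a uniformly spaced grid is assumed:

```python
import numpy as np

def posterior_on_grid(log_ratio, log_prior, theta_grid, x_obs):
    # Unnormalized log posterior on the grid: log p(theta) + log r_hat(x | theta).
    log_post = np.array([log_prior(t) + log_ratio(t, x_obs) for t in theta_grid])
    log_post -= log_post.max()  # for numerical stability before exponentiating
    post = np.exp(log_post)
    return post / post.sum()    # normalize over the grid cells
```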
" Thank you for your positive review and for pointing out the novelty, good validation, and significance of BNRE.\n\n> The paper is technically sound, and most claims are backed by empirical evidence. However, there is one claim that I would appreciate the authors to illustrate: \"We found λ = 100 to perform well across all benchmarks, which again, is supported by Figure 5.\" I would recommend the authors to provide a figure (in appendix) illustrating this, even if for a subset of the benchmark problems.\n\nThis sentence should not be viewed as a strong claim that the value $\\lambda = 100$ performs best across all benchmarks but rather as a reasonably good default value. To obtain the best value for $\\lambda$, we recommend to start with a low value for $\\lambda$ and gradually increase $\\lambda$ until the approximate posterior becomes conservative (e.g., when its coverage AUC becomes positive). This value is expected to fluctuate across problems. The advantage of following this procedure is that it maximizes the statistical performance of the posterior estimator while ensuring it is conservative.\n\nIn the latest revision, we have now replaced the sentence\n\n *In practice, $\\lambda$ should be sufficiently large such that the approximate classifier is balanced, while maximizing the statistical performance of the posterior estimator. We found $\\lambda = 100$ to perform well across all benchmarks, which again, is supported by Figure 5.* \n\nby \n\n*In practice, $\\lambda$ should be sufficiently large such that the approximate classifier is balanced, while maximizing the statistical performance of the posterior estimator. Therefore, we recommend starting with a small value for $\\lambda$ and to gradually increase $\\lambda$ until the posterior estimator becomes conservative. We empirically found $\\lambda=100$ to be a reasonably good default value leading to good performance across all considered benchmarks with various model architectures.*\n\n> page 4, line 119, \"does not modify the global optimum\". This is a bit imprecise and might lead to confusion, so I would suggest the authors to expand on it;\n\nWe have now replaced this sentence by\n\n*Therefore, minimizing the cross-entropy loss while restricting the model hypothesis space to balanced classifiers results in the same Bayes optimal classifier of Eqn. 1.*\n\n> page 4, line 122, \"[...] result in increasingly conservative [...]\". By \"increasingly\", I assume the authors meant with increasing λ. As this is not clear, I would suggest the authors to drop the word \"increasingly\";\n\nWe have now dropped the word “increasingly”. Thank you for your suggestion.\n\n> page 7, line 186, for clarity, the notation for nominal parameter ($\\theta^*$) should be introduced close to the expected squared error over the approximate posterior.\n\nThis is now addressed in the latest revision.\n\nIf the revised version successfully addressed all your concerns, we would like to kindly ask you to reconsider your score in light of the modifications made.\n",
" Thank you for the insightful and constructive review. We appreciate that you find our validation extensive and well thought of.\n\n> (a) There does not seem to exist a universally \"best\" algorithm in SBI, so potentially extending the same logic to more algorithms would be very valuable for the community.\n\nWe agree that it would have been a nice addition to extend our approach to other inference algorithms. Actually, we tried to enforce the balancing condition to NPE by expressing the modeled posterior through a binary classifier, but ended up with unsatisfactory results. After trying to solve this issue for some time, we believe that the reason behind those unsatisfactory results is not trivial and decided to leave it as future work. Our current hypothesis is that a regular classifier has more flexibility to satisfy the balancing condition in contrast to a flow, since it is not constrained to be a proper density by construction.\n\n> (b) [...] my only point is that a practical example of a \"failure case\" -- or even a \"if they used this overconfident SBI algorithm in this real world study it would have been a problem\" type-of-argument -- would make this paper much more convincing.\n\nThis is indeed a good suggestion to make the motivation more compelling. In the upcoming revision, we will include an example that relates to Dark Matter studies. Of particular interest to these studies is determining the “Dark Matter model” of our universe: which could be cold, warm or hot dark matter. These models describe how clumps of Dark Matter (so-called subhalos) are distributed in the universe. Cold dark matter refers to a distribution of Dark Matter that contains smaller, and more clumpy dark matter subhalos, whereas the gravitational field in hot dark matter is very smooth. In general, thermal dark matter models can be described by a single parameter, the dark matter thermal relic mass, which can be intuitively thought of as the energy the dark matter particle had in the Early Universe. A low particle energy corresponds to a warm or hot dark matter model, while a relatively high particle energy is descriptive of cold dark matter.\n\nPreviously, astronomers mainly relied on rejection sampling studies (rejection ABC) to determine a posterior of the dark matter thermal relic mass (energy). However, these studies mostly relied on hand-crafted summary statistics based on the insights of astronomers. With the advent of Deep Learning and modern SBI techniques, the relation between model parameters and simulated data is automatically learned to produce approximate posteriors. While these techniques typically improve upon the obtained constraints, their explainability remains lacking. Suppose that for some reason practitioners apply an SBI algorithm without diagnosing the learned estimator. In that case, it is possible that they obtain a constraint that is smaller than it should be. Whenever an overconfident estimator produces posterior estimates that favor cold dark matter models, it could easily wipe out decades of research on the Sterile Neutrino, a potential candidate for the Warm Dark Matter particle. \n\nOf course, the above holds under the assumption where the simulator is correctly specified. Whenever the model is misspecified, which is most likely the case anyway, the problem becomes more challenging. 
On the other hand, combining the intuition of astronomers and a conservative posterior estimator could steer the development of their scientific model in the right direction, especially if the conservative estimator produces posterior approximations that exclude hypotheses that are consistent with observation or theory.\n\nWe will include this example in the manuscript by the end of the discussion period. \n\n> The one (minor) limitation I would like to flag is the various benchmarks description being relegated to Appendix C. I believe it would be nice for the reader to appreciate the range of the benchmarks included in the paper, with some of the simulators being not necessarily trivial. Without a description, even a quick one, a non-expert reader might see these benchmarks as all simple toy-examples.\n\nThank you for your suggestion. We have rewritten the sentence to be `We evaluate the expected coverage of posterior estimators produced by both NRE and BNRE on various problems, whose descriptions can be found in Appendix C` by `We evaluate the expected coverage of posterior estimators produced by both NRE and BNRE on various problems. Those benchmarks cover a diverse set of problems from epidemiology (Spatial SIR), astronomy (Gravitational Waves), particle physics (Weinberg) and population dynamics (Lotka Volterra). They are representative of real scientific applications of simulation-based inference. A more detailed description of the benchmarks can be found in Appendix C`.\n\nIf the revised version successfully addressed all your concerns, we would like to kindly ask you to reconsider your score in light of the modifications made.\n",
" First and foremost we would like to thank the reviewers for the high quality of their reviews and the positive reception of our work regarding its significance, originality and experimental rigor. We appreciate the suggestions to improve the presentation of our work. All of them will be implemented and submitted by the end of the discussion period.",
" The authors propose a modification to the neural ratio estimation (NRE) algorithm in the form of a regularization penalty to avoid overconfident posterior distributions. The authors support their claim by introducing a family of likelihood-to-evidence ratio classifiers which are more conservative than the optimal Bayes classifier, in expectation. By posing an explicit penalty on overconfidence, the authors argue that the simulation-based inference using the NRE algorithm would be more reliable and not leading to false inference. The authors provide empirical results over several datasets, showing how the balanced neural ratio estimation (BNRE) is indeed experimentally conservative and also converges ultimately to the same posterior as the NRE method for large simulation budgets. The paper is well written and well presented, with legible figures and thoughtful considerations. I have checked the proofs of Theorems 1 and 2 and they look correct to me. The validation is extensive and well thought of.\n\nThe minor weakness that I can see are (a) the applicability to the NRE algorithm only (although other algorithms are mentioned in the discussion) and (b) examples of over-confident simulation-based inference algorithms leading a research direction astray.\nFor (a), as shown by [1], there does not seem to exist a universally \"best\" algorithm in SBI, so potentially extending the same logic to more algorithms would be very valuable for the community.\nFor (b), I appreciate the empirical results pointed out by [2]; my only point is that a practical example of a \"failure case\" -- or even a \"if they used this overconfident SBI algorithm in this real world study it would have been a problem\" type-of-argument -- would make this paper much more convincing.\n\nOverall, I believe the paper proposes an interesting idea and it is still worth flagging in the community, although I believe that it is not as compelling as it could be.\n\n[1] Benchmarking Simulation-Based Inference, Lueckmann et al, AISTATS 2021\n[2] Averting A Crisis In Simulation-Based Inference, Hermans et al, 2021 I have no main questions for the authors, apart from any feedback on my comments above. The one (minor) limitation I would like to flag is the various benchmarks description being relegated to Appendix C.\nI believe it would be nice for the reader to appreciate the range of the benchmarks included in the paper, with some of the simulators being not necessarily trivial. Without a description, even a quick one, a non-expert reader might see these benchmarks as all simple toy-examples.",
" This study builds on a previous state-of-the-art method for simulation-based inference (i.e., NRE), and proposes an improvement that aims at avoiding the usual overconfidence of NRE posterior estimates (an issue that might be common to other simulation-based inference methods). The study provides empirical results on several benchmark problems and shows that the new method (BNRE) clearly improves in reliability compared to NRE as assessed by several metrics. ### Originality\n\nTo the best of my knowledge, this is one of two studies that specifically addresses the issue of overconfidence of simulation-based inference algorithms, the other one being Hermans et al. 2021 (cited by the authors). While Hermans et al. diagnose this problem on several simulation-based inference methods and provide a partial solution to this issue using ensemble approaches, the solution provided by this study is novel.\n\n\n### Quality\n\nThe paper is technically sound, and most claims are backed by empirical evidence. However, there is one claim that I would appreciate the authors to illustrate: \"We found $\\lambda$ = 100 to perform well across all benchmarks, which again, is supported by Figure 5.\" I would recommend the authors to provide a figure (in appendix) illustrating this, even if for a subset of the benchmark problems. \n\n\n### Clarity\n\nThe manuscript is clearly written, providing enough information to understand the technical contribution and empirical results, while appropriately putting the contributions in the context of previous work. A few small comments:\n\n-page 4, line 119, \"does not modify the global optimum\". This is a bit imprecise and might lead to confusion, so I would suggest the authors to expand on it;\n\n-page 4, line 122, \"[...] result in increasingly conservative [...]\". By \"increasingly\", I assume the authors meant with increasing $\\lambda$. As this is not clear, I would suggest the authors to drop the word \"increasingly\";\n\n-page 4, line 123, the word \"Ideally\" wrongly suggests that the expression (6) only follows from the previous inequality in the ideal case, which is not what the authors meant. Dropping the word \"Ideally\" would solve the issue;\n\n-page 7, line 186, for clarity, the notation for nominal parameter ($\\vartheta^*$) should be introduced close to the expected squared error over the approximate posterior.\n\n\n### Significance\n\nThe technical development and empirical improvement of BNRE over NRE will be of interest to the ML community, in particular to the simulation-based inference community. Notably, the technical contribution could directly inspire the improvement of other methods for simulation-based inference. As pointed out above, I believe it would be beneficial to the manuscript if evidence was provided for the statement that $\\lambda$ = 100 is a good choice across benchmark problems. An appendix figure would be enough.\n The authors have adequately addressed the limitations of their work.",
" Simulation-based inference enables Bayesian parameter inference for simulation-based models for which the likelihood cannot be evaluated efficiently, by using simulations from the model as training data. Over the last years there has been developed a suite of neural network-based SBI methods that estimate the posterior distribution directly (neural posterior estimation, NPE), or target the likelihood (neural likelihood estimation, NLE) or the likelihood ratio (neural ratio estimation, NRE) to then obtain posterior samples via MCMC. By design, SBI is usually applied in scenarios where the ground-truth posterior is not known, and with limited training data, e.g., because model simulations are expensive. Thus, the approximated posterior (samples) may be inaccurate for a given problem. Preliminary results in the field indicate that posteriors tend to be overconfident, e.g., they tend to underestimate uncertainties.\nThis submission introduces a variant of NRE, called Balanced NRE (BNRE), with the goal to make posterior estimates more conservative in the low simulation budget regime. Like NRE, BNRE trains a classifier in order to approximate the likelihood-to-evidence ratio, to then perform MCMC. The authors introduce the concept of balanced classifier and introduce a modified loss function to train BNRE such it is a balanced classifier. Furthermore, they show that the optimal Bayesian classifier is balanced, i.e., that in the limit of unlimited training data BNRE will eventually converge to be NRE and obtain the exact posterior. They perform experiments with tractable benchmarking problems to show that BNRE tends to be more conservative than NRE in the low simulation budgets, i.e., show larger posterior variance. They conclude that BNRE provides a new SBI method suitable to obtain conservative posterior estimates. **Originality:** The reliability of posterior estimates obtained with SBI is a timely and very important topic which I am happy to see addressed here. The paper picks one of three main SBI approaches, NRE, and suggests a variant with the goal to make it more conservative in the low simulation budget regimes. The introduction of balanced classifiers is novel and useful for the context of NRE and, as mentioned in the conclusion, it may indeed be applicable to NPE as well.\n \n**Quality**: The introduction of balanced classifiers and BNRE as a variant of NRE with an additional term in the loss function is technically sound and a novelty. However, the experimental evaluation of BNRE is limited and less convincing for several reasons:\n- Choice of benchmarks: The set of benchmark tasks is limited, it contains rather low-dimensional tasks most of which (all? depending on the variants used, exact definitions of the tasks missing) are tractable via MCMC. It would be more instructive to show one or two tractable examples with reference posteriors (obtained via MCMC or analytically) to study the properties of BNRE in depth, and to then show one or two additional “real-world” examples to demonstrate the use of BNRE in practice and to show how it compares to NRE in practice. I see the current results merely as a first study showing preliminary results.\n- Choice of metrics: The results of the paper are mainly based on one single metric measuring the “confidence” of the posterior—the expected posterior coverage. 
This metric was introduced in a recent preprint by Hermans et al [1] which, again, addresses the very important topic of SBI reliability, but makes relatively strong claims about the general reliability of SBI methods based solely on the expected coverage metric, and was not successfully peer-reviewed by the community yet. The evaluation of BNRE would become more informative if additional metrics were considered. One common metric for investigating the posterior coverage and potential biases is simulation-based calibration (SBC). The authors showed in the appendix that in theory expected coverage is a special case of SBC, however, it would be useful to see the practical similarities in the results. Importantly, SBC is able to detect not only miscalibrated uncertainties like over- or underdispersion, but also positive and negative biases of the posterior estimates. Thus, it is additionally suited for evaluating BNRE given that BNRE tends to introduce a bias in the low simulation budget regimes. In addition to SBC which provides a global marginal coverage test across all x, there is a new method for performing a local coverage test for specific observation (Zhao et al. [https://proceedings.mlr.press/v161/zhao21b.html](https://proceedings.mlr.press/v161/zhao21b.html)), which would be useful to add as a metric as well, e.g., in the “real-world” example. \n- For the tractable tasks with known posteriors it would additionally be useful to calculate actual biases, and dispersion with respect to the reference posteriors. \n- For the real-world examples additional checks, e.g., prior and posterior predictive checks would be instructive to show the practical effect of conservative posteriors and difference of BNRE to NRE. \n- Finally, a comparison to other established SBI methods like NPE and NLE would be illustrative. They are readily available in open-source software packages so that adding them to the benchmark would not result in large algorithmic or implementation overhead.\n\nOverall I think, given the importance of this topic, it is essential to evaluate a new method on the available and established metrics (especially when additionally introducing a new metric).\n \n**Clarity**: The paper is very well and clearly written, it was straight forward to follow the line of arguments and the presented results. Regarding the appearance of the figures I have the following comments:\n- A visual explanation of the approach as Figure 0 would be nice to have\n- For most of the subplots in Figure 1 it is hard to see any differences between the methods, a different visualization, e.g., on the log-scale would be more illustrative\n- For Figure 2 it would be good to show standard error of the mean as error bars, given that 5 repetitions were performed\n- For Figure 3 it would better to show standard error of the mean instead of the standard deviation (minor)\n \n**Significance**: As mentioned above, I find the general topic and the approach of BNRE important for SBI research. However, the experiments and results presented here appear to me as preliminary with no direct insights or consequences for other researchers or practitioners (yet). In theory, the BNRE approach makes sense, however, in practice (as indicated by the results presented here) I do not see an advantage over NRE. The results **do** show that on the benchmark tasks presented here, BNRE leads to broader posteriors than NRE. 
However, the additional results in Figure 3 and 4 also indicate that it comes with a larger bias (which makes sense due to the additional term in the loss). While the bias reduces with increasing simulation budget, but even for the same budget, it seems that BNRE performs worse than NRE. Furthermore, BNRE comes with the additional hyperparameter choice for $\\lambda$. The authors make a suggestion for a default value of lambda based on extensive parameter search on the benchmark tasks, however, it is not clear how this choice would extrapolate to real-world scenarios, e.g., for models with more parameters. Thus, from the practitioners point of view, it would not makes sense to use BNRE as of yet. As outlined in detail above, I think this study would benefit from additional tasks and from additional metrics to study the effect of BNRE in more detail. For the benchmarking tasks I suggest to use one that is fully tractable so that it can be studied in detail, and one with real-world character that is challenging, e.g., high-dimensional, so that one can see the practical relevance of BNRE. For metrics I suggest to additionally use SBC, local coverage tests and analytical over and underdispersion and bias (for the tractable tasks). \n\n The authors transparently addressed limitations of their approach, given the scope of the results presented in the paper (see comments above). \n\nUpdate after rebuttal: The authors addressed all of my questions and the revised version of the submission now adequately discusses limitations of the approach to the low-dimensional parameter regime. I updated my score accordingly. ",
" The paper introduces and discusses a regularization approach for neural ratio estimation that is based on enforcing a balancing condition of the classifier (via an additional regularisation term). It is shown theoretically that on average (expectation) this results in a more conservative classifier.\nIn several experiments it is shown that the approach indeed results in a more conservative posterior. __Strengths__\n- Ensuring that posteriors derived by deep learning algorithms are valid is very important.\n- The use of the balancing condition as an additional measure is an interesting idea. \n- The method is easy to implement.\n- The experiments show that the posterior tends to be more conservative when the regularizer is applied.\n- The induced bias and variance diminish with increasing sample size\n- Little to no exaggeration. The authors are very honest about the advantages and disadvantages of their method which makes the contribution all the more valuable to the field. \n\n__Weaknesses__\n- The penalty term does result in a worse calibration in most examples. While the posterior is more conservative the calibration is often worse, in particular when the number of samples is low. Especially for models where simulation is costly this might be an issue. \n- While I agree that conservative posteriors are preferable, calibration is not a binary issue. Overly conservative posteriors might be unable to identify a parameter to a reasonable degree. In the Weinberg, SIR and Lotka Volterra models, a relatively accurate posterior is traded in for a mis-calibrated conservative posterior. I think the trade-off is important. \n- There is a broad literature in statistics considering model (over-)confidence [A - C] which I think should be mentioned. In particular, power posteriors are quite popular to adjust the posterior in order to account for discrepancies between a model and real data. Since NRE does not estimate the likelihood directly, it might be necessary to use the presented approach instead of power posteriors, but a discussion should be added.\n\n> Recently, Dellaporta et al. [26] further improved GBI by combining \n- The cited work has advantages and disadvantaged but it's not an \"improved\" version of GBI, it's a variant. In particular, it best applies to IID data. When using time-series data as many of the models in this paper [D] would be a more fitting GBI procedure. \n\n> M/G/1, originally introduced by Papamakarios et al. [4],\n- Papamakarios et al. however reference [E] for the M/G/1 model. This should be amended. \n\n\n__References__\n\n[A] - Peter Grünwald. Safe learning: bridging the gap between Bayes, MDL and statistical learning\ntheory via empirical convexity. In Proceedings of the 24th Annual Conference on Learning\nTheory, pages 397–420, 2011.\n\n[B] - Peter Grünwald. The safe Bayesian. In International Conference on Algorithmic Learning\nTheory, pages 169–183. Springer, 2012.\n\n[C] - Chris Holmes and Stephen Walker. Assigning a value to a power likelihood in a general\nbayesian model. Biometrika, 104(2):497–503, 2017\n\n[D] - Dyer, Joel, Patrick Cannon, and Sebastian M. Schmon. \"Approximate Bayesian Computation with Path Signatures.\" arXiv preprint arXiv:2106.12555 (2021).\n\n[E] - A. Y. Shestopaloff and R. M. Neal. On Bayesian inference for the M/G/1 queue with efficient MCMC\nsampling. arXiv:1401.5548, 2014. - Could you please write down the definition of a classifier? 
I assume you mean something like $d: X \\rightarrow [0, 1]$\n- The discussion around equation (6) is confusing. You say that $\\hat{p}(\\theta|x) < {p}(\\theta|x)$ whenever the $\\hat{d}(\\theta,x) < {d}(\\theta,x)$, but clearly $\\hat{p}(\\theta|x) < {p}(\\theta|x)$ can't hold for all $\\theta$ since we are considering densities. Hence, the crucial aspects of whether the density is more conservative is _for which_ $\\theta$ one has $\\hat{p}(\\theta|x) < {p}(\\theta|x)$ and that seems to be missing. \n- Generally the model details are a bit short. For example, you reference Lotka and Volterra indicating that the underlying model is a deterministic ODE following the Lotka-Volterra equations. However, commonly people in SBI work with the stochastic version, describing a Markov jump process. I would be nice to have more information here. \n- I am not so sure about the expected coverage as a special case of SBC. The internal consistency of the joint distribution that you're relying on is something that is remarked even in the SBC paper to be known before. The devil is in the detail and from my point of view SBC is about an actionable algorithm, which is more than just the expectation that you allude to in Appendix A. In particular, the actual computations (checking whether rank statistics are uniformly distributed vs comparing coverage) are quite different in both cases. The paper contains a good discussion of limitations. A slight improvement would be to discuss the trade-off between accurate calibration and conservative posteriors as mentioned above."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
5
] | [
"U3ky3JJojRV",
"AsEt7fj5s0b",
"3aJJ6UeOpc",
"hq7dhKU8Rqq",
"KVt5VMOmqQ",
"c-vPJs_G0in",
"1MvK46KhP0l",
"_IP_aiPyt99",
"_IP_aiPyt99",
"2oJEZOpRDYy",
"NLbqLe_qwHM",
"JBtRJeZvjloL",
"PFshcbP6ec9",
"SydkzNUhyZ7",
"d71VQgqr_yi",
"nips_2022_o762mMj4XK",
"nips_2022_o762mMj4XK",
"nips_2022_o762mMj4XK",
"nips_2022_o762mMj4XK",
"nips_2022_o762mMj4XK"
] |
nips_2022_d229wqASHOT | Adv-Attribute: Inconspicuous and Transferable Adversarial Attack on Face Recognition | Deep learning models have shown their vulnerability when dealing with adversarial attacks. Existing attacks mostly operate on low-level instances, such as pixels and super-pixels, and rarely exploit semantic clues. For face recognition attacks, existing methods typically generate l_p-norm perturbations on pixels, resulting in low attack transferability and high vulnerability to denoising defense models. In this work, instead of perturbing low-level pixels, we propose to generate attacks by perturbing high-level semantics to improve attack transferability. Specifically, a unified flexible framework, Adversarial Attributes (Adv-Attribute), is designed to generate inconspicuous and transferable attacks on face recognition, which crafts adversarial noise and injects it into different attributes under the guidance of the difference in face recognition features from the target. Moreover, an importance-aware attribute selection and a multi-objective optimization strategy are introduced to further balance stealthiness and attacking strength. Extensive experiments on the FFHQ and CelebA-HQ datasets show that the proposed Adv-Attribute method achieves state-of-the-art attack success rates while maintaining better visual effects than recent attack methods. | Accept | This paper studies adversarial attacks on facial recognition systems. The key contribution is that, instead of directly manipulating pixel space, this paper proposes to perturb facial attributes for generating inconspicuous and transferable adversarial examples. The initial concerns were mostly about requiring 1) more ablations/comparisons, and 2) clarifications on experiment details and visualization (especially Figure 6).
Most concerns are well addressed in the rebuttal, and 3 (out of 4) reviewers agree to accept this paper. The reviewer RXXP is still (slightly) concerned about the novelty contribution and rates it as a borderline case. Given its effectiveness and comprehensive analysis, the AC agrees that this paper has its own merits and will be of interest to the general NeurIPS community, and therefore recommends accepting it.
In the final version, the authors should include all the clarifications and the additional empirical results provided in the rebuttal.
| train | [
"UTMz7UK2Yot",
"Q8454TVbR6",
"8eqafMy2z0e",
"OJuAq2DDn4D",
"9vNAHpn-WdV",
"O918jL5ubQK",
"LC7b7wvpi3I",
"Vko4bl1F1e1",
"UURI7QzRcRF",
"TU2Cya-A8ie",
"NNw94j-0lJY",
"AvOTKbOB2yW"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Many thanks for your reply and we will further address your concerns as follows.\n\n**[Q3: Semantic inconsistency.]** In the revised supplementary material, we provide more qualitative results from the FFHQ and CelebA-HQ datasets. Figure E and Figure F compare the original source faces, the edited faces by original StyleGAN [19] and the edited faces by our attack. In general, the majority of adversarial edited faces achieve favourable visual quality and attacking performance, while only several examples (e.g., the third row in Figure F) are slightly semantically-inconsistent with the original image for human observers.\n\nIn order to explore the factors that affect the visual quality in our method, we firstly apply our attack with a more advanced face generator (i.e., HFGI [A]) as you suggested. Since the official implementation of HFGI only provides five off-the-shelf attributes in the edited spaces (i.e., smile, age, lip, bread and eyes), which are not consistent with the setting of our attack, we choose bread and smile as the selected attributes during attacking for the fair comparisons. Figure G in the revised supplementary material compares the visual quality of adversarial faces by HFGI [A] and StyleGAN [19]. The numbers below the images are the cosine similarity scores between the targeted faces and crafted faces. We observe that HFGI [A] slightly improves the image quality and achieves comparable attacking performance with our original generator [19]. It indicates that our attack method does not rely on the specific model and could be deployed to various generative models. Due to the limited time, we will add the complete experimental results by HFGI [A] in the revision. \n\nOn the other hand, we further explore the semantic inconsistency through analyzing the attribute editing space for the third example in Figure 6. We observe the visual effects on adversarial faces with different selected attributes and find that the pale face is not fully disentangled in StyleGAN [18], leading to the semantic inconsistency in the third example of Figure 6. Thus, we try to remove this attribute and utilize the rest attributes as the editing spaces to generate the adversarial face by our attack. Figure D compares the adversarial faces under original settings and the adversarial faces without pale face. The crafted adversarial example without pale face is much closer to the original face for human observers. We will supply these discussions and analyses in the revised paper.\n\nThanks again for devoting your time to the careful review. \n\nReference:\n\n[A] High-Fidelity GAN Inversion for Image Attribute Editing, In CVPR, 2022.\n",
" Many thanks for your reply. We agree that no strict guarantee like traditional $\\ell_p$ bounds can be used in the attribute subspace for our attack. We will clarify this issue in our revision as you suggest. Additionally, we calculate MSE scores between the original images and our adversarial faces in Table 3, which may serve as an optional evaluation metric to compare the pixel-level variation for different methods.\n\nWe will further polish this paper based on your comments. Thanks again for devoting your time to the careful review.",
" Thanks for the response to my questions. The authors clarified some of my concerns, including:\n\n1. Q1 - utility of the attacks: the authors specify the scope of this paper for digital-attacks, which is ok to me.\n2. Q2 - notations and training details: this has been clarified. The authors should further revise the paper to make them clearer.\n3. Q4 - compare with GenAP: thanks for the new results. It seems that the proposed method is more effective. Please add the results in the revision. \n\nHowever, I am still not convinced by the response to Q3. The authors have stated that \"Besides, our attack depends on the original StyleGAN to generate adversarial faces, thus the quality of adversarial faces is limited by the synthetic ability of StyleGAN.\" Could you try more effective generative models? The current results in the third example in Figure 6 make me doubt about the performance. Do you cherry-pick the visualization results?",
" Thanks for the detailed response from authors.\n\nOverall, the proposed method manipulates image in the semantics (attribute) space, which can be seen as a new type of perturbation bound compared to the traditional L-infty and L-2. I think even it is hard to conduct the attack in physical setting, demonstrating this conceptually is ok.\n\nHowever, the proposed method heavily depends on a high quality pre-trained generative model, which makes the proposed method inapplicable in domains where there is no such generative model. \n\nSecondly, the proposed method also inherits the long-existing issue on evaluating generative models. Numerical evaluation on the generation quality of GAN is always a problem which is not fully aligned with human perception. To my surprise, another reviewer also found that the third row in Figure 6 has switched identity. Human evaluation is also important regardless of the numerical scores. That said, I do not attempt to make the author struggle with a single badly picked example, and I understand that the generated faces are empirically (in most cases) not touching identity information given the provided additional evaluation. Unlike traditional L-p bounds, where we can strictly guarantee that the perturbation will not exceed the bound. In the current problem setting, there is no strict guarantee that the perturbation will not exceed the attribute subspace. This should be clearly clarified in the experiments section, in order to avoid inducing any unfair comparison for future works due to the underlying problems inherited from GANs.\n\nI'm willing to raise the score to 5, but no higher, based on the manuscript quality and its potential impact in adversarial defense for face recognition.",
" Thanks for your valuable comments and we address your concerns as follows.\n\n**[Q1: Utility of our method.]** Our attack mainly focuses on digital attack, since it can also bring a potential threat to online face recognition (FR) applications, e.g., uploading adversarial photos to impersonate others. Compared with existing digital attacks [6, 16, 24, 27, 33, 36] on FR models, our attack aims to generate more natural and inconspicuous adversarial faces and achieve stronger attack transferability simultaneously. Although some existing methods like wearing AdvGlass [27] and AdvHat [16] implement physical attacks on face recognition, the adversarial patches are visually noticeable and have weak attack transferability across FR models. We consider that both digital and physical attacks are essential for face recognition security, which will draw attention to improving the robustness of FR models.\n\n**[Q2(1): Notations.]** Consistent with your understanding, the vicinity appro vector is defined as $v_i = z_i + n_i$. We will clarify this notation in the revision. Since we do not train or finetune the StyleGAN models, the real attribute vector $z_i$ is a constant vector. Therefore, applying the $ \\ell_2 $-norm to $v_i$ can also restrict the magnitude of $n_i$, which is equivalent to applying the $ \\ell_2 $-norm to $n_i$.\n\n**[Q2(2): Training details.]** During the whole process, we remain the StyleGAN model fixed and thus assign no training data to StyleGAN model. As for the training of adversarial noise generators, we first ensure the targeted faces and randomly choose face images with other identities from FFHQ or CelebA-HQ as training data. The attribute noise generators are trained using Adam optimizer with an initial learning rate 0.0001. More experimental setup can be referred to Section 4.1.\n\n**[Q3: Semantic inconsistency.]** The original StyleGAN itself has slight impacts on the visual quality of edited faces. However, we calculate that the recognition accuracy between original faces and edited faces by the original StyleGAN is 100% for all three FR models on both FFHQ and CelebA-HQ, which indicates that editing attributes with original StyleGAN does not change its original identity. Besides, our attack depends on the original StyleGAN to generate adversarial faces, thus the quality of adversarial faces is limited by the synthetic ability of StyleGAN. In the supplementary material, Figure C shows original source faces, edited faces by the original StyleGAN, and edited faces by our attack (including the third example in Figure 6). And the image quality of edited faces by our attack is close to the ones by the original StyleGAN. \n\nWe consider that the multiple selected attributes like smiling, mustache, blurry and pale skin could impact the visual semantic inconsistency to the original face, while the FR models still recognize them as the same identity. Moreover, compared with existing stealthy-based attacks (e.g., Adv-Face [6], Adv-Makeup [26] and Semantic-Adv [24]), our method generates more inconspicuous adversarial faces and further strengthens the attack transferability across FR models.\n\nThanks for your suggestion, we will add more discussions in the revision.\n\n**[Q4: Compared with GenAP [33].]** Strictly following the settings of GenAP [33], we select eye regions to craft the adversarial patches. 
The tables below compare the attack transferability between GenAP [33] and our attack on FFHQ and CelebA-HQ, using the same FR models to attack both basic models and robust models. In general, our method performs better transfer attacks on all datasets compared with GenAP. Meanwhile, the edited faces by our attack are more inconspicuous than the adversarial patches of GenAP. We will supply the complete experimental results and visualizations in the revision.\n\nTable 1: ASR results of Gen-AP and our attack against basic models.\n\nFFHQ:\n| Target model | IR152 | MobileFace | FaceNet |\n| :----: | :----: | :----: | :----: | \n| Gen-AP | 12.00 | 19.90 | 8.20 | \n| Ours | 44.30 | 50.20 | 31.80 |\n\n CelebA-HQ:\n| Target model | IR152 | MobileFace | FaceNet |\n| :----: | :----: | :----: | :----: | \n| Gen-AP | 19.50 | 24.40 | 15.80|\n| Ours | 46.30 | 49.90 | 31.90| \n\nTable 2: ASR results of Gen-AP and our attack against robust models.\n\nFFHQ:\n| Training model | -IR152 | -IR152 | -MobileFace | -MobileFace | -FaceNet | -FaceNet |\n| :----: | :----: | :----: | :----: | :----: | :----: | :----: |\n| Target model | AR | TR | AR | TR | AR | TR | \n| Gen-AP | 36.50 | 13.70 | 34.00 | 10.60 | 27.70 | 9.80 |\n| Ours | 60.40 | 30.10 | 53.30 | 26.30 | 63.00 | 30.10 |\n\n CelebA-HQ:\n| Training model | -IR152 | -IR152 | -MobileFace | -MobileFace | -FaceNet | -FaceNet |\n| :----: | :----: | :----: | :----: | :----: | :----: | :----: |\n| Target model | AR | TR | AR | TR | AR | TR | \n| Gen-AP | 46.00 | 15.80 | 45.30 | 15.70 | 44.90 | 15.30 |\n| Ours | 60.80 | 33.50 | 56.60 | 34.00 | 61.60 | 34.10 |",
" Thanks for your valuable comments and we address your concerns as follows.\n\n**[Q1: Problem setting and evaluation.]** According to your suggestion, we choose 100 face images from FFHQ and CelebA-HQ and randomly select from these five attributes with different magnitudes to edit the faces by the original StyleGAN [19] for 10 times, and calculate the recognition accuracy between original faces and edited faces by StyleGAN. We find that the recognition accuracy is 100% for all three FR models (i.e., IR152, MobileFace and FaceNet) on both datasets, which indicates that editing facial attributes with StyleGAN [19] does not change its original identity. On the other hand, the aim of our attack is to make FR models recognize the adversarial faces as the targeted person. First, we compute the average cosine similarity between targeted faces and original faces/edited faces by original StyleGAN on FFHQ and CelebA-HQ, as shown in Table 1. It indicates that the original attribute editing hardly affects the similarity to the target face. Furthermore, we calculate the average cosine similarity between targeted faces and edited faces by our attack on FFHQ and CelebA-HQ, as shown in Table 2. The adversarial faces by our attack successfully impersonate the targeted identity with higher cosine similarity.\n\nTable 1: Cosine similarly between targeted faces and original faces/edited faces by original StyleGAN.\n\n| FR model | IR152 | MobileFace | FaceNet|\n| :----: | :----: | :----: | :----: |\n| FFHQ | 0.045/0.043 | 0.168/0.147 | 0.083/0.061 | \n| CelebA-HQ | 0.052/0.049 | 0.179/0.157 | 0.099/0.072 |\n\n\nTable 2: Cosine similarly between targeted faces and edited faces by our attack.\n\n| FR model | IR152 | MobileFace | FaceNet|\n| :----: | :----: | :----: | :----: |\n| FFHQ | 0.231 | 0.306 | 0.412 |\n| CelebA-HQ | 0.237 | 0.306 | 0.429 |\n\nAdditionally, Figure C in the supplementary material illustrates a qualitative comparison between edited faces by original StyleGAN and edited faces by our Adv-Attribute attack. As for the third line in Figure 6, the cosine similarity between the original face and the edited face by the original StyleGAN (e.g., 0.675 for MobileNet, and 0.856 for FaceNet) is much higher than the predefined thresholds, which is still recognized as the same identity for three FR models. This demonstrates that the identity information is not changed after editing attributes. Meanwhile, we consider that the selected attributes (e.g., smiling, mustache, blurry and pale skin) may affect the visual quality to some extent. We will add the results and more examples in the revision.\n\n**[Q2(1): Limitation.]** Thanks for your suggestion. We agree with you that the quality of edited faces depends on the synthetic ability of face generators (i.e., StyleGAN [19]) and we will further discuss this limitation in the revised manuscript. Based on the generator, the main contribution of this paper is to integrate the adversary into the semantic information (i.e., face attributes), improving the attack transferability across FR models with more inconspicuous adversarial faces.\n\n**[Q2(1): Other domains.]** While the proposed Adv-Attribute attack is specially designed for face recognition task, the idea of integrating the adversary into the image generation process can be transferred into other domains (e.g., ImageNet classification). 
Although in some scenarios a high-quality generator is not available, we can also hide the adversarial noise in simple image transformation processes, including changes in brightness, contrast, etc.",
" Thanks for your valuable comments and we address your concerns as follows.\n\n**[Q1: Vicinity appro vector $v_i$.]** The definition of the vicinity appro vector $v_i$ corresponds to Eq.2 as $v_i = z_i + n_i $. We will add this equation in the revision.\n\n**[Q2: Distribution of $\\omega_1$ and $\\omega_2$.]** We illustrate the variation of balanced weights $\\omega_1$ and $\\omega_2$ when attacking IR152 on the CelebA-HQ dataset in the table below.\n\n| Epoch | 1 | 5 | 10| 15 | 20 | 25 | 30 | 35 | 40 | 45 | 50 |\n| :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: |\n| $\\omega_1$ | 0.50 | 0.56 | 0.72 | 0.66 | 0.35 | 0.84 | 0.77 | 0.61 | 0.47 | 0.43 | 0.75 | \n| $\\omega_2$ | 0.50 | 0.44 | 0.28 | 0.34 | 0.65 | 0.16 | 0.23 | 0.39 | 0.53 | 0.57 | 0.25 |\n| $L_{all}$ | 6.87 | 5.36 | 5.16 | 4.92 | 4.70 | 4.52 | 4.41 | 4.42 | 4.42 | 4.32 | 4.43 |\n\nAs the number of epochs increases, $\\omega_1$ and $\\omega_2$ are dynamically adjusted to balance the weights of the impersonation attack loss $L_{adv}$ and the stealthy loss $L_{stea}$, resulting in a drop of the overall training loss $L_{all}$. Additionally, Figure B in supplementary material plots the variation of the overall loss $\\mathcal{L}_{all}$ with and without the multi-objective optimization (i.e., set $\\omega_1=0.50$ and $\\omega_2=0.50$ fixed during training), which indicates that our method combined with the multi-objective optimization better balances these two conflicting losses and yields a stronger attack.\n\n\n**[Q3: Textures of adversarial faces.]** During our attack, we keep the face generator (i.e., StyleGAN [19]) fixed to edit semantic attributes on face images. Thus, the quality of edited face images is limited by the synthetic ability of StyleGAN [19]. In supplementary material, Figure C compares original source images, edited faces by the original StyleGAN, and edited faces by our attack. The image quality of edited faces by our attack is close to the ones edited by the original StyleGAN. Moreover, since we choose pale skin and blurry as two of five selected attributes in our editing spaces, these two attributes could smooth and lose partial textures on edited faces by both the original StyleGAN and our attack. We will add more discussion in the revision.\n",
" Thanks for your careful review. We will fix this typo and thoroughly proofread the paper again.",
" This paper studied the inconspicuous and transferable adversarial attacks on face recognition models. Different from previous works that consider $L_p$ norm perturbations, this paper introduces the framework of adversarial attributes, which generates noise on the attribute space of face images based on StyleGAN. An importance-aware attribute selection approach is proposed to ensure the stealthiness and attacking performance. The experiments on various face models show the effectiveness. Strengths:\n\n+ A new method for face attack is proposed, which crafts adversarial noises on the attribute space of face images. The general idea is reasonable and well realized.\n+ The framework of face attribute attack is illustrated clearly with some novel techniques, such as important-aware attribute selection and multi-objective optimization.\n+ The experiments on typical and robust face recognition models show the effectiveness of the proposed method.\n\nWeaknesses:\n\n- A significant drawback of the proposed method is that it is a digital-world attack method, which can hardly be implemented in the physical world.\n- The notations used in this paper are unclear. The paper uses $v_i$ to denote the vicinity appro vector, but what is the exact formulation of $v_i$. I guess $v_i=z_i + n_i$. But why do you adopt the $L_2$ norm of $v_i$ in the stealthy loss in Eq. (5)? I guess it should be the $L_2$ norm of $n_i$?\n- How to train the whole framework? Do the StyleGAN models keep fixed or need finetuning? How do you choose the training data?\n- The semantics of adversarial face images seem to be inconsistent to the original ones. For the third example in Figure 6, I can hardly recognize the attacker image and adversarial image as the same person.\n- The paper did not compare with a state-of-the-art patch attack method [33], which is also based on StyleGAN. 1. Explain the utility of the proposed method, especially the implementation in the physical world.\n2. Clarify the notations and training details.\n3. Discuss the semantic inconsistency between real images and the adversarial images.\n4. Compare with the SOTA face attacks. The authors have discussed the limitations and negative societal impact of their work.",
" This paper presents a new attack based on styleGAN to manipulate face image attributes and achieve impersonation. Compared to existing methods including gradient-based, patch-based and stealth-based ones, the proposed method aims to result in less visible artifact (to ensure the adversarial perturbation is imperceptible). The proposed method is specific to face recognition and impersonation attack. Experimental evaluations with typical face recognition model and defensively trained face recognition model demonstrate the effectiveness of the proposed method. # Strengths\n\n1. The proposed method is clearly-motivated, intuitive and effective as demonstrated by experimental results.\n\n# Weaknesses\n\n1. [important; problem setting and evaluation] Is identity information really disentangled from the attribute dimension? Is editing facial attribute really leaving identity information intact? Compared to existing methods, a fair setting should be generating person image with identity information intact while attributes can be changed. If identity information is changed to any extent, it would be much more easier to impersonate. In this regard only several qualitative example is not enough, because the given examples (figure 1 and figure 6) give me a feeling that the resulting person image is not the original identity... especially the third row in figure 6. If the attribute editing is not purely editing attributes, then the comparison in table 1 and table 2 in fact involves unfair comparison, because the other methods like gradient methods do not manipulate identity.\n\n2. [limitation] The proposed method is deeply dependant on the underlying face generation model (style GAN) in terms of both imperceptibility of modification artifacts (quality of generated image), as well as attribute manipulation. Meanwhile, when a high-quality generator is not available in a different domain (like natural images form imagenet), this method will be invalid in that domain. My two major concerns about the paper is written in weaknesses. The attribute editing process is not guaranteed to be orthogonal to identity information. Meanwhile, the proposed method is highly dependent on face generator (styleGAN) and not very flexible to be extended to other domains.\n\n---\n\nThe authors have justified the mentioned problems. Some of my original concerns persist, but they largely stem from the underlying GAN.\nAfter rebuttal, I raised the score from 4 to 5. Limitations not clearly elaborated.",
" The authors have proposed to produce inconspicuous and transferable adversarial attacks on face recognition systems. Instead of perturbing the pixel intensity, the authors propose to semantically perturb the facial attributes such as smiling, eyeglass, mustache, blurry, and pale skin. Although adversarially manipulating facial semantics or attributes is not new, in this paper, the authors do bring to the table something new: (1) importance-aware attribute selection strategy that can select and update one particular attribute noise vector that leads to the largest degradation of adversarial loss in each step, and (2) multi-objective optimization that can balance the stealthiness and attacking strength. Experimentally, the proposed method is benchmarked against several types of adversarial attacks including gradient-based noise attacks, patch-based attacks, and stealthy-based attack methods. The evaluation is carried out on both basic face recognition models, as well as robust ones with adversarial training. The proposed method performs favorably compared to the baselines in terms of both the direct attack success rate as well as black-box transferability. \n Strengths:\n\n(1) The paper is well written. The presentation and organization of various components is very clear. The experiments are thoroughly carried out with adequate baselines as a comparison. \n\n(2) Although adversarial perturbation based on semantic manipulation is not new, the authors have managed to incorporate two new components (importance-aware attribute selection and multi-objective optimization to balance the two conflicting losses) to improve upon prior semantic-based adversarial face perturbations. \n\n(3) The experimental results are pretty strong, in favor of the proposed method, in terms of the performance on both the basic FR, robust FR, as well as on black-box transferability tasks. \n\nWeaknesses:\n\nThis is overall a solid paper. At this review stage, I do not seem to spot apparent weaknesses in this submission. \n\n\nMinor point: on line 206, there is a typo: steady-based should be stealthy-based. \n I do not have questions. \n Yes.",
" This paper proposes an adversarial attack method, Adversarial Attributes, against face recognition. To improve attack transferability, this paper generates attacks on the high-level semantics by injecting the adversary into the edited latent vectors in several attributes. To balance stealthies and attacking strength, an importance-aware attribute selection and the multi-objective optimization strategy are introduced. Experiments carried out on FFHQ and CelebA-HQ datasets demonstrate the effectiveness of the proposed method. Strengths:\n\n1. The idea of generating attacks through perturbing on the high-level semantics (facial attributes) is interesting and novel and facilitates attack transferability.\n\n2. Proposed Adversarial Attributes Perturbation has some technical novelties with clear figure illustration. Importance-Aware Attribute Selection and Multi-Objective Optimization is reasonable with theoretical support.\n\n3. Evaluation is reasonably thorough and both promising qualitative and quantitative results are claimed. Ablation study also demonstrates the effectiveness of each module in the proposed attack method.\n\nWeaknesses:\n\nFrom Visualizations in this paper, e.g. Fig. 6, it seems textures of adversarial faces generated by the proposed attack methods are smoothed and lost. This may be one of the drawbacks of the proposed method.\n 1. Which equation corresponds to the vicinity appro vector vi? How to define the vicinity appro vector vi statistically?\n\n2. What is the distribution of values of balanced weights w1 and w2? Could you show more results for these two balanced weights?\n\n3. Why most of textures of adversarial faces generated by the proposed attack methods are smoothed and lost?\n Yes"
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"8eqafMy2z0e",
"OJuAq2DDn4D",
"9vNAHpn-WdV",
"O918jL5ubQK",
"UURI7QzRcRF",
"TU2Cya-A8ie",
"AvOTKbOB2yW",
"NNw94j-0lJY",
"nips_2022_d229wqASHOT",
"nips_2022_d229wqASHOT",
"nips_2022_d229wqASHOT",
"nips_2022_d229wqASHOT"
] |
nips_2022_nX-gReQ0OT | Gold-standard solutions to the Schrödinger equation using deep learning: How much physics do we need? | Finding accurate solutions to the Schrödinger equation is the key unsolved challenge of computational chemistry. Given its importance for the development of new chemical compounds, decades of research have been dedicated to this problem, but due to the large dimensionality even the best available methods do not yet reach the desired accuracy.
Recently the combination of deep learning with Monte Carlo methods has emerged as a promising way to obtain highly accurate energies and moderate scaling of computational cost. In this paper we significantly contribute towards this goal by introducing a novel deep-learning architecture that achieves 40-70% lower energy error at 6x lower computational cost compared to previous approaches. Using our method we establish a new benchmark by calculating the most accurate variational ground state energies ever published for a number of different atoms and molecules.
We systematically break down and measure our improvements, focusing in particular on the effect of increasing physical prior knowledge.
We surprisingly find that increasing the prior knowledge given to the architecture can actually decrease accuracy. | Accept | There is a clear consensus among the reviewers that this is a quality paper and worthy of acceptance (in fact, this may be the first time I've ever seen 4 reviewers give the exact same score), so I recommend accept.
I do however have one additional comment. I find the current title somewhat unwieldy and wonder if it would be possible for the authors to condense it at all. This is not a critical issue, of course, but one that the authors may want to consider (if the program chairs allow it). | train | [
"oOfGUR5Yjj",
"CHOWTb1CXD_",
"yjG4yiywau",
"4Lj66G-Z8E",
"dtch0ur1HUt",
"9loB3M4kwBrR",
"BCKGbpUnQ3W",
"vATYRI-UftQ",
"XGiFILiqHrh",
"TqeKdUiehpK",
"bsNygndN_LAr",
"-pIxqKcoQTp",
"AsVt6zn4K3M",
"9EgffXp72yK",
"jbUaHN_lp07",
"VtjKOqcQOFf",
"c1MbEXRO0Qm"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the detailed and clarifying response.",
" Thank you once more, for reviewing our paper and helping us to improve it!",
" Thank you for reviewing our paper and your constructive feedback!\n\nYes, for the systems such as 4th row atoms (K, Fe), and large molecules (e.g. Glycine), there are no published results using FermiNet, so we used the best variational reference energies we could find in the literature. ",
" Thanks for the kind words! \n\nWe're glad to hear you enjoyed the paper, and are very grateful for your support and helpful feedback.",
" thanks again again for the answers to all questions. it has been a joy to review this paper, and I will strongly champion it in the further discussions.",
" Thank the authors for the reply. I understand the full theoretical is out of the scope of this work. And I think the motivations for these modifications that can improve accuracy given by the authors are reasonable. Along with the good performance of the presented method, I will increase my score to 7 and recommend the acceptance of this paper.",
" Thanks for the clarification. Adding Table 6 makes the speedup easier to quantify. As for the convergence, I am convinced by the authors' response that the QMC evaluation is accurate and the uncertainty is smaller than the chemical accuracy. I also notice for the systems with energy well below the reference (K, Fe, Glycine), the authors mainly compare with the MRCI method and FermiNet (VMC or DMC) is not used in comparison (Appendix A, Table 1). I have limited knowledge about the non deep learning methods. If the reference energies are indeed state-of-the art, then I think the improvement of deep QMC over classical methods is well demonstrated by the authors. \n\nWith the above considerations, I will increase my score and will recommend acceptance of this paper. To further make the convergence / speedup more evident, the authors could consider showing some plots of training energy versus training iterations. However, this should be optional.",
" Thank you for your reply!\n\nI agree that finding out the right amount of prior knowledge to bake into the model is indeed a walk on a tightrope. Providing proper evidence that the physics prior is harmful is refreshing, in particular as there are so many armchair scientists at conferences and/social media who happily share their (usually strong) opinions about the need of “principled physics”, but then don’t provide any experiments. It is much appreciated indeed that the authors instead proceed in the opposite way, providing ample empirical evidence while staying humble, and I do recommend that this paper will make its way into the conference.",
" Thank you very much for your review! Regarding your specific question:\n\n**W1**: This paper demonstrates great experimental results, but lacking the necessary theoretical explanation to address the accuracy improvement under reduced computational cost. Section 4 discussed in detail the improvements obtained by each individual change and their combined effects, but the discussion is still restricted within the experimental point of view. Adding theoretical explanation on why the proposed method can obtain accuracy better than the classical as well as existing deep learning method can be beneficial.\n\n**A**: While a full theoretical analysis of deep learning based Variational Monte Carlo is beyond the scope of our work, we explain the motivation and probable cause of our accuracy improvements below. \nWe will add those points to section 4 in the final version of the manuscript, when the page limit is increased to 10 pages.\n\n- *Embedding*: Our embedding is strictly more general than both PauliNet and FermiNet and our ansatz can therefore better approximate the true solution and reach lower energies.\n As discussed in Sec. 2.1 we can represent two-particle interactions more expressive than FermiNet and one-particle effects more expressive than PauliNet. \n In addition we have a dedicated electron-nucleus embedding which further increases expressiveness.\n- *Dense determinants*: Dense determinants are a direct generalization of block-diagonal determinants and therefore naturally more expressive. There is empirical evidence [1] that this improves the description of the wavefunction's nodal surface, but we are not aware of a thorough theoretical analysis.\n- *Local input features*: As argued in Sec. 2.2. and depicted in Fig. 3, local coordinates capture physically more meaningful inputs than raw cartesian coordinates and can therefore be seen as physics-inspired feature engineering, leading to modest improvements in energy.\n The key improvement however is that enforce the required symmetries, enable transfer-learning of the wavefunction across different molecules or geometries.\n- *Envelope initialization*: Initializing the wavefunction parameters closer to their optimal values accelerates optimization, because fewer update steps are needed, and the Monte Carlo sampling provides a better distribution of samples. By using the explicit analytical solution of the one-electron problem, we obtain a well founded estimator for the envelope exponents, which enables initialization of these parameters closer to their optimal values.\n- *Hyperparameters*: The two key changes we propose (besides smaller batch size) are lower learning rate and higher gradient norm-constraint (i.e. less gradient clipping).\n This ensure that more steps are taken according to the curvature estimate by the KFAC-optimizer and fewer steps are limited by the clipped gradient norm.\n\n[1]: *Explicitly antisymmetrized neural network layers for variational Monte Carlo simulation*, Lin et al., arxiv.org:2112.03491",
" Thank you very much for your review! Regarding your specific points:\n\n**Q1**: Some results in Figure 4 are lower than the reference energy, especially for larger molecules where the differences are significant (>100mHa) compared to the chemical accuracy (~2mHa). To which extent are we certain that the QMC results are correct? Although the variational principe could grantee the variational energy to be upper bound but doesn't it assume the sampling to be accurate in the first place? For example, in the Figure 3 of the FermiNet paper [Pfau et al. 2020], during VMC training, the energy could overshot when the MCMC step size is not large enough.\n\n**A**: We are highly confident in the accuracy of our QMC energies to within 0.1-0.5 mHa, well beyond the 100mHa difference observed for larger molecules.\nWe have two independent estimates for this error: Firstly, we can estimate the Monte Carlo uncertainty from the variance within the Monte Carlo samples of a single run. Secondly, we can analyze the energy variance between independent optimization runs as we do for the $N_2$ study.\nBoth estimates yield uncertainties well below the inter-method energy differences.\nWe ensure that our Monte Carlo samples are indeed independent and unbiased during energy evaluation: We perform 20 Metropolis-Hastings steps between energy sampling to ensure independence, and do not optimize wavefunction parameters during final energy optimization to avoid bias.\nSimilar to our work [Pfau et al. 2020] also uses a separate evaluation run without optimization to avoid this bias in the final energies.\nLastly, we want to stress that the overshooting in Figure 3 of [Pfau et al. 2020] is depicted on a logarithmic scale and may therefore appear exaggerated on first glance. It is far less than the actual 100mHa by which we outperform classical methods.\n\n \n\n**Q2**: Under which setting the claimed \"40-70% lower energy error at 8x lower computational cost\" is obtained? As shown in Table 5, the runtime per epoch improves from ~6s to 3.6s. How is the convergence time defined?\n\n**A**: The claims are obtained for the total time of computation under the prerequisite of the same hardware compared to the previously most accurate deep-learning-based method FermiNet. \nThe two key factors leading to the speed-up are that we require fewer training iterations and smaller sample sizes.\nInitially, we roughly estimated the speed-up by 4x fewer training iterations each being 2x faster due to a 50% smaller batch size. Since our architectural changes slightly slow down each iteration by roughly 20%, we agree that this is an slight overstatement and therefore adjust the mentioned speed-up in the manuscript to be around a factor of 6. \nTo increase transparency, we have added in the appendix C of the manuscript a complete table stating for each system the computed speed-ups and accuracy improvements with regard to FermiNet. \nWe want to highlight that for certain systems (e.g. Ar, P, S) we reach similar accuracy as FermiNet 10x faster, and that we reach 70-90% higher accuracy with a speed-up of 6x for medium sized molecules (e.g. C$_4$H$_4$, Ne).\n\n \n\n**W1**: Since the paper essentially integrates FermiNet and PauliNet, the novelty of proposed methods are very limited. 
In particular, the proposed coordinate transform only has slight effects.\n\n**A**: While our work heavily builds on prior work such as FermiNet and PauliNet, we want to emphasize that it goes well beyond those two approaches:\n\n- Our architecture is based on FermiNet and PauliNet, but in particular the embedding is an extension that is strictly more expressive than either predecessor, leading to better results.\n- Several proposed improvements, such as the local coordinate transformation and envelope initialization, are entirely novel to the best of our knowledge. The local coordinates specifically have low impact on single energies, but enable transferring a wavefunction ansatz to a different geometry or molecule [1,2], as we discuss in Sec. 6.\n- Our surprising empirical result that maximizing prior physical knowledge can actually deteriorate accuracy can be of importance not only for the quantum chemistry community, but potentially for many other domains.\n\n**W2**: Although the overall improvement and speedup are clear, for some claims it is difficult to find a reference in the main paper. E.g., \"40-70% lower energy error at 8x lower computational cost\" as stated in the abstract. It would be clearer to collect these comparisons in one place.\n\n**A**: Please see our answer to your related question and appendix C in the revised manuscript.\n\n \n\n[1]: *Ab-Initio Potential Energy Surfaces by Pairing GNNs with Neural Wave Functions*, Gao et al., ICLR 2022\n\n[2]: *Solving the electronic Schrödinger equation for multiple nuclear geometries with weight-sharing deep neural networks*, Scherbela et al., Nature Comp. Sci. 2022",
" Thank you very much for your thorough review! Regarding your specific questions and potential weaknesses:\n\n**Q1**: The units used in the equation before Eq. (1) should be clarified.\n\n**A**: We use atomic units ($e=\\hbar=m_e=a_0=1$) and have clarified this in the revised manuscript.\n\n \n\n**Q2**: The (probably well-known) Rayleigh-Ritz principle mentioned in L57 warrants a citation for the interested reader.\n\n**A**: We added a citation to the original work by Walther Ritz.\n\n \n\n**Q3**: How is $H \\psi(r)$ evaluated?\n\n**A**: The Hamiltonian $H$ applied to the wavefunction $\\psi: \\mathbb{R}^{3 \\times n_\\text{el}} \\rightarrow \\mathbb{R}$ can be evaluated as $H \\psi({r}) = -\\frac{1}{2} \\sum_{i} \\nabla^2_{r_i} \\psi({r})+ \\sum_{i>j} \\frac{1}{|{r_i - r_j}|}\\psi(r) + \\sum_{I>J}\\frac{Z_I Z_J}{|{R_I -R_J}|} \\psi({r}) - \\sum_{i,I} \\frac{Z_I}{|{r_i - R_I}|}\\psi(r)$ with $H\\psi({r}) \\in \\mathbb{R}$. In particular the kinetic energy term $\\nabla^2_{{r}_i}\\psi({r})$ is being evaluated using automatic differentiation of our wavefunction model $\\psi$.\n\n \n\n**Q4**: In Eq. (2), k seems to be, next to i, an index for an electron. In the previous equations, you use j. Why?\n\n**A**: The index $k$ does not run over electrons, but enumerates different orbitals.\nAlthough the indices $i$, $j$, and $k$ run over the same range of $1\\ldots n_\\text{el}$, we reserve $i$ and $j$ for electrons and $k$ for orbitals.\n\n \n\n**Q5**: The concrete form of Eq. (1) is not obvious. I assume that $\\lambda$ is actually a matrix of shape $n_\\text{el} \\times n_\\text{el}$?\n\n**A**: Yes, we compute the determinant of a matrix of shape $n_\\text{el} \\times n_\\text{el}$. We have update the notation in the equation and now explicitly state this fact in the manuscript.\n\n \n\n**Q6**: Below Eq. (3), you have another k. Is that related to the from before?\n\n**A**: In the original manuscript we had the index $k$ for orbitals and the bold-face vector $\\boldsymbol{k}$ denoting embedding vectors of the electron-nucleus stream.\nThese two concepts are entirely unrelated and we have renamed the embedding from $\\boldsymbol{k}$ to $\\boldsymbol{v}$ in the revised manuscript to avoid unnecessary confusion.\n\n \n\n\n**Q7**: I didn't fully understand the challenges you (and the community, I assume) are facing in Section 2.2. How are these requirements different from, let's say, an ML force field.\n\n**A**: One key challenging difference is that the wavefunction $\\psi$ has in general lower symmetry than the Hamiltonian, whereas all macroscopic observables of a force field (such as energies and forces) have the same symmetry. For example for a single atom (which has full rotational symmetry) the wavefunction is not in general rotationally invariant. See for example the cited reference [1] for more detail.\n\n \n\n**Q8**: Do you mention anywhere the total computational cost of each method?\n\n**A**: In appendix C we report the runtime per optimization epoch for every method used in our ablation study for potassium on a NVIDIA A100 GPU. This includes our proposed method as well as our implementation of FermiNet.\nA thorough analysis of the total computational cost for each non-deep-learning-based method mentioned in Fig. 4 is unfortunately out of scope for us, since we did not run these reference calculations ourselves but have taken the results from literature.\nMost of our cited conventional reference methods have much higher scaling of computational cost with the number of particles (e.g. 
$\\mathcal{O}(n_\\text{el}^7)$ for CCSD(T)) than deep-learning-based VMC.\nAdditionally, we roughly estimate in appendix C the total computational cost used in our paper at about 50k GPUh. \n\n \n\n**W1**: The main weakness is the lack of immediate relevance of the model improvements to the broader ML community.\n\n**A**: We believe that our work will be relevant for this audience for two reasons: First, integrating physical and machine-learning models is a key active area of research, with many recent publications in venues such as NeurIPS. Examples of such work are, among many others, Physics Informed Neural Networks [2,3], exoplanet detection [4], chemical potential energy surface prediction [5], and a dedicated workshop *Machine Learning and the Physical Sciences* (https://ml4physicalsciences.github.io/2022/) at NeurIPS 2022. Our surprising observation that maximizing prior physical information can deteriorate model performance will be of interest to many members of this broad ML community. \nSecondly, many recent developments in using ML for quantum chemistry [1,6,7,8] have been published in this and similar venues, suggesting that our specific work will be of interest for another substantial audience at this conference.\n\n[1]: Gao, ICLR 2022, arxiv:2110.05064\n\n[2]: Krishnapriyan, NeurIPS 2021\n\n[3]: Belbute-Peres, NeurIPS 2021\n\n[4]: Hönes, NeurIPS 2021\n\n[5]: Corzo, NeurIPS 2021\n\n[6]: arxiv 2011.07125\n\n[7]: Schütt, NeurIPS 2017\n\n[8]: Thölke, ICLR 2022\n",
" Thank you very much for your review!\nRegarding your specific questions and potential weaknesses:\n\n**Q1**: Would a study of the transition states of the butadiene system (as in the paulinet paper) be instructive?\n\n**A**: There are many interesting benchmark systems one could investigate, one of them being the butadiene transition states. For this work we have purposefully focused most of our attention on ground state equilibrium energies because most accurate benchmark data is available for these geometries and we have limited our investigation of transition states to the challenging $N_2$ system.\n\n**W1**: I found the the description of the method quite unclear, and some notation was not very clearly defined where it appeared in the text\n\n**A**: We further improved notation throughout the manuscript and added relevant citations for clarity.\n\n**W2**: one could argue the paper is \"only\" a recombination of prior works, but if we take this argument, AlexNet would have needed to be rejected as well.\n\n**A**: While we are humbled by the comparison to AlexNet, we would like to add that our work goes beyond a recombination of prior works: We introduce novel improvements (such as local coordinates and physics-based envelope initialization) and demonstrate the dangers of maximizing prior knowledge built into a model. In particular the latter point could be relevant for applications far beyond quantum chemistry.\n",
" We would like to thank all reviewers for their appreciation of our work and constructive criticism. We are particularly delighted that most reviewers were impressed by our experimental results and that the overall paper was ”very well-written and clear, even for someone with little domain expertise” (Reviewer fBuC). We have answered all questions individually and have additionally updated our manuscript\nto incorporate the improvement suggestions. Changes to the original version have been highlighted in red for the reviewers' convenience.\n\nIn addition we have incorporated important additional feedback we received during the review process: Several of the reference energies we used as non-variational best-estimate turned out to severely underestimate the true energy. Using more accurate reference calculations reveals that our results are even better for single atoms than originally presented in the manuscript. For example for the F atom our results correspond to 80% lower energy errors, while we had originally only reported 40%. We have updated Fig. 4 accordingly.",
" An improved method to approximately solve the Schroedinger equation is described, which combines ideas from the PauliNet and FermiNet papers. A variety of ablation studies are performed. good performance is achieved. Strengths:\n- good results\n- thorough study and recombination of approaches of prior works, without reinventing the wheel\n- ablation studies\n- no grandiose claims\n- good discussion of limitations\n\nWeaknesses:\n- I found the the description of the method quite unclear, and some notation was not very clearly defined where it appeared in the text \n- one could argue the paper is \"only\" a recombination of prior works, but if we take this argument, AlexNet would have needed to be rejected as well. Would a study of the transition states of the butadiene system (as in the paulinet paper) be instructive?\n\n\n yes",
" The authors present and analyze a list of improvements that significantly increase accuracy and reduce the computational cost of variational Monte Carlo methods based on deep learning. # Strengths\n\n- Overall, the paper is very well-written and clear, even for someone with little domain expertise.\n- The introduction is helpful and allows non-experts to onboard (to the extent one can reasonably expect).\n- The key contributions are laid out.\n- Differences and similarities to previous methods (especially PauliNet) are exhibited.\n- Thoughtful experimental design.\n- Impressive experimental results.\n\n# Weaknesses\n\n- A (minor) weakness is the lack of immediate relevance of the model improvements to the broader ML community. - The units used in the equation before Eq. (1) should be clarified.\n- The (probably well-known) Rayleigh-Ritz principle mentioned in L57 warrants a citation for the interested reader.\n- How is $H \\psi(r)$ evaluated?\n- In Eq. (2), $k$ seems to be, next to $i$, an index for an electron. In the previous equations, you use $j$. Why?\n- The concrete form of Eq. (1) is not obvious. I assume that $\\lambda$ is actually a matrix of shape $n_{el} \\times n_{el}$?\n- Below Eq. (3), you have another $k$. Is that related to the $k$ from before?\n- I didn't fully understand the challenges you (and the community, I assume) are facing in Section 2.2. How are these requirements different from, let's say, an ML force field.\n- Do you mention anywhere the total computational cost of each method? The authors discuss the limitations of their work.",
" This paper focuses on empirical improvements of variational quantum Monte Carlo estimation of molecular ground state energy. In particular, the authors focus on evaluating the various design choices in the FermiNet and the PauliNet, including dense determinant, hyper parameter tuning, the choice of envelope function and the pretraining strategies. On top of these, the authors propose to use a SchNet-like embedding blocks and an input feature transformation to make the input feature to make the feature to be invariant, local and expressive. \n\nThe authors empirically evaluate all these design choices on molecules with up to ~40 electrons and obtain a model which improves upon the existing methods both in terms of accuracy and speed. The authors also conduct ablation studies by removing each design choice one by one. \n\nFrom the experiments, although the factors are not totally independent, it seems the most improvements come from the use of dense determinant, hyper parameters tuning, and the SchNet-like embedding. The proposed feature transformation only has minor effects. **Strength:** \n\nThe paper is well written and easy to follow. The experimental results are good and the ablation studies are extensive and well presented. The effects of different design choices can be easily identified. Since PauliNet and FermiNet differ in many aspects while both give high quality quantum Monte Carlo results, it is interesting to compare and clarify the effects of these differences. Making improvements by fusing these two methods is well motivated.\n\n**Weakness:** \n* Since the paper essentially integrates FermiNet and PauliNet, the novelty of proposed methods are very limited. In particular, the proposed coordinate transform only has sight effects. \n* Although the overall improvement and speedup are clear, for some claims it is difficult to find reference in the main paper. E.g., \"40-70% lower energy error at 8x lower computational cost\" as stated in the abstract. It would be clearer to collect these comparisons in one place. * Some results in Figure 4 are lower than the reference energy, especially for larger molecules where the differences are significant (>100mHa) compared to the chemical accuracy (~2mHa). To which extent are we certain that the QMC results are correct? Although the variational principe could grantee the variational energy to be upper bound but doesn't it assume the sampling to be accurate in the first place? For example, in the Figure 3 of the FermiNet paper [Pfau et al. 2020], during VMC training, the energy could overshot when the MCMC step size is not large enough.\n\n* Under which setting the claimed \"40-70% lower energy error at 8x lower computational cost\" is obtained? As shown in Table 5, the runtime per epoch improves from ~6s to 3.6s. How is the convergence time defined?\n\n[Pfau et al. 2020] Ab initio solution of the many-electron Schrödinger equation with deep neural networks. Physical Review Research, 2020. \n\n------\n**After rebuttal:**\n\nThe authors' response clarifies the concern about the uncertainty and the speedup. Overall I think the proposed method is well motivated and results are convincing. Hence I will increase my score from 5 to 7 and will recommend acceptance of this paper. The authors state the limitations adequately.",
" This paper proposes a deep-learning architecture to solve ground state of many-electron systems. The proposed model combines PauliNet-like neural network and envelope function in FermiNet, with additional improvements on embedding, input feature and parameter initialization of previous methods. Experimental results show that the proposed method can reduce errors by 40-70% with 4-8x lower computational cost. This paper also establish a new benchmark of several deep learning based and classical methods over a number of different atoms and molecules. Authors also research into the reason for improvements and find out that including too much physical prior knowledge can deteriorate the accuracy. **Strengths:**\n- *originality:* Related works are adequately cited and it is clear to see the difference from these works. This work is a novel combination of PauliNet and FermiNet along with additional improvements. \n- *quality:* This work provides solid experiments to prove the accuracy and efficiency improvements. Also, this work systematically breaks down which changes cause the improvements, and gives insights that including physical prior knowledge can hinder the optimization. \n- *clarity*: This paper is overall well written and easy to follow. The method and results are clearly presented.\n- *significance:* This work achieves the best results for the numerical solution of the electronic Schrodinger equation and establishes a new benchmark for current most accurate methods on a number of molecules and atoms. The proposed method is 4-8x faster than FermiNet in terms of optimization. \n\n**Weaknesses:**\nThis paper demonstrates great experimental results, but lacking the necessary theoretical explanation to address the accuracy improvement under reduced computational cost. Section 4 discussed in detail the improvements obtained by each individual change and their combined effects, but the discussion is still restricted within the experimental point of view. Adding theoretical explanation on why the proposed method can obtain accuracy better than the classical as well as existing deep learning method can be beneficial. \n Please see weaknesses. Limitation of this work is well addressed."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
4,
3
] | [
"bsNygndN_LAr",
"9loB3M4kwBrR",
"BCKGbpUnQ3W",
"dtch0ur1HUt",
"-pIxqKcoQTp",
"XGiFILiqHrh",
"TqeKdUiehpK",
"-pIxqKcoQTp",
"c1MbEXRO0Qm",
"VtjKOqcQOFf",
"jbUaHN_lp07",
"9EgffXp72yK",
"nips_2022_nX-gReQ0OT",
"nips_2022_nX-gReQ0OT",
"nips_2022_nX-gReQ0OT",
"nips_2022_nX-gReQ0OT",
"nips_2022_nX-gReQ0OT"
] |
nips_2022_7hhH95QKKDX | Adversarial Attack on Attackers: Post-Process to Mitigate Black-Box Score-Based Query Attacks | The score-based query attacks (SQAs) pose practical threats to deep neural networks by crafting adversarial perturbations within dozens of queries, only using the model's output scores. Nonetheless, we note that if the loss trend of the outputs is slightly perturbed, SQAs could be easily misled and thereby become much less effective. Following this idea, we propose a novel defense, namely Adversarial Attack on Attackers (AAA), to confound SQAs towards incorrect attack directions by slightly modifying the output logits. In this way, (1) SQAs are prevented regardless of the model's worst-case robustness; (2) the original model predictions are hardly changed, i.e., no degradation on clean accuracy; (3) the calibration of confidence scores can be improved simultaneously. Extensive experiments are provided to verify the above advantages. For example, by setting $\ell_\infty=8/255$ on CIFAR-10, our proposed AAA helps WideResNet-28 secure 80.59% accuracy under Square attack (2500 queries), while the best prior defense (i.e., adversarial training) only attains 67.44%. Since AAA attacks SQA's general greedy strategy, such advantages of AAA over 8 defenses can be consistently observed on 8 CIFAR-10/ImageNet models under 6 SQAs, using different attack targets, bounds, norms, losses, and strategies. Moreover, AAA calibrates better without hurting the accuracy. Our code is available at https://github.com/Sizhe-Chen/AAA. | Accept | This paper proposes a defense against score-based black-box attacks by post-processing the output probabilities to misguide the attacker. The method enjoys several advantages such as not reducing test-time accuracy or increasing the train-/test-time cost, improving calibration for the model, and superior performance under black-box attack compared to prior work. The authors also included additional experiments during the discussion phase that show effectiveness against adaptive attacks.
One weakness is that the method does not improve the robustness of the underlying model, which hence remains susceptible to surrogate-model and/or hard-label attacks. However, most reviewers consider this weakness minor and agree that the paper’s contribution is significant enough for publication. AC therefore recommends acceptance for publication at NeurIPS.
| train | [
"NQqbKli24Z",
"aLkLjiRw_yd",
"KtBKLo7HGQ_",
"hqq5u6eS0R8",
"cacoukOUvw7",
"itSr8-BUvd",
"-NwL_TnNb8QC",
"jiKSENTHhmX",
"1bFRe2KvfAc",
"S5dn0Dj7jZ",
"zn1sb-CWWQS",
"yVnwY3jIY5v",
"0Z1wxB5KGXr",
"H9oqtwVcw_",
"2TwoWG0-FK"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Program Chairs, Area Chairs, and Reviewers,\n\nThanks for the constructive comments and helpful discussions, we have carefully modified the manuscript according to the reviewers’ suggestions.\n\n- Descriptions on AAA-sine for adaptive attacks **(Line 53-56, 62-64, 162-183, 216-219, 312-332)**\n- Discussions on the motivation of Eq (4)(5)(6) **(Line 162-195)**\n- Study of time consumption **(Line 302-307)**\n- Clarification of notations **(changed most subscripts, Line 203)**\n- Moving additional defense results and hyper-parameter study to Appendix **(Line 650-676)**\n\nSince our main contribution lies in the philosophy to confound real-world attackers by a misleading but slightly-modified loss trend, rather than the specific strategy, the new version does not contain substantial changes to our claim but provides more analysis, discussions, and clarifications thanks to the reviewer’s insightful feedback.\n\nCurrently, we have addressed all concerns (e.g., adaptive attacks, time consumption) from Reviewer uWUQ and Reviewer 78Et thanks to their active feedback and careful re-evaluation. We are enthusiastically eager to discuss with Reviewer cULW and Reviewer AAQp since we are not allowed to respond to your valuable opinions tomorrow.\n\nBest wishes,\n\nAnonymous author(s) of Paper2763",
" Dear Reviewer cULW,\n\nThanks for your detailed suggestions again. We understand that the discussion period is short. And it would be time-consuming for you to inspect the response in detail. Thus, we summarize our modifications of the paper in your advice again, hoping to receive your feedback.\n\n**R1.1 Motivation of Eq (4)(5)**\n\nWe add a significant number of paragraphs (Line 162-183) to clarify the formulation and necessity of Eq (4)(5). Motivated by you, it is interesting to see that it is necessary to divide the loss values into intervals by Eq (4), but there are various choices to fool SQAs in an interval as Eq (5) shows.\n\n**R1.2 Eq (4) misses abs when $l_0=0$**\n\nWe have made the equation more rigorous in Line 169 thanks to your suggestions.\n\n**R1.3 Discussions of Eq (6) on formulation, optimization, and runtime**\n\nLine 189-195 and Line 302-307 are additional discussions on Eq (6) besides our initial response. We sincerely thank you for making the paper more solid significantly.\n\n**R1.4 Explanations on accuracy increase by AAA**\n\nMotivated by your, the interesting discovery that it comes from randomness has been mentioned in Line 259.\n\nWe sincerely hope you could check your great help in our work and re-evaluate them.\n\nBest wishes,\n\nAnonymous author(s) of Paper2763",
" We would like to express our sincerest thanks to Reviewer uWUQ for the efforts to re-evaluate and (again) improve our work. Besides, we are grateful to see Reviewer uWUQ understand that our contributions lie beyond defending against decision-based and transfer-based attacks.\n\n**R3.1 Revision of notations**\n\nIt would be undoubtedly clearer to distinguish the loss attractor $t$ from the “target” subscript $l_t$. Following your constructive suggestions, we have modified the notations with specifications on variables to avoid future confusion.\n\nConcretely, we denote the period as $\\tau$, and spell out $l_a, l_t, l_0, z_o, p_t$ as $l_\\mathrm{atr}, l_\\mathrm{trg}, l_\\mathrm{org}, z_\\mathrm{org}, p_\\mathrm{trg}$, respectively in the revised version. Besides, we also note in Alg. 1 that only $T, \\tau, \\alpha, \\beta$, and the number of optimization iterations $\\kappa$ are constant hyper-parameters.\n\nWe hope that readers, especially those interested in our specific method, could easily grasp our notations.\n\n\n**R3.2 Highlighting the sine design of AAA earlier**\n\nThanks for your careful thought. Indeed, our contribution lies in the philosophy to confound attackers by slight output perturbations, which requires a periodic design. Thus, the specific strategy to control the in-interval loss trend indeed has many choices, e.g., the linear and the sin curves.\n\nInspired by the discussions with you, we plan to put the emphasis on the periodic design and include different specific functions, especially AAA-sine. We think the logic of your suggestion is more nature and the modification is not too much. We will try our best to prepare a rigorous revised version. ",
" We are very grateful for your efforts in re-considering our sine design and adaptive attacks. We would keep improving our manuscript to involve your insights.",
" Thank you for your detailed rebuttal. In particular, the introduction of the sine function is very interesting and I am glad to see results on the bidirectional adaptive attack idea, both with the larger drop in performance initially and with the quite modest drop in performance after adding the sine function. The addition of these results make the paper stronger and addresses my adaptive attack concerns. The runtime results are also very encouraging. \n\nI agree with the updated opinion of reviewer uWUQ and think this is a good submission with these changes. I have thus accordingly updated my score to 7 as well.",
" I appreciate the detailed clarifications, and all my concerns are clarified.\n\n**R3.1 Misunderstanding.**\n\nThank you for the clarification, it makes more sense now. I believe the misunderstanding came from misreading $l_0$ as a constant and $t$ as the step variable in Eq. 4 (since $t$ was redefined and used as a subscript in Eq. 5 and all these subscripts in Eq. 5 quite looked like losses at different steps).\n\nI would recommend revising the notations to avoid this confusion. For example, use $\\tau$ for the period, spell out $l_\\mathrm{a}$ and $l_\\mathrm{t}$ as $l_\\mathrm{attract}$ and $l_\\mathrm{target}$ (or something else). It might also be good to emphasize which symbols are variables (wrt the input), it is very easy (at least for me) to misinterpret $l_0$ and $z_0$ as global constants, rather than variables that change corresponding to the query sequence.\n\nFor another statement in my initial review, *\"extract the sequence of queries coming from the designated attacker\"*, it is suggested to clarify that the proposed defense does not need to have different actions for benign and adversarial queries, as it won't affect benign queries much.\n\n**R3.2 Adaptive attacks.**\n\nThe proposed sine variant is insightful and convincing. It should withstand a reasonable level of adaptive attacks to some extent. Given its higher robustness, I would recommend highlighting this variant earlier in the paper. I personally think that this variant is more mature than the original defense (and should have been the original proposal).\n\n**Summary**\n\nGiven the above clarifications, I am switching my score from 2 to 7.\n\nThe strengths were in my original review. My initial negative score was mainly due to a major misunderstanding, as explained above. I am also not too worried about decision-based and transfer-based attacks (raised by other reviewers), as they are not claimed as contributions, and I am satisfied with the current contribution.",
" Thanks for highly appreciating the originality and experiments of our work. We address your concerns as below:\n\n**R3.1 AAA seems to unrealistically require knowledge of attack iterations, seeing the t in Eq (4)**\n\nIt is a misunderstanding. AAA does NOT need the attack running iterations. The t in Eq (4) stands for the period of loss intervals, a constant hyper-parameter of AAA that defenders set as explained on **Line 169-171 (original submission)**, and it is independent and irrelevant to the attack process. \n\nAfter receiving the query image, AAA uses the model to conduct normal inference and calculates the original loss value $l_0$ by Eq (2). Then AAA uses the pre-set hyper-parameter t to decide the target loss value $l_t$ by Eq (4) and optimizeS as Eq (6). Since t simply divides the original loss values into intervals [0, t], [t, 2t], [2t,3t], … and it is obvious that defenders do not need the attack running iterations to calculate the loss, AAA requires no impractical knowledge of attackers. We hope that making the notation t more clear in the **revised version (Line 172)** avoids future confusion.\n\nGiven the analysis above, defenders do not peek at the attacker's inner state. And vice versa, attackers do not know about the defender's strategy, forming the realistic double-blind real-world threat model.\n\n\n**R3.2 AAA on defending adaptive attacks, e.g., search following an increased margin loss**\n\nDesigning adaptive attacks, though possible, is costly and easy to bypass in real-world scenarios. Because here, according to **R3.1**, attackers and defenders are in a double-blind relationship, i.e., attackers do not know the model, including the defense strategy.\n\nThus, the discovery process of defense strategies for developing adaptive attacks would require additional queries and creative deductions. For example, attackers have to first query lots of times before observing the rare exceptional loss change. Then deduce the whole unknown reversing loss strategy and its hyper-parameters with great manual efforts. Only after such endeavor could SQA attackers decide to go opposite as you creatively mention.\n\nMoreover, attackers are also foolable if they go in case of increased loss or dramatically decreased loss (to jump out of an interval). Because we could simply smooth the transition between intervals, e.g., by using a sine function to design the target loss following the same idea of confounding attackers. In this way, neither direction of search is likely to figure out the defense strategy, seeing the results below. \n\nTable A: AAA performance under adaptive attacks (100 queries)\n| Defense methods | None | AAA | AAA-sine |\n| :------------ | :-----------: | :-----------: | :-----------: | \n| Square | 39.38 | 81.36 | 79.78 |\n| opposite Square | 94.78 | 57.31 | 75.23 |\n\nThe opposite search weakens AAA from 81.36\\% to 57.31\\%, but motivated by you, we could use a sine-like function to let the loss periodically ascend and descend along the original attack direction. In this way, the defense performance for the regular attack is kept as 79.78\\% while the performance under the opposite search is improved to 75.23\\%.\n\nOur main contribution lies in the philosophy to confound real-world attackers by a misleading but slightly-modified loss trend, rather than the specific strategy. And therefore, there are lots of other designs, and defenders could flexibly switch between them to mislead the attacker's guessing. 
Thus, AAA is effective even if attackers try to develop adaptive strategies. Relevant discussions have been added to the **revised submission (Line 308-328)** thanks to your insightful comments.\n\n\n\n**R3.3 Clarity of equations, notations, and SQA backgrounds**\n\nThanks a lot for your careful reading. In the revised version, we have replaced “clean sample x” with “sample x” for Eq (1) **(Line 117)**, simplified Eq (2) by setting the ground-truth label from the defenders' view **(Line 120)**, explained z0 the first time it appears **(Line 171)**, and provided introductions to several specific SQAs **(Line 220-224)**. We are very grateful that you are willing to conduct such insightful and detailed discussions and consider raising the score.",
" Thanks for appreciating the novelty of our work. Before our response, we would like to thank reviewer AAQp for highlighting the defense against all unseen attacks. Indeed, making DNNs truly robust in the worst case is significant for the community to guarantee the non-existence of adversarial examples and make DNNs explainable and interpretable from a theoretical aspect. From an application aspect, similarly, studying defenses feasible only in real-world scenarios also makes a difference to secure DNNs, seeing a recent defense especially against SQAs [a]. We address other concerns as below:\n\n\n**R4.1 Three unrealistic constraints of SQAs make especially mitigating SQAs a weak motivation**\n\nLet us consider that in real-world applications, attackers and defenders are in a double-blind relationship. Thus, attackers naturally have real-world constraints as widely highlighted [b,c,d].\n\n(1) Target model is unobtainable because revealing model details reduces the commercial competence of model owners [e]. (2) Substitute model is also unobtainable [b,c,d] because it needs the target model’s training data, and leaking user data violates the company's privacy commitments. Attackers can indeed train a substitute model by querying the target model, but it would require tens of thousands of queries to the target model [f]. (3) Model owners would reveal its prediction confidence, seeing Google Cloud Vision API, Baidu EasyDL, etc., because model uncertainty information is crucial for users to make downstream decisions [g].\n\nWe understand that objective real-world constraints on attackers are not commonly seen by defenders because defending against SQAs is rarely explored as you mention. But that is exactly why our new take on such a field is valuable as also acknowledged by Reviewer 78Et. \n\n\n**R4.2 AAA defense under decision-based attacks (DQAs)**\n\nYes, this is one limitation of our method (discussed on **Line 334 in the original submission**), i.e., AAA cannot mitigate DQAs because it does not change the model’s decision. However, that does not hurdle the significance of defending against the much more threatening SQAs.\n\nAs you mention, attackers could either use SQAs or decision-based attacks (DQAs) in real-world scenarios. With SQAs, attackers could greatly decrease the model’s performance within dozens of queries [b,c,d]. Thus, defending especially against such threatening attacks is well motivated as acknowledged by Reviewer uWUQ. \n\nWith DQAs, however, model accuracy could not be perceptibly influenced within thousands of queries [h,i,j] due to the lack of model scores, as we demonstrate in **Appendix A in the original submission**. In this regard, especially defending against SQAs is meaningful despite AAA's failure in mitigating DQAs.\n\n\n**R4.3 AAA does not significantly hurdle SQAs**\n\nWe respectfully disagree with Reviewer AAQp on the insufficiency of AAA's performance. According to Fig 3, AAA prevents the adversarial accuracy from dropping quite effectively, doubling the performance of AT and tripling that for RND. Table 2 also supports the non-trivial protection ability of AAA against 8 defenses. Although in rare (2 out of 30 thanks to your careful reading) cases AAA does not top the performance, AAA significantly outperforms baselines in accuracy (no decrease), calibration (improvement), and defense costs (no cost on training and negligible cost on testing), seeing Table 1 and Table 2. 
Such effective defenses against SQAs have not been proposed before, to the best of our knowledge. The good performance of AAA is also acknowledged by all other reviewers, e.g., \"AAA shows strong performance on a variety of settings\" by **Reviewer 78Et**. \n\n**References**\n\n[a] Z. Qin, Y. Fan, H. Zha, and B. Wu, “Random noise defense against query-based black-box attacks,” NeurIPS 2021.\n\n[b] C. Guo, J. Gardner, Y. You, A. G. Wilson, and K. Weinberger, “Simple black-box adversarial attacks,” ICML 2019.\n\n[c] M. Andriushchenko, F. Croce, N. Flammarion, and M. Hein, “Square attack: A query-efficient black-box adversarial attack via random search,” ECCV 2020.\n\n[d] A. Al-Dujaili and U.-M. O’Reilly, “Sign bits are all you need for black-box attacks,” ICLR 2019.\n\n[e] Y. Li, L. Zhu, X. Jia, et al., “Defending against model stealing via verifying embedded external features,” AAAI 2022.\n\n[f] M. Zhou et al., “DaST: Data-free substitute training for adversarial attacks,” CVPR 2020.\n\n[g] C. Guo, G. Pleiss, Y. Sun, and K. Q. Weinberger, “On calibration of modern neural networks,” ICML 2017.\n\n[h] W. Brendel, J. Rauber, and M. Bethge, “Decision-based adversarial attacks: Reliable attacks against black-box machine learning models,” ICLR 2018.\n\n[i] V. Q. Vo, E. Abbasnejad, and D. C. Ranasinghe, “RamBoAttack: A robust query efficient deep neural network decision exploit,” NDSS 2022.\n\n[j] M. Cheng, S. Singh, P. H. Chen, P.-Y. Chen, S. Liu, and C.-J. Hsieh, “Sign-OPT: A query-efficient hard-label adversarial attack,” ICLR 2019.",
" Thanks for appreciating our new direction in defending against realistic SQAs. We address your concerns as below:\n\n\n**R2.1 AAA on defending against adaptive attacks, e.g., bidirectional search to jump out of an interval**\n\nDesigning adaptive attacks, though possible, is costly and easy to bypass in real-world scenarios. Because here, attackers and defenders are in a double-blind relationship, i.e., attackers do not know the model, including the defense strategy.\n\nThus, the discovery process of defense strategies for developing adaptive attacks would require additional queries and creative deductions. For example, the interesting bidirectional attack strategy you propose uses extra queries to probe the exceptional loss change, which is rare if attackers act in a common way. After that, attackers also have to devote considerable manual efforts to figure out what defenders actually do. In this regard, AAA imposes a great hurdle to adaptive attackers.\n\n\nMoreover, attackers are also foolable if they base their action on the signal of jumping out of an interval. Because such a signal may not be sensible if we simply smooth the transition between intervals, e.g., by using a sine function to design the target loss following the same idea of confounding attackers. In this way, neither direction of search is likely to figure out the defense strategy, seeing the results below. \n\nTable A: AAA performance under adaptive attacks (100 queries)\n| Defense methods | None | AAA | AAA-sine |\n| :------------ | :-----------: | :-----------: | :-----------: | \n| Square | 39.38 | 81.36 | 79.78 |\n| bidirectional Square | 57.09 | 62.91 | 75.36 |\n\nThe bidirectional search weakens AAA from 81.36\\% to 62.91\\%, but motivated by you, we could simply use a sine-like function to let the loss ascend and descend along the original attack direction. In this way, the defense performance for the regular attack is kept at 79.78\\% while the performance under the bidirectional search is improved to 75.36\\%.\n\nOur main contribution lies in the philosophy to confound real-world attackers by a misleading but slightly-modified loss trend, rather than the specific strategy. And therefore, there are lots of other designs, and defenders could flexibly switch between them to mislead the attacker's guessing. Thus, AAA is effective even if attackers try to develop adaptive strategies. Relevant discussions have been added to the **revised submission (Line 308-328)** thanks to your insightful comments.\n\n\n**R2.2 AAA runtime on device**\n\nThanks for bringing in this concern. We report AAA’s actual runtime by looking at the balance between the amount of computation (number of optimization iterations) and defense performance in CIFAR-10 experiments.\n\nTable B: Influence of the optimization times in AAA (100-query Square attack on CIFAR-10)\n| No. iter | 0 | 20 | 40 | 60 | 80 | 100 |\n| :------------ | :-----------: | :-----------: | :-----------: | :-----------: | :-----------: | :-----------: | \n| ECE | 3.52 | 2.87 | 2.81 | 2.66 | 2.53 | 2.53 |\n| Adv-Acc | 39.38 | 79.29\t| 80.92| 81.37\t| 81.28\t| 81.36 \n| inference time per sample (ms) |1.016 | 1.034 | 1.088 | 1.099 | 1.143\t| 1.163 \n\nAs we can see, optimizing low-dimensional logits is not costly, which thus has already become a common practice in model calibration [32, 57]. It only consumes 1.5s to optimize 10000 logits for 100 iterations (AAA’s default setting) in an NVIDIA Geforce RTX 2080Ti. 
And good defense and calibration results are also obtainable with 60-80 iterations, which costs even less time. Since optimizing logits is independent of model size, model owners can determine AAA's runtime precisely. \n\nRelevant content has been added to the **revised submission on Line 298-303**. We are very grateful that you engage in such insightful discussions and consider raising the score.",
" Thanks for your insightful comments and appreciation of the novelty. We address your concerns as below:\n\n\n**R1.1 Motivation of Eq (4)(5)**\n\nOur design is motivated by two goals of real-world defenses: fooling attackers and serving users. The former requires reversing the loss trend along the attack direction while the latter demands slight changes in the loss value. More importantly, we found these two (seemingly contradictory) goals can be reconciled if we instantiate them locally, i.e., first divide loss values into intervals by Eq (4), and then reverse the loss trend in each interval by Eq (5). Thus, Eq (4) is necessary to ensure slight output perturbations.\n\nOur main contribution lies in the philosophy to confound real-world attackers by a misleading but slightly-modified loss trend, rather than the specific strategy. And therefore, there are indeed lots of designs besides Eq (5) to fool SQAs in each interval as you wisely guess. For example, we could map original loss to target loss in a sine way. Since attackers and defenders are double-blind without knowledge about each other in real-world scenarios, either design could effectively fool SQAs (even adaptive attacks) well as discussed on **R2.1, R3.2, and the revised version.**\n\nAs for Eq (5), it maps a large original loss $l_0>l_a$ to a small target loss $l_t<l_a$, and vice versa. In this way, when the original loss decreases from $l_a + t/2$ to $l_a - t/2$, AAA outputs loss that increases from $l_a - \\alpha * t/2$ to $l_a + \\alpha * t/2$. \n\n\n**R1.2 Eq (4) misses abs when $l_0=0$**\n\nThanks for your insightful thinking. Although $l_0=0$ is only a corner case since $l_0 \\geq 0$, it would be more rigorous to add an abs to Eq (4), i.e., $l _ { a } = ( \\operatorname { floor } ( l _ { 0 } / t ) + 1 / 2 ) \\times t$ in the revised version.\n\n\n**R1.3 Discussions of Eq (6) on formulation, optimization, and runtime**\n\nEq (6) is a straightforward design to fulfill the two goals that we discussed in **R1.1**. First, we perturb the logits to have a margin loss $L_u(z)$ close to the target value $l_t$, forming the reversed loss curve to fool SQAs. Accordingly, the first term in Eq (6) is formulated to minimize the distance between $L_u(z)$ and $l_t$. Second, we want the modified logits to output confidence ($\\sigma (z)$, the maximum probability after softmax) close to the calibrated one $p_t$ so that users get accurate confidence scores. In this regard, the second term in Eq (6) is designed to minimize the distance between $\\sigma (z)$ and $p_t$. The $\\beta$ balances the optimization between the above two goals.\n\nDespite its simplicity, Eq (6) has to be solved by optimization because the exponential operation in softmax makes Eq (6) a transcendental equation without closed-form solutions. Luckily, optimizing low-dimensional logits is not costly, which has already become a common practice in model calibration [32, 57]. Please see the results below.\n\nTable A: Influence of the optimization times in AAA (100-query Square attack on CIFAR-10)\n| No. 
iter | 0 | 20 | 40 | 60 | 80 | 100 |\n| :------------ | :-----------: | :-----------: | :-----------: | :-----------: | :-----------: | :-----------: | \n| ECE | 3.52 | 2.87 | 2.81 | 2.66 | 2.53 | 2.53 |\n| Adv-Acc | 39.38 | 79.29\t| 80.92| 81.37\t| 81.28\t| 81.36 \n| inference time per sample (ms) |1.016 | 1.034 | 1.088 | 1.099 | 1.143\t| 1.163 \n\nIf we choose the default optimization iterations (100 as shown on **Line 213 in the original submission**), it only consumes 1.5s to optimize 10000 logits in an NVIDIA Geforce RTX 2080Ti. And good defense and calibration results could also be obtained by 60-80 iterations, which costs even less time. Since the time for optimizing logits is independent of model size, model owners could determine its runtime very precisely. A study on AAA's runtime is on the **new version**.\n\n**R1.4 Explanations on accuracy increase by AAA**\n\nThanks for pointing out this phenomenon. We re-run experiments multiple times with different seeds (for optimization) and hyper-parameters, and find that AAA’s accuracy cannot stably outperforms the original baselines (e.g., usually oscillating very slightly above or below). Therefore, we conclude that such a small difference comes from randomness and may not worth further discussion.\n\n**R1.5 Defense results by calibration methods**\n\nThanks for bringing up this comparison. Note that calibration is supposed to map $p_1 > p_2$ to $p'_1 > p'_2$ without reversing the loss trend along the attack direction. Thus, attackers can still steal the gradient and attack, seeing our additional test of a set of standard calibration baselines below.\n\nTable B: SQA Defense performance by calibration methods (30-query Square attack on CIFAR-10)\n| Calibration methods | None | temperature scaling [32] | histogram binning [57] |\n| :------------ | :-----------: | :-----------: | :-----------: | \n| ECE | 3.52 | 2.02 | 0.78 |\n| Adv-Acc | 68.85 | 66.81 | 68.26 |",
" Dear Program Chairs, Area Chairs, and Reviewers,\n\nFirst of all, we would like to thank you for your time, constructive critiques, and valuable suggestions, which greatly help us improve the work. We are also grateful that reviewers unanimously regard our work as novel and interesting. The concerns are mainly focused on attackers’ and defenders' knowledge of each other. Below we would like to first respond to issues concerning our threat model in general.\n\nIn real-world scenarios we focus on, attackers do not access the defender's model gradients, training data, and defense strategies. Therefore, the constraints on attackers (inaccessibility to model details and substitute models) mentioned by Reviewer AAQP are realistic. Moreover, without knowing the defender's strategy, attackers cannot easily design adaptive attacks as Reviewer 78Et and uWUQ bring in. To guess the defender's strategy, additional queries and creative deduction are required. And such guessing could be greatly complicated in an easy way for defenders (R1.1, R2.1, R3.2). In this regard, the great hurdle AAA imposes on even adaptive attackers verifies the significance of our work.\n\nDefenders also do not know whether a query is malicious, what the attack method is, and when the attack has proceeded. Thus, accurate decisions are preferable, though preserving decisions makes AAA not able to mitigate decision-based attacks as Reviewer AAQP guesses, it does not reduce the significance of our defense against the more threatening score-based attacks (R4.2). Additionally, AAA does not require knowledge of the attack iterations as Reviewer uWUQ misunderstood (R3.1).\n\nMore in-depth analysis concerning the threat model (R2.1, R3.2, R4.1), AAA time-efficiency (R1.3, R2.2), and other issues have been added in the revised submission with red markers. We sincerely look forward to further discussions with the reviewers.\n\nBest wishes,\n\nAnonymous author(s) of Paper2763",
" The present work introduces a novel defense to confound the score-based query attacks (SQAs). Its main novelty includes three parts: (1) an effective and user-friendly post-processing module; (2) a novel adversarial attack on attackers (AAA) defense by slightly deviating the output scores; (3) AAA beats other prior defenses in terms of the effectiveness and robustness. Due to the impractical cost of prior defenses, AAA designs the post-processing method to improve efficiency and exerts a significant impact on accuracy. Strengths:\n+ The idea is sound and interesting. Instead of enhancing the robustness of deep learning models, this proposed method mislead and confuse the attackers to protect the DNN.\n+ Meanwhile, the method costs a little to improve efficiency, which makes defense more practical in real-world settings.\n+ the general design of the experiments is meaningful in helping to understand the role of AAA in outperforming the considered defense approaches.\n\nWeaknesses:\n- Some operations for example, the determination of specific operation, like Equation (4) is short of discussion. The detail is as follows. 1. Interesting, the clean model performance with AAA seems a little higher than original clean model, for example, 94.84 vs 94.78 in CIFAR10. Can you provide more analysis about this? Similar results can also be found in baseline DENT.\n\n2. In Equation (4), does it missing abs for ceil(·)? when l_0=0, ceil(l_0/t=0)-1/2 = -1/2? and the result is -2?\n\n3. The idea of attacking on attacker is straightforward, what is the motivation in designing the specific operated shown in Equation (4) and (5). For example, why push l_t even lower than l_a by a margin of (l_0-l_a), and what is the detailed motivation in designing equation (4). I am just curious if there are any other operations following the same idea achieves better defense results.\n\n4. Have you studied the proposed method without optimization in Equation (6)?\n\n5. The adjusted loss is closely related with confidence calibration. Can you show some defense results, in comparison with some confidence calibration methods? The discussion about optimizing Equation 6 seems missing. What is the number of optimization steps? The time consumed? etc. What are the results without optimization Equation (6).",
" This paper proposes a defense called Adversarial Attack on Attackers (AAA) that is designed specifically towards mitigating score-based query attacks (SQAs). AAA is a post-processing attack that attempts to modify the logits loss curve to locally point in the incorrect attack direction in a periodic fashion, which steers SQAs away from a true adversarial attack. As such, AAA takes advantage of typical SQA behavior of sampling nearby loss value changes to find a local direction to optimize in and steers them in the wrong way. AAA could then be added on to other models as a way to deter and prevent these attacks in more real-world scenarios where such black-box access is present. The authors evaluate AAA over several baseline defenses on CIFAR-10 and ImageNet on various SQAs, finding AAA to be effective at improving robustness. Strengths\n- Interesting new defense approach that targets a realistic deployment of ML models by attacking the query process\n- Seems like a lightweight defense with a low cost\n- Preserves natural accuracy\n\nWeaknesses\n- Unclear if SQAs could possibly be adapted to re-attack AAA\n- Unknown what the exact runtime cost is\n- Does not increase underlying model robustness This paper provides an interesting new take on adversarial robustness. I like the direction of targeting realistic attacks such as SQAs at a higher level than simply robustifying the model to all possible attack inputs. Indeed, such a post-processing approach such as AAA to deter attacks that that level is appealing and I believe that further research with this philosophy would be interesting. The experiments presented provide a useful understanding of the characteristics of AAA and show strong performance on a variety of settings. \n\nI have two main concerns. The primary one is whether or not SQAs could possibly be adapted to attack AAA. In Section 5.5 “Hyper-parameters of AAA” and Figure 4, it is clear that the choice of the attractor interval t can greatly influence the adversarial accuracy as the attack success rate would seemingly come down to the attack’s ability to jump out of an interval. While this is an interesting study to see how the choice of t impacts the performance of a static SQA, could SQAs be modified to look at a wider interval? \n\nFor example, perhaps an SQA could be modified to maintain two search branches, one that goes in the default direction implied by the loss changes and another that continually goes in the opposite for a limited time, and then if the one that goes in the opposite direction eventually drops by jumping out of the interval, an attacker could determine that this was actually the correct direction to go. Since the underlying model is not inherently more robust, a process that finds a jump may still be able to eventually break the model, albeit with more queries.\n\nSecondarily, how much does it cost to run AAA? It would appear to be a fairly lightweight process, but do you know how much slower this would make the model?\n\nGiven this, I still like the general direction of the paper and if my concerns are addressed I will consider raising my score.\n\n-----------------------------------------------\n\nIn the rebuttal, the authors provided results on the proposed bidirectional adaptive attack as well as a new sine function to use, as well as runtime results, alleviating these concerns. The authors have adequately addressed limitations and impact. 
The authors discuss AAA in the context of real-world systems such as autonomous driving where AAA could be added with pre-trained models to increase the reliability of such systems. The authors also acknowledge the limitation that AAA is designed to target SQAs specifically, which is a useful defense approach, but does mean that worst-case robustness under white-box settings is not improved.",
" This paper proposes a post-processing defense against score-based black-box evasion attacks. This defense modifies the original logits so that the trend of the margin loss is reversed in each small interval of the attack, but the overall trend remains unchanged to preserve accurate prediction confidence. As a result, score-based attacks using the margin loss will optimize towards the opposite (hence non-adversarial) direction. Meanwhile, this defense does not hurt the model’s benign performance, and the calibration performance is preserved by solving a joint optimization problem. Experiments show that the proposed defense outperforms 8 previous defenses under 6 score-based attacks on CIFAR-10 and ImageNet. ### Originality\n\n**Strengths**\n* This paper studies post-processing defenses, which are less explored in defending evasion attacks.\n* The proposed defense avoids hurting the model’s benign performance and computational overheads.\n\n**Weaknesses**\n* Not much in this perspective.\n\n### Quality\n\n**Strengths**\n* Good experiments. I appreciate the evaluation of 8 defenses, 6 attacks, and 2 datasets.\n* Good ablation study. Most hyper-parameters are supported by the ablation study.\n\n**Weaknesses**\n* **Impractical threat model.** While the attack’s threat model is well defined, the proposed defense implicitly assumes complete control of the attacker’s optimization process. **Specifically, the defense assumes unrealistic knowledge of the attack’s running iteration, i.e., the period $t$ defined at L171**. Note that all compared defenses in this paper did not have such an assumption. This knowledge is only realizable if given the following two assumptions. First, the attacker discloses the attack’s current number of iterations for each query. Second, the defender can extract the sequence of queries coming from the designated attacker (across all queries from all legitimate users and potentially other attackers). These two assumptions, however, are largely unrealistic or challenging. Since such knowledge is the base assumption of this defense at Eq. (4), I am not sure if the current defense would work if the assumption did not hold. **In particular, the defense’s code in Appendix D seems to work inside the attack, which is impossible in practice.**\n\n* **Lack of adaptive evaluation.** This paper does not discuss how the attacker could potentially modify their attacking procedure to evade the proposed defense, although some adaptive attack papers like [46] are cited. Currently, the proposed defense assumes a static (i.e., non-adaptive) attack that is unaware of the defense. Following the paper’s own motivation, is it possible that the attackers also revert their updating logic in Eq. (3) to evade the proposed defense? For example, now that the returned score exhibits a reverted trend, **would this defense still work if the attackers update their adversarial examples only when observing an increased margin loss?**\n\n### Clarity\n\n**Strengths**\n* Good motivation for post-processing defenses.\n* The proposed defense is easy to follow.\n\n**Weaknesses**\n* The notation in Eq. (1) is slightly confusing. At L117, the x is defined as a clean sample, but it is later used as a placeholder in Eq. (1) and (2).\n* The unsupervised margin loss in Eq. (2) can be simplified. Specifically, if the defender could use one query to obtain the ground-truth label (from the black-box model’s perspective), Eq. (2) can reduce to Eq. 
(1) and therefore simplify the notations.\n* At L123, it is suggested to add some (brief) background on how these attacks sample their queries $x_q$. This would be greatly helpful for readers to understand the defense. Currently, it seems that all attacks are compressed into one high-level idea, which might reduce the confidence in understanding how the defense would work if the attack changes adaptively.\n* At L171 and L193, the notation $z_0$ is not defined until Algorithm 1.\n\n### Significance\n\nI appreciate the effort of motivating post-processing defenses against evasion attacks, but the proposed defense considers a somewhat unrealistic threat model, which is different from the defenses compared in this paper. Moreover, this paper assumes a static non-adaptive attacker, and the defense might be easily broken by adaptive attacks based on my assessment. My current score is mainly based on the weaknesses outlined in the Quality section. I am willing to raise my score if the following concerns are adequately clarified or justified.\n* [Quality-Weakness-1] Please clarify if the knowledge of the attack’s period $t$ at L171 is practical. Specifically, please explain how the proposed defense can be implemented *independently of the attack’s code* with only the following inputs: (1) query image, (2) model, and (3) the defense’s hyper-parameters. If you need any other inputs, please clarify the relevant assumptions and their practicability.\n* [Quality-Weakness-2] Please discuss if the defense would still work if the attackers revert their update logic in Eq. (3), without significantly modifying the current defense. The primary limitation is summarized in the Significance section. While I appreciate the effort of motivating post-processing defenses against evasion attacks, I strongly recommend the authors carefully address these limitations in future versions of this paper.",
" This paper proposes an adversarial defense against black-box score-based query attack. Specifically, the AAA defense proposed in this paper can mislead SQA methods to simulate the direction of gradients by post-processing the logits output by the target model. The method is validated on cifar-10 and Imagenet datasets The strengths and weaknesses of this paper are summarized as follows:\n\nThe main strength of this paper is that it presents a relatively new approach of adversarial defense. The defense against SQA is rarely covered by previous work, and the AAA method proposed in this paper may bring inspiration to the subsequent score-based attacks and related defenses\n\nThe weaknesses of this paper are twofold. The first aspect is the motivation of the method. The AAA method proposed in this paper is in fact specially targeted for SQA. While SQA belongs to a class of query-based attacks in black-box attack methods in adversarial attacks. That is to say, the effectiveness of AAA is based on at least three constraints: the attacker cannot obtain the details of the target model, the attacker does not use a substitute model but can only query the target model, and the attacker can obtain the confidence information of the target model. This situation implies one-way transparency from the defender to the attacker, not the other way around. This setting is very unrealistic, which makes the motivation somewaht weak. \n\nIn addition, even from the perspective of exploring the robustness of deep neural networks, the method proposed in this paper to specifically deal with one class of attacks cannot improve the robustness of the model itself, or as the authors put it in the paper, worst-case robustness. However, improving the robustness of the model without considering the attack method is the direction that should be advocated in the field of adversarial defense [1]. Otherwise, this is only an arms race between adversarial attacks and defenses.\n\n[1] Athalye, Anish, Nicholas Carlini, and David Wagner. \"Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples.\" International conference on machine learning. PMLR, 2018.\n\nSecondly, because this paper defends against SQA, there is a realistic problem that other query-based attacks like Boundary Attack require less information than SQA (no confidence information, just hard labels). Can the method proposed in this paper effectively defense against such attacks? If so, AAA must changes the hard label, will this affect the availability of the target model? If not, can the attacker directly use decision-based attacks to bypass this defense?\n There seems to be some problems in the penultimate row of Table 2. The performance of 69.3 of RND defense under NES attack should be bolded, not 68.51 of AAA. From the perspective of experimental performance, compared with other defense methods, AAA is still not significantly improved in terms of affecting SQAs’ attack effect."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
5
] | [
"nips_2022_7hhH95QKKDX",
"S5dn0Dj7jZ",
"itSr8-BUvd",
"cacoukOUvw7",
"1bFRe2KvfAc",
"-NwL_TnNb8QC",
"H9oqtwVcw_",
"2TwoWG0-FK",
"0Z1wxB5KGXr",
"yVnwY3jIY5v",
"nips_2022_7hhH95QKKDX",
"nips_2022_7hhH95QKKDX",
"nips_2022_7hhH95QKKDX",
"nips_2022_7hhH95QKKDX",
"nips_2022_7hhH95QKKDX"
] |
nips_2022_yW5zeRSFdZ | Outlier Suppression: Pushing the Limit of Low-bit Transformer Language Models | Transformer architecture has become the fundamental element of widespread natural language processing (NLP) models. With the trend of large NLP models, the increasing memory and computation costs hinder their efficient deployment on resource-limited devices. Therefore, transformer quantization attracts wide research interest. Recent work recognizes that structured outliers are the critical bottleneck for quantization performance. However, their proposed methods increase the computation overhead and still leave the outliers there. To fundamentally address this problem, this paper delves into the inherent inducement and importance of the outliers. We discover that $\boldsymbol \gamma$ in LayerNorm (LN) acts as a sinful amplifier for the outliers, and the importance of outliers varies greatly where some outliers provided by a few tokens cover a large area but can be clipped sharply without negative impacts. Motivated by these findings, we propose an outlier suppression framework including two components: Gamma Migration and Token-Wise Clipping. The Gamma Migration migrates the outlier amplifier to subsequent modules in an equivalent transformation, contributing to a more quantization-friendly model without any extra burden. The Token-Wise Clipping takes advantage of the large variance of token range and designs a token-wise coarse-to-fine pipeline, obtaining a clipping range with minimal final quantization loss in an efficient way. This framework effectively suppresses the outliers and can be used in a plug-and-play mode. Extensive experiments prove that our framework surpasses the existing works and, for the first time, pushes the 6-bit post-training BERT quantization to the full-precision (FP) level. Our code is available at https://github.com/wimh966/outlier_suppression. | Accept | This paper proposes an outlier suppression method to improve transformer quantization. The method is derived based on careful analysis, and thorough experiments demonstrate its efficacy. All reviewers agreed that this is a good paper. I recommend acceptance. | train | [
"WzfMlzneog9",
"u_Giy2f3mp",
"KPi4xVKsfiS",
"2gVh7U0pv3o",
"EsKMXbbybee",
"fMqljYmE1eE",
"BFDKWp_VhYA",
"I4JtIPGyT2C",
"ovm4S1Cg9uY",
"IAKuEfK8poF",
"0bjfZMrd0Vx",
"G3b7IZuEjOk",
"tGK7Qdoxc87"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I appreciate the detailed responses and the additional experiments conducted to answer my questions. I have increased the soundness score for the paper.",
" Thanks for the responses to my questions. The explanations and the experiments added answered my questions well, which makes this manuscript more solid. Increased my scores.",
" We would like to thank the reviewer for the valuable suggestions and thoughtful insight on this paper. The detailed response is listed below. Hope our reply can address the concerns.\n\n* Q1: The ablation study points to the limited efficacy of gamma migration. Have you conducted an analysis of why is this the case? Given that gamma amplification is presented as the origin of the outlier problem, a conclusive analysis of its impact will be useful. (The ablation study points out that token clipping is the most important component, and that gamma migration is of limited importance. This does not gel well with the fact that the analysis on the origin of the outliers is presented as a significant work.)\n\n A: Sorry for the confusion. Here we make a further clarification for a better understanding of how Gamma Migration and Token-Wise Clipping help the outlier suppression. \n\n * First, for the phenomenon of outliers, we want to indicate that **(1) there have existed outliers in some embedding dimensions, especially for some specific tokens, and (2) the gamma in LayerNorm further amplifies them, making the quantization more intractable.** To suppress these harmful outliers for quantization, both problems should be handled.\n\n **For the first problem**, we think it might correlate with token frequency during the pre-training phase (see Appendix C.2). In this paper, we aim to improve the quantization accuracy with as low costs as possible. Thus it is unrealistic to adjust the pre-training phase. But the observations that some outliers are unimportant inspire us to pursue a suitable clipping range for a good trade-off between clipping error and rounding error for quantization. Therefore, the Token-Wise Clipping is proposed to clip the outliers directly and appropriately under limited bit-width. Compared to previous calibration algorithms like MinMax, OMSE, and Percentile, which also aim to find a clipping range for quantization. Our method considers outlier importance and leverages the token’s characteristics thus works more effectively and efficiently.\n\n **For the second problem**, Gamma Migration transforms gamma into the later layer and eliminates the amplification effect from the origin. Then it contributes to a more quantization-friendly distribution without any extra inference time. Moreover, as a general module transformation technique, it can be combined with any common calibration methods and boost their performance by weakening the outliers in advance.\n\n Therefore, both methods are important because they suppress the outliers from different parts.\n\n * With the above explanation, we can have a better understanding of the ablation study results. Compared with MinMax, Token-Wise Clipping is a better calibration scheme since it can efficiently find the outliers and suitably clip them. Due to the page limit, the comparison experiments with other calibration methods were put in Appendix D.3 in the original paper. The Gamma Migration, as a general plug-in technique, have helped both MinMax and Token-Wise Clipping pursue better accuracy. It is designed to cooperate with calibration algorithms as a general and supplementary technique rather than replacing them. We also give an experiment based on the Percentile calibration algorithm to further verify this point. As shown in the following table, Gamma Migration helps Percentile achieve an accuracy enhancement of 2%-11% on the GLUE dataset. 
It can be seen that combined with Gamma Migration, the calibration algorithms (MinMax, Percentile, Token-Wise Clipping etc.) can enjoy further improvement.\n\n | RoBERTa-base | bit | CoLA | MNLI | MRPC | QNLI | QQP | RTE | SST-2 | STS-B | Avg |\n | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n | | W-E-A | MR | acc m /mm | acc/f1 | acc | f1/acc | acc | acc | Pear./Spear. | |\n | FP | 32-32-32 | 62.5 | 87.75/87.23 | 90.44/93.1 | 92.68 | 88.78/91.6 | 80.51 | 95.18 | 91.04/90.72 | 86.40 |\n | Percentile | 6-6-6 | 20.73 | 72.23/73.68 | 78.43/84.83 | 77.16 | 82.21/87.44 | 62.82 | 88.19 | 79.41/79.64 | 70.98 |\n | Percentile + Gamma Migration | 6-6-6 | 29.06 | 83.17/83.51 | 82.84/87.97 | 81.13 | 85.13/88.86 | 64.62 | 91.4 | 83.53/85.53 | 75.81 |\n \n Therefore, by putting the Gamma Migration and Token-Wise Clipping together, the performance can be largely improved with outliers first weakened, then clipped appropriately. These two methods complement each other and jointly push the limit of low-bit quantization for language transformers. \n\n Thanks for the valuable question and we have revised the paper to make these points clearer.",
" * Q2: Have you tried the proposed methods (Gamma migration and Token-wise clipping) on GPT-2? \n\n A: Thanks for the suggestion. It is of high value to investigate the effect on more types of language models. We validated the effect of the proposed methods for GPT-2 on the WikiText 103 dataset for 8-bit quantization. The results are listed below. It can be seen that for GPT-2, our methods also achieve a consistent improvement compared with others, proving its effectiveness and generalization. We will add more comprehensive verification on other datasets in the future.\n \n | | WikiText 103 |\n | --- | --- |\n | GPT-2 | |\n | FP | 15.97 |\n | MinMax | 25.32 |\n | Percentile | 23.17 |\n | OMSE | 21.09 |\n | Ours | **17.16** |\n \n To explain the results, we observe that the GPT-2 model also suffers from the outlier amplification phenomenon due to the scaling parameter of LayerNorm. And concerning the outlier importance is also beneficial for finding a clipping range. Therefore, our methods achieve better PPL.",
" We would like to sincerely thank the reviewer for providing insightful suggestions on this paper. We have revised the paper and added the necessary experiments as suggested. The detailed response is listed below. We hope our reply can address the questions.\n\n- Q1: What is the total number of parameters of the models in the experiments? With a fixed size training set, the degradation caused by quantization should be highly related to the number of parameters. Please consider adding these numbers to the tables. If possible, please also conduct experiments on different model sizes to see if the proposed approach is beneficial to Transformer models in GLUE in general.\n\n A: Thanks for your constructive advice on presenting the model size, and we have added these numbers to the tables. \n\n Apart from the 6-bit RoBERTa-base model (89.5MB) in the original submission, we also conducted experiments on 6-bit DistilRoBERTa (58.9MB) and RoBERTa-large (255.2MB) to explore the performance of models with different sizes. On RoBERTa-base model, we have achieved the 8.64% average enhancement. And the performance on DistilRoBERTa and RoBERT-large can also be boosted by 7.36%, and 2.59% respectively.\n\n Detailed results are listed in the following two tables. For 6-bit DistilRoBERTa with a much smaller size, our methods outperform others by a large margin. For RoBERTa-large, ours also shows satisfying outcomes consistently, while Percentile behaves much worse on STS-B, QQP and OMSE do not work well on MRPC and QNLI. Moreover, we will cover more validations on other models in the future.\n\n | DistilRoBERTa (W6E6A6) | Model Size (MB) | CoLA | MNLI | MRPC | QNLI | QQP | RTE | SST-2 | STS-B | Avg |\n | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n | | | MR | acc m/mm | acc/f1 | acc | f1/acc | acc | acc | pearson/spearmanr | |\n | FP | 313.3 | 60.77 | 84.10/84.38 | 87.50/91.28 | 91.07 | 87.32/90.56 | 71.84 | 92.20 | 88.55/88.21 | 83.35 |\n | OMSE | 58.9 | 8.33 | 81.62/81.49 | 75.0/83.65 | 80.98 | 82.49/85.84 | 65.7 | 89.11 | 78.18/78.07 | 70.91 |\n | Percentile | 58.9 | 28.47 | 78.45/78.71 | 75.25/84.49 | 84.18 | 79.16/81.64 | 58.12 | 89.68 | 75.5/76.44 | 71.91 |\n | Ours | 58.9 | 47.86 | 82.93/82.41 | 80.88/87.09 | 88.32 | 85.12/89.09 | 67.51 | 91.86 | 84.99/84.78 | 79.27 |\n\n | RoBERTa-large (W6E6A6) | Model Size (MB) | CoLA | MNLI | MRPC | QNLI | QQP | RTE | SST-2 | STS-B | Avg |\n | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n | | | MR | acc m /mm | acc/f1 | acc | f1/acc | acc | acc | pearson/spearmanr | |\n | FP | 1355.6 | 67.74 | 90.16/90.06 | 87.99/91.27 | 94.69 | 89.58/92.14 | 84.84 | 96.33 | 91.82/91.70 | 88.25 |\n | OMSE | 255.2 | 56.20 | 85.83/85.38 | 75.0/84.45 | 86.07 | 85.4/89.19 | 72.57 | 93.92 | 85.49/85.54 | 80.86 |\n | Percentile | 255.2 | 55.73 | 84.95/85.37 | 81.86/87.33 | 90.92 | 79.21/85.92 | 70.04 | 93.0 | 82.16/82.37 | 80.53 |\n | Ours | 255.2 | 58.52 | 85.16/85.52 | 83.58/89.0 | 91.05 | 85.7/89.49 | 78.34 | 94.27 | 86.17/86.22 | 83.45 |",
" - Q2: For PTQ, what would be the performance of 4-bit quantization? I understand that 4-4-4 PTQ with any quantization paradigm might result in garbage results. However, it would interesting to see something like 4-8-8 or 4-6-6. The rationale is that, in real-world scenarios like on-device models, memory usage could be the bottleneck, while the inference speed with 8bit or 6bit is fast enough. In this case, 4bit weight quantization would be very helpful in reducing memory usage.\n\n A: It is indeed interesting to investigate the 4-bit weight quantization setting that helps reduce model size. We tried 4-6-6 bit on BERT-base and RoBERTa-base. The results in the two tables below show that our methods can also benefit the 4-bit quantization and is helpful with the device of constrained memory. \n\n For BERT-base models, our outlier suppression framework achieves near-floating point performance with a reduction of 4.83% on these so small models with low-bits, while others suffer from a performance degradation of 15.4% and 13.74%, respectively. For RoBERTa-base, though the results reveal that it’s hard to be close to FP values on 4-6-6 settings, ours still helps a lot and outperforms existing methods by about 15.62%. \n\n | RoBERTa-base (W4E6A6) | Model Size (MB) | CoLA | MNLI | MRPC | QNLI | QQP | RTE | SST-2 | STS-B | Avg |\n | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n | | | MR | acc m/mm | acc/f1 | acc | f1/acc | acc | acc | pearson/spearmanr | |\n | FP | 475.5 | 62.5 | 87.75/87.23 | 90.44/93.1 | 92.68 | 88.78/91.6 | 80.51 | 95.18 | 91.04/90.72 | 86.40 |\n | OMSE | 69.1 | 0.0 | 55.48/57.33 | 68.38/75.14 | 61.12 | 81.57/86.3 | 50.9 | 81.42 | 38.47/37.58 | 55.45 |\n | Percentile | 69.1 | 2.11 | 59.06/61.85 | 53.92/52.76 | 61.54 | 75.14/83.33 | 47.65 | 86.24 | 56.98/57.07 | 55.95 |\n | Ours | 69.1 | 30.26 | 76.02/76.99 | 73.77/80.65 | 78.22 | 83.01/87.5 | 57.76 | 89.91 | 76.95/77.9 | 71.57 |\n\n | BERT-base (W4E6A6) | Model Size (MB) | CoLA | MNLI | MRPC | QNLI | QQP | RTE | SST-2 | STS-B | Avg |\n | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n | | | MR | acc m/mm | acc/f1 | acc | f1/acc | acc | acc | pearson/spearmanr | |\n | FP | 417.6 | 59.6 | 84.94/84.76 | 87.75/91.35 | 91.84 | 87.82/90.91 | 72.56 | 93.35 | 89.70/89.28 | 83.83 |\n | OMSE | 58.3 | 29.79 | 69.47/68.76 | 75.74/81.43 | 78.66 | 67.19/78.36 | 62.82 | 84.86 | 83.93/84.29 | 70.09 |\n | Percentile | 58.3 | 25.63 | 67.98/69.04 | 78.92/83.27 | 69.34 | 67.63/78.31 | 58.12 | 86.12 | 84.95/86.37 | 68.43 |\n | Ours | 58.3 | 52.99 | 80.26/80.49 | 81.62/85.93 | 88.17 | 79.27/85.38 | 65.34 | 91.97 | 87.09/87.15 | 79.08 |\n\n- Q3: The authors mentioned that \"In Discussions we leave some topics as future work.\" However, I would suggest to explicitly summarize them either in a separate discussion section or change conclusions to something like \"conclusions and discussions of limitations\". Also, I don't think the discussions in the results section covered \"the limitations and potential negative societal impact of their work\" well enough. Please revise according to NeurIPS requirements.\n\n A: Thanks for pointing that out. We have changed conclusions to “conclusions and discussions of limitations” and clarified the limitation and future work in the paper. \n\n In this paper, we mainly analyze the challenge of language transformer quantization. It is valuable to systematically explore whether the conclusion in this paper benefits other fields such as computer vision. 
And as we mention in the Appendix, the emergence of outliers involves not only the fine-tuned models but also the pre-trained ones. Diving into the pre-training process is also a profound future topic for a better understanding of the outliers.",
" Thanks for the reviewer’s valuable suggestions and positive feedback on this paper. Below is the detailed response to each question. Hope the following reply helps address the concerns.\n\n- Q1: Some results could be better justified. e.g., It seems that Gamma Migration and Token-Wise Clipping complement each other, but why? How to understand the joint effect of the two techniques? \n\n A: Thanks for the suggestion. The joint improvement brought by Gamma Migration and Token-Wise Clipping is mainly because they suppress the outliers from different perspectives. Here we give a detailed explanation below.\n \n Gamma Migration eliminates the outlier amplification phenomenon in LayerNorm and weakens the outliers. After getting rid of this outlier amplification, some specific tokens still suffer from an outlier problem which is harmful for the quantization. Thus Token-Wise Clipping is further devised to suppress these outliers by appropriately clipping them. It takes advantage of the outliers' importance and tokens' information and thus obtains a better trade-off between clipping error and rounding error compared with other calibration algorithms. \n \n In other words, Token-Wise Clipping acts as a better quantization calibration algorithm that finds a better clipping range, and Gamma Migration acts as a layer transformation plugin that removes the outlier amplifier without extra computation overhead. In theory, Gamma Migration can be combined with any calibration algorithms for improvement, and in practice, due to the superiority of Token-Wise Clipping, the combination of these two methods achieves the best performance. \n \n From the above analyses, we can find that these two techniques solve different parts of problems and thus complement each other. Benefitting from the power of each method, combining them achieves the best results and thereby pushes the limit of low-bit language transformers to a new SOTA. We are sorry for the confusion and have revised the paper to make these points clearer.\n\n- Q2: Some results could be better justified. e.g., how do Gamma Migration and Token-Wise Clipping compare to approaches like QBERT or Q8BERT? \n\n A: Thanks for the useful advice. Here, we explain the Q-BERT, Q8BERT, and our outlier suppression framework from technique design and experimental results to help the understanding of their relationships.\n\n - For technique design, both of the two mentioned methods are quantization-aware training (QAT) algorithms. Q8BERT shows the scheme to perform QAT into the fine-tuning phase of BERT. Q-BERT advises a new group-wise quantization scheme and explores the mixed-precision quantization using Hessian information. Our framework investigates the structured outliers, which hurts the quantization performance seriously, and devises two methods to eliminate the outlier amplification effect and find a suitable clipping range. The framework both suits post-training quantization (PTQ) and quantization-aware training. In QAT, our methods apply at the calibration phase and provide good initialization for later training.\n - For experimental results, as our framework is devoted to the calibration phase of QAT, we select a strong quantization training baseline LSQ+ and combine our methods with it. In the original paper, we listed the results of different QAT methods, including Q-BERT, Q8BERT and LSQ+. It can be seen that the LSQ+ (4-4-8 bit) has surpassed Q-BERT (8-4-8 bit) by 7.8% and Q8BERT (8-8-8 bit) by 0.6% on average. 
And as a better initialization, our methods continue to improve the results upon LSQ+ by 1.26% on 4-4-8 bit (LSQ+ in this setting already shows little accuracy degradation from the full-precision counterpart) and 9.64% on the 4-4-4 setting. This indicates that even combined with a strong baseline, our method can still bring an extra accuracy increase. Besides, Figure 5 in the paper demonstrates how a good initialization influences QAT training. In short, with a good starting point, the training converges faster, and the loss soon reaches a lower, more stable level.\n - In fact, Q8BERT and Q-BERT target the training procedure or the quantization scheme, such as group-wise and mixed-precision schemes. Ours targets the inherent harmful outliers of language transformer models. Thus, we work on different aspects of quantization. Moreover, as our framework involves the calibration phase in QAT, our methods can also be combined with theirs, such as being applied to the mixed-precision problem. And we will validate this point in the future.\n\n We have revised the discussion for this part in the paper to make the readers better understand the contribution of our methods.",
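To make the Gamma Migration step referenced in the response above concrete, here is a minimal PyTorch-style sketch of folding the LayerNorm scaling parameter into the following linear layer. It is a schematic reconstruction from the description in this thread (pre-norm case, where the LayerNorm feeds a single linear layer); the function name and the sanity check are illustrative, not the authors' released code.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def migrate_gamma(ln: nn.LayerNorm, linear: nn.Linear) -> None:
    """Fold LayerNorm's scaling parameter (the outlier amplifier) into the next layer."""
    gamma = ln.weight.clone()
    # (W * gamma) @ x == W @ (gamma * x): gamma broadcasts over the input dimension
    # of linear.weight, whose shape is (out_features, in_features).
    linear.weight.mul_(gamma)
    # gamma * x_hat + beta == gamma * (x_hat + beta / gamma), so rescaling the bias
    # keeps the float computation unchanged (assumes gamma has no zero entries).
    ln.bias.div_(gamma)
    ln.weight.fill_(1.0)  # the LayerNorm now emits gamma-free activations to quantize

# Sanity check on a toy pre-norm block.
ln, fc = nn.LayerNorm(768), nn.Linear(768, 3072)
ln.weight.data.uniform_(0.5, 2.0)  # pretend gamma amplifies some channels
ln.bias.data.normal_()
x = torch.randn(4, 768)
before = fc(ln(x))
migrate_gamma(ln, fc)
assert torch.allclose(before, fc(ln(x)), atol=1e-4)
```

The assertion verifies the key design point of Gamma Migration: the float computation is unchanged, only the tensor seen by the activation quantizer loses its per-channel amplifier.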
" - Q3: Missing citation: [Compression of Generative Pre-trained Language Models via Quantization](https://arxiv.org/pdf/2203.10705.pdf(https://arxiv.org/pdf/2203.10705.pdf), they developed a dynamic scaling approach and it would be nice to understand how are the observations in two papers are related with each other.\n\n A: Thanks for pointing out the relevant paper. We have cited it in the revision and added analyses of the relationship between these two papers. \n\n We find that the two papers have different observations and different methods. And both two works are devoted to the quantization bottleneck of language models. They notice that due to the sequential computation nature of generative models, word embedding is easier to be homogeneous, and propose a token-level contrastive distillation method. They also observe the outliers in weights and suggest a dynamic scaling technique, which calculates a good clipping range for weights during QAT. Our work begins from the activation outlier phenomenon and correlates it with obvious performance degradation. Concerning the structured characteristic of outliers, we propose Gamma Migration to weaken the outliers from the origin, and the Token-Wise Clipping calculates a good clipping range for activation. They are applied to the calibration phase, thus both suits PTQ and QAT with a better initialization point. \n\n To summarize, our work and theirs both work well and target different parts of quantization on language models. They focus on the inherent quantization bottleneck for weights for generative models. We pay much attention to the structured activation outliers of language models, including classification and generative aspects. \n\n- Q4: One figure is missing from the description. Figure 2 is cited in texts multiple times but is not present in the paper.\n\n A: Thanks for pointing that out, and sorry for the confusion caused by an error from latex autoref. The “Figure 2” mentioned in the text should be “Figure 1”. We have fixed this error in the revision.",
" * Q3: Do you have any comments on whether the work is generic to transformers or is highly applied on the text domain? For example, does the analysis hold on vision transformers or multimodal models?\n\n A: In this paper, we mainly focus on the language task, and it is also interesting and valuable to investigate the phenomenon of vision and multimodal tasks. In the limited time, we analyze the outlier phenomenon, apply our methods to vision transformers, and find that our framework can also help with quantization in the vision domain. For multimodal models, we will further analyze and validate them in the future. The detailed analyses and effects of vision transformers are given below.\n\n * For Gamma Migration, we observe that there are also outliers that emerge in the LayerNorm’s output for vision transformers. Different from language models, outliers at some embedding dimensions are amplified by the scaling parameter at the same dimension in the LayerNorm, and outliers at some other dimensions are alleviated by the corresponding scaling parameter. Therefore, we migrate the scaling parameters that amplify the outliers into the weights of later layer and keep others still in the LayerNorm. Naturally, this can reduce the quantization error by eliminating the amplification effect.\n\n As for the computation overhead, fortunately, these models take the pre-norm where the layer normalization is put inside the residual connection. Then, we do not need to consider transferring the amplifier into two branches (weight in the next layer and shortcut) like post-norm, but just let the weight absorb it. Thus not transferring the whole scaling parameters will not increase any computation costs during inference.\n\n * For Token-Wise Clipping, we observe the [CLS] token in vision transformers often holds more aggressive outliers, which is similar to the phenomenon in language models. As for other tokens, we notice the meaning of tokens is indeed not the same as language tasks. In language tasks, tokens are usually fixed from the beginning and we have a vocabulary file. In vision tasks, tokens represent the information of patches in an image, which covers a large number of combinations of pixels and the information is often caught by a convolution module first. Therefore, more in-depth investigations about tokens in vision models need to be made in the future. Right now, we directly implement the Token-Wise Clipping in the vision tasks and surprisingly get good results. It reveals that there are also unimportant outliers in vision transformers, and Token-Wise Clipping can effectively find them.\n\n * To give a concrete example, we conduct experiments on DeiT-Base model with patches of 16 x 16 size, resolution of 224 x 224 size with ImageNet dataset. The model’s full-precision performance is 81.80, and we take 6-6-6 bit setting for quantization. The following table shows the effect of each part, which demonstrates the generalization ability of our methods.\n\n | | MinMax | Percentile | Token-Wise Clipping |\n | ------------------- | --------- | ---------- | ------------------- |\n | w/o Gamma Migration | 78.04 | 78.81 | 79.76 |\n | w/ Gamma Migration | **79.05** | **80.11** | **81.12** |\n\n Besides comparison with common calibration algorithms for post-training quantization, we also consider some recent post-training quantization works likePTQ4ViT[1] and PTQ-ViT [2] on vision transformers. The table below proves our superiority in quantization by suppressing the outliers. 
\n\n | | DeiT-B/224 |\n | --- | --- |\n | FP | 81.80 |\n | PTQ4ViT [1] | 80.25 |\n | PTQ-ViT [2] | 77.02 |\n | Ours | **81.12** |\n\n [1] Yuan Z, Xue C, Chen Y, et al. PTQ4ViT: Post-Training Quantization Framework for Vision Transformers[J]. arXiv preprint arXiv:2111.12293, 2021.\n\n [2] Liu Z, Wang Y, Han K, et al. Post-training quantization for vision transformer[J]. Advances in Neural Information Processing Systems, 2021, 34: 28092-28103.\n",
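For readers who want the flavor of the Token-Wise Clipping calibration referenced above, the following is a hypothetical coarse-to-fine sketch, not the paper's exact algorithm: the coarse stage picks a clipping value from quantiles of per-token maxima (so whole outlier tokens, rather than scattered elements, get clipped first), and the fine stage refines it by minimizing quantization MSE on a calibration batch. The quantile ratios and the search grid are assumed hyperparameters.

```python
import torch

def token_wise_clip(acts: torch.Tensor, n_bits: int = 6) -> float:
    # acts: calibration activations of shape (num_tokens, hidden_dim).
    qmax = 2 ** (n_bits - 1) - 1

    def quant_mse(clip: float) -> float:
        # Quantization error if activations are clipped to [-clip, clip].
        scale = clip / qmax
        q = (acts / scale).round().clamp_(-qmax - 1, qmax) * scale
        return (q - acts).pow(2).mean().item()

    # Coarse stage: candidate clipping values from quantiles of per-token maxima.
    token_max = acts.abs().amax(dim=1)
    ratios = (1.0, 0.999, 0.99, 0.95, 0.9)
    coarse = min((torch.quantile(token_max, r).item() for r in ratios), key=quant_mse)

    # Fine stage: local grid search around the coarse value to balance
    # clipping error against rounding error.
    grid = [coarse * s for s in torch.linspace(0.5, 1.5, 21).tolist()]
    return min(grid, key=quant_mse)
```

Using per-token maxima as the coarse statistic is what makes the search "token-wise": dropping the most aggressive tokens shrinks the range for everything else, which mirrors the trade-off described in the responses above.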
" * Q4: The tasks tackled are only classification tasks, where word-similarity post quantization can give a good estimate of downstream performance. Have you evaluated any generation tasks on BART?\n\n A: We highly agree that it is meaningful to evaluate both the classification and generation tasks. Thus in the initial submission, we have also conducted experiments on the generation tasks (XSUM and CNN/DailyMail) using BART. As shown in Table 7 in the paper, our methods can achieve an improvement of 3%-4% for the 6-bit setting. This proves that our analyses also stand under the generation task setting and the proposed outlier suppression framework is general. \n\n As for the word-similarity post quantization mentioned by the reviewer, we guess it might refer to the fact that word similarity is related to the effect of classification to some extent but can not reflect the effect of generation. Although these two tasks have different objectives, as long as we can reduce the quantization error and align the outputs of the quantized model to the full-precision model as close as possible, the quantization accuracy can naturally be improved. The experimental results verify this point.\n\n For a more intuitive understanding, we show some generated sequences produced by the origin full-precision model, our quantized model, and trivial quantized model, respectively. From the table below, we can find that the summary generated by our quantized model is much closer to the full-precision one while the trivial one collapses.\n | Article | Following last year's successful U.K. tour, Prince and 3rdEyeGirl are bringing the Hit & Run Tour to the U.S. for the first time. The first -- and so far only -- scheduled show will take place in Louisville, Kentucky, the hometown of 3rdEyeGirl drummer Hannah Welton. Slated for March 14, tickets will go on sale Monday, March 9 at 10 a.m. local time. Prince crowns dual rock charts . A venue has yet to be announced. When the Hit & Run worked its way through the U.K. in 2014, concert venues were revealed via Twitter prior to each show. Portions of the ticket sales will be donated to various Louisville charities. See the original story at Billboard.com. \\u00a92015 Billboard. All Rights Reserved. |\n | ---------- | ------------------------------------------------------------ |\n | FP | Prince and 3rdEyeGirl are bringing the Hit & Run Tour to the U.S. for the first time. The first -- and so far only -- scheduled show will take place in Louisville, Kentucky. Portions of the ticket sales will be donated to various Louisville charities. |\n | Percentile | The first -- and so far only only -- scheduled show will take the Hit & Run to the U U.U.S. following last year's successful U.K. tour. Prince and 3rd3rd3's hit hit hit the hit & Run is bringing the Hit and Run Tour to the United States for the first time. The first - and the so far far only scheduled shows will take place in Louisville, Kentucky. |\n | Ours | Prince and 3rdEyeGirl are bringing the Hit & Run Tour to the U.S. for the first time. The first -- and so far only -- scheduled show will take place in Louisville, Kentucky. Portions of the ticket sales will be donated to various Louisville charities. |\n* Q5: In general, the paper writing can certainly be improved, e.g. a number of acronyms such as PTQ, QAT are never introduced to the reader.\n\n A: Thanks for pointing that out. We have revised our paper and introduced the acronyms in a suitable position to make the paper easier to follow.\n",
" The paper tries to make improvements on the problem of quantizing transformers, wherein the key problem is the presence of outliers. The paper presents an analysis of the problem and tries to track the origin of outliers and find that the scaling parameter (gamma) in layer norm acts as an outlier amplifier. The paper then presents a method named Gamma Migration, which moves computations associated with the gamma parameter to subsequent layers. This method improves quantization performance against a simple minmax baseline. The key gains in the paper come from token-wise clipping, in which the authors propose coarse to fine grained pipeline to clip the more aggressive outliers. The strengths of the paper are:\n\n1. The empirical results are considerably strong. The 6-bit PTQ results are the first such high-quality results on GLUE.\n2. Given the importance of pre-trained transformer LMs such as BERT, Roberta, the proposed methods can help system developers and deployers. \n3. Each of the methods (although highly applied) is well motivated, with good intermediate quantifications to illustrate their utility.\n\nThe paper has the following weaknesses:\n\n1. In general, the paper writing can certainly be improved, e.g. a number of acronyms such as PTQ, QAT are never introduced to the reader. \n2. Limited novelty of the work. The ablation study points out that token clipping is the most important component, and that gamma migration is of limited importance. This does not gel well with the fact that the analysis on the origin of the outliers is presented as a significant work. 1. The ablation study points to the limited efficacy of gamma migration. Have you conducted an analysis of why is this the case? Given that gamma amplification is presented as the origin of the outlier problem, a conclusive analysis of its impact will be useful.\n2. Have you tried the proposed methods (Gamma migration and Token-wise clipping) on GPT-2? \n3. Do you have any comments on whether the work is generic to transformers or is highly applied on the text domain? For example, does the analysis hold on vision transformers or multimodal models?\n4. The tasks tackled are only classification tasks, where word-similarity post quantization can give a good estimate of downstream performance. Have you evaluated any generation tasks on BART? The authors haven't adequately addressed the limitations and the pain points on reproducing this work. The negative societal impact is irrelevant for this work.",
" The papers pursues a better solution to deal with outliers in special tokens that are considered as the critical bottleneck for the quantization accuracy. They identify that it is the \\gamma variable in LayerNorm which amplifies outliers and propose an outlier suppression framework, consisting of Gamma Migration and Token-wise Clipping to overcome the quantization bottleneck. Extensive results show that the proposed framework outperforms existing quantization algorithms and for the time time fully recovers full-precision BERT results with a 6-bit post training quantization model and 4-bit model produced by quantization-aware training. ### Strengths \n- The proposed approach is well motivated by the observation of the layernorm outliers and the experiments are extensive and empirically strong on a wide range of tasks including sentence classification, question answering, and summarization.\n- The analysis and visualization are very informative and well presented\n\n### Weaknesses\n- One figure is missing from the description\n- Some results could be better justified. e.g., It seems that Gamma Migration and Token-Wise Clipping complement each other, but why? How to understand the joint effect of the two techniques and how do they compare to approaches like QBERT or Q8BERT? - Missing citation: [Compression of Generative Pre-trained Language Models via Quantization](https://arxiv.org/pdf/2203.10705.pdf(https://arxiv.org/pdf/2203.10705.pdf), they developed a dynamic scaling approach and it would be nice to understand how are the observations in two papers are related with each other.\n- Figure 2 is cited in texts multiple times but is not present in the paper.\n Yes.",
" This manuscript focuses on Transformer quantization for Bert. It first analyzes the performance regression from outliers during model quantization, specifically in LayerNorm. Following this, the authors propose a Gamma Migration approach to change how we consider the parameter Gamma in LayerNorm, during quantization. In addition, they propose a coarse-to-fine algorithm to find the outliers during token-wise clipping. Strengths\n* The paper is well organized and very easy to follow\n* The references are enough\n* The technical novelty is enough (see summary)\n\nWeakness\n* Although the authors conducted extensive experiments, there are still some key aspects in results missing. See below\n * What is the total number of parameters of the models in the experiments? With a fixed size training set, the degradation caused by quantization should be highly related to the number of parameters. Please consider adding these numbers to the tables. If possible, please also conduct experiments on different model sizes to see if the proposed approach is beneficial to Transformer models in GLUE in general.\n* For PTQ, what would be the performance of 4-bit quantization? I understand that 4-4-4 PTQ with any quantization paradigm might resulting in garbage results. However, it would interesting to see something like 4-8-8 or 4-6-6. The rationale is that, in real-world scenarios like on-device models, memory usage could be the bottleneck, while the inference speed with 8bit or 6bit are fast enough. In this case, 4bit weight quantization would be very helpful in reducing memory usage.\n The authors mentioned that \"In Discussions we leave some topics as future work.\" However, I would suggest to explicitly summarize them either in a separate discussion section or change conclusions to something like \"conclusions and discussions of limitations\". Also, I don't think the discussions in the results section covered \"the limitations and potential negative societal impact of their work\" well enough. Please revise according to NeurIPS requirements.\n"
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"IAKuEfK8poF",
"fMqljYmE1eE",
"0bjfZMrd0Vx",
"0bjfZMrd0Vx",
"tGK7Qdoxc87",
"tGK7Qdoxc87",
"G3b7IZuEjOk",
"G3b7IZuEjOk",
"0bjfZMrd0Vx",
"0bjfZMrd0Vx",
"nips_2022_yW5zeRSFdZ",
"nips_2022_yW5zeRSFdZ",
"nips_2022_yW5zeRSFdZ"
] |
nips_2022_aAs8KTbZvc9 | Fine-Grained Analysis of Stability and Generalization for Modern Meta Learning Algorithms | The support/query episodic training strategy has been widely applied in modern meta learning algorithms. Supposing the $n$ training episodes and the test episodes are sampled independently from the same environment, previous work has derived a generalization bound of $O(1/\sqrt{n})$ for smooth non-convex functions via algorithmic stability analysis. In this paper, we provide fine-grained analysis of stability and generalization for modern meta learning algorithms by considering more general situations. Firstly, we develop matching lower and upper stability bounds for meta learning algorithms with two types of loss functions: (1) nonsmooth convex functions with $\alpha$-H{\"o}lder continuous subgradients $(\alpha \in [0,1))$; (2) smooth (including convex and non-convex) functions. Our tight stability bounds show that, in the nonsmooth convex case, meta learning algorithms can be inherently less stable than in the smooth convex case. For the smooth non-convex functions, our stability bound is sharper than the existing one, especially in the setting where the number of iterations is larger than the number $n$ of training episodes. Secondly, we derive improved generalization bounds for meta learning algorithms that hold with high probability. Specifically, we first demonstrate that, under the independent episode environment assumption, the generalization bound of $O(1/\sqrt{n})$ via algorithmic stability analysis is near optimal. To attain faster convergence rate, we show how to yield a deformed generalization bound of $O(\ln{n}/n)$ with the curvature condition of loss functions. Finally, we obtain a generalization bound for meta learning with dependent episodes whose dependency relation is characterized by a graph. Experiments on regression problems are conducted to verify our theoretical results. | Accept | The reviewers and AC are in agreement that this paper is a solid work, and its contributions are significant. The theoretical results of this paper advance the theory of meta-learning, and, in particular, the provided generalization guarantees are strong. All reviewers were satisfied with the responses provided by the authors and even one of the reviewers increased their score. Overall, this is a good paper and my recommendation is "Accept".
AC | train | [
"gLCPi-RYfIC",
"6KZolUIjzOI",
"zj2gOGM9G8s",
"ylMyQv6WXC",
"LPUqh5HhMd",
"dy83VBaw--",
"bjW8UcaZCcX",
"T5VgsijGoeGJ",
"_nUS3yq5na",
"wK6Lj2Iw-Ft",
"i9nLvL8RJq5",
"e0sJlLK-dcP",
"Yfe3WioyGf4",
"I5zwatDfJyJ",
"goohdZaO6J",
"OTyjbmapI9C",
"fnGyC1A2e3P",
"3pPcuy1_i6Z"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We appreciate your support very much!",
" I updated the review and increased the score as promised.",
" We appreciate your support very much!",
" **Q1. The authors did not mention that the fast generalization bound for PL functions is \"deformed\" neither in the abstract nor in the contribution section. I would be happy to raise my score if this is done**.\\\nA: Dear Reviewer E4JU,\\\nThanks for your multi-round discussions and constructive suggestions. We have mentioned that the fast generalization bound for PL functions is \"deformed\" in the abstract, contribution, related work, and conclusion section in the updated version (rendered in purple in .pdf).\n",
" I would like to thank authors for their response. After reading other reviews and responses, I have decided to keep my score as is.",
" I am partly satisfied with the author response which incorporated some of my suggestions in the updated manuscript. However, I am keeping the weak accept score since the authors did not mention that the fast generalisation bound for PL functions is \"deformed\" neither in the abstract nor in the contribution section. I would be happy to raise my score if this is done.",
" **Q1. My question was that if the stepsizes required in the analysis contradict with those in convergence analysis of meta-learning algorithms. If so, do the stepsizes in the experiments follow $O(1/j)$ or $O(1/\\sqrt{j})$?** \\\nA: Dear Reviewer DKCG, \\\nThanks for your responses. Our explanations are two-fold: \\\n$(1)$ For the convergence guarantee of MAML in our Theorem 4: It seems like that both reference [2] and [3] require the step size $\\eta_{j} \\in (0, \\frac{1}{G})$ for $G$-smooth functions to guarantee the convergence of gradient based MAML algorithms (see Theorem 5.12 in [2] and Theorem 5 in [3]). Reference [1] also requires the step size $\\eta_{j} \\leq \\min \\lbrace O(\\frac{1}{\\sqrt{j}}), \\frac{1}{G}\\rbrace$ to guarantee the convergence of MAML (see Eq.(12) in Theorem 1 in [1]). Therefore, it seems like that our step size $\\eta_{j}=O(\\frac{1}{jG})$ in our Theorem 4 satisfies the requirements in [1,2,3] (i.e. $\\eta_{j}=O(\\frac{1}{jG}) \\leq \\frac{1}{G}$ and $\\eta_{j}=O(\\frac{1}{jG}) \\leq\\frac{1}{\\sqrt{j}}$) , and our step size can guarantee the convergence performance. \\\n$(2)$ For the experimental setting: on one hand, for the fair comparisons with existing works, we follow the step size setting as in reference [4,5] where the initial learning rate of meta learning algorithms is set as $0.001$, and we use the function torch.optim.lr_scheduler.StepLR(step_size=20, gamma=0.5) to decrease our learning rate every 20 training epochs. On the other hand, sine it is hard to approximately estimate the order of the smoothness constant $G$ of the $G$-smooth loss function, it is infeasible to set step sizes $\\eta_{j}=O(\\frac{1}{jG})$ in practice. Therefore, the step size in our experimental setting does not rigorously follow the one in our theoretical setting. However, our step size in practice may also cause convergence result of MAML, since our step sizes in experiment may be small enough (i.e. less than $\\frac{1}{G}$) to guarantee the convergence. \n\n**Reference**\n\n[1] Closing the gap: tighter analysis of alternating stochastic gradient methods for bilevel problems\n\n[2] On the Convergence Theory of Gradient-Based Model-Agnostic Meta-Learning Algorithms\n\n[3] Theoretical Convergence of Multi-Step Model-Agnostic Meta-Learning\n\n[4] A closer look at the training strategy for modern meta-learning.\n\n[5] Model-agnostic meta-learning for fast adaptation of deep networks.\n",
" Dear authors,\n\nThanks for the rebuttal. Most of my questions have been answered except Q4. \nMy question was that if the stepsizes required in the analysis contradict with those in convergence analysis of meta-learning algorithms. If so, do the stepsizes in the experiments follow $1/j$ or $1/\\sqrt{j}$? It will not affect my rating, but clarifying would be great. ",
" **Q1. Theorem 6 does not contain a valid bound since in the RHS it has (1+η)er instead of just er**.\\\nA: Thanks. It is true that the bound in the RHS in Theorem 6 is actually the so-called “deformed” transfer error bound, where there is a larger-than-1 multiplicative factor in front of the empirical multi-task error $er$. When the empirical multi-task error is close to zero, the transfer error has a convergence rate of $O(\\ln{n}/n)$. We have added these discussions below Theorem 6 in the revised version.\n\n**Q2. The advantage of S/Q meta-learning over ERM has already been shown by [8]. I suggest removing this fact from the claims in the introduction and possibly cite [8] in Remark 6**. \\\nA: Thanks for your suggestions. Although [8] has shown the advantage of S/Q meta-learning, our work makes improvements in two aspects: \\\n$(1)$ We conduct a more comprehensive comparison between our bound and other transfer error bounds obtained via traditional ERM strategy, to show the advantage of S/Q meta-learning. Concretely, in Remark 6, we compare our stability-based bound for S/Q meta-learning with transfer error bounds obtained with PAC-Bayes analysis [37,38], the bounds with model-capacity theory [4,22], and the bounds with algorithmic stability analysis [33]; in contrast, [8] only compared their bound for S/Q meta-learning with the stability-based bound for traditional meta-learning in [33]. \\\n$(2)$ Our comparisons are more accurate: directly comparing different generalization UPPER bounds, to some extent, is not so accurate. However, our work has shown that the transfer error bound of $O(1/\\sqrt{n})$ is near optimal, and compared such bound with others to show the advantage of S/Q meta-learning. Therefore, our comparisons are more accurate. Nevertheless, we have refined our statements in the introduction and in Remark 6 to clarify our contributions in the revised version.\n\n**Q3. Proof of Theorem 5 uses Lemma E.1 in the appendix, which should be equivalent to [5, Corollary 8]. However, the lower bound is derived in the subsequent section of [5]**.\\\nA: Thanks. We have clarified the citation of Lemma E.1 in the revised version.\n\n**Q4. I suggest the authors either include log terms inside $O()$**.\\\nA: Thanks for your suggestion. We have included log terms inside $O(\\cdot)$ throughout the paper in the revised version.\n\n**Q5. Looking at [6, Theorem 12], I think you should replace $\\gamma$ with $2\\gamma$ in the bound of Theorem 1**.\\\nA: Thanks for pointing this out. Our Theorem 1 is true. The reason for the difference between our Theorem 1 and [6, Theorem 12] is that the uniform stability notion in our Theorem 1 is slightly different from the uniform stability notion in [6]. The details can be summarized in two aspects: \\\n$(1)$ the uniform stability in [6] is defined as the upper bound of the change of the loss when we REMOVE one sample from the dataset; while in our work, the uniform stability is defined as the upper bound of the change of the loss when we REPLACE one sample of the dataset. \\\n$(2)$ As shown in the discussion under the Eq.(7) of [6], if an algorithm has uniform stability $\\gamma$ (w.r.t. the exclusion of one sample), then the algorithm has uniform stability $2\\gamma$ (w.r.t. the change of one sample). Therefore, we replace the $2\\gamma$ factor (w.r.t. the exclusion of one point) in Theorem 12 of [6] with $\\gamma$ (w.r.t. the change of one point), leading to the bound in our Theorem 1..\n\n**Q6. 
Theorem 1 corresponds to [8, Theorem 2], not [8, Theorem 1]**.\\\nA: Thanks. We have corrected this typo in the revised version.\n\n**Q7. In Remark 6 “m=1 in few-shot learning”. If m=1, it is impossible to split the dataset into support and query. It should be at least m=2, but in practice it is larger than that**.\\\nA: Thanks. We have given a more rigorous statement in our Remark 6 in the revised version.\n\n**Reference**\n\n[4] J. Baxter. A model of inductive bias learning. Journal of Artificial Intelligence Research, 12:149–198, 2000.\n\n[6] O. Bousquet and A. Elisseeff. Stability and generalization. Journal of Machine Learning Research (JMLR), 2:499–526, 2002.\n\n[22] J. Guan and Z. Lu. Task relatedness-based generalization bounds for meta learning. In International Conference on Learning Representations (ICLR), 2022.\n\n[33] A. Maurer. Algorithmic stability and meta-learning. Journal of Machine Learning Research (JMLR), 6:967–994, 2005.\n\n[37] A. Pentina and C. H. Lampert. A PAC-Bayesian bound for lifelong learning. In International Conference of Machine Learning (ICML), pages 991–999, 2014.\n\n[38] J. Rothfuss, V. Fortuin, M. Josifoski, and A. Krause. PACOH: Bayes-optimal meta-learning with PAC-Guarantees. In International Conference on Machine Learning (ICML), pages 9116–9126, 2021.\n",
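Schematically, the curvature condition behind the "deformed" bound discussed in Q1 above can be written as follows. This is a paraphrase, not the paper's exact statement: $\mu$ denotes the Polyak-Łojasiewicz constant, $w^{*}$ a minimizer, and $\eta > 0$ the multiplicative slack in front of the empirical multi-task error.

```latex
\[
  \|\nabla F(w)\|^{2} \;\ge\; 2\mu\,\bigl(F(w) - F(w^{*})\bigr)
  \ \ \text{for all } w
  \quad \Longrightarrow \quad
  \mathrm{er} \;\le\; (1+\eta)\,\widehat{\mathrm{er}}
  \;+\; O\!\left(\frac{\ln n}{n}\right),
\]
```

so the $O(\ln n / n)$ rate becomes effective precisely when $\widehat{\mathrm{er}}$ is close to zero, as the response to Q1 notes.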
" **Q7. Why not define the uniform stability of meta learning by just changing one point in the training or test set?** \\\nA: Thanks. The reasons for defining the uniform stability of meta learning algorithms in this way lie in two aspects: \\\n$(1)$ Under the task environment assumption, we can regard the whole dataset in one task as a training sample, then for a meta algorithm whose input is the datasets from all training tasks, the algorithmic stability of a meta algorithm should be defined by changing the dataset corresponding to one task. \\\n$(2)$ Such uniform stability definition method is originated from [33], and is developed by [8]. We follow the work of [8,33] to develop improved bounds for fair comparisons, and also point out the limitations of such stability notions for meta learning in our Remark 5.\n\n**Q8. Though the new transfer error bound is sharper, it requires the stronger uniform stability definition than [8]**. \\\nA: Thanks. Our explanations are two-fold: \\\n$(1)$ Actually, our improved stability-based transfer error bounds in Theorem 5 and 6 are not only applicable to uniformly argument stable meta learning algorithms, but also applicable to uniformly stable meta learning algorithms. \\\n$(2)$ The reason for the improvements of our results over [8] is not the use of a stronger definition, but the key observation that single-task learning and episodic S/Q meta-learning are essentially equivalent. Therefore, we can apply recent fast-rate stability-based bounds in single-task learning to meta learning to give improved meta learning bounds. Therefore, the comparisons with [8] are fair.\n\n**Q9. What is the output of hypothesis** A(S)(S)?\\\nA: $\\mathbf{A}(\\mathbf{S})$ is an inner-task algorithm. Given the training dataset $S$ associated with one task, the algorithm $\\mathbf{A}(\\mathbf{S})$ will output a hypothesis $\\mathbf{A}(\\mathbf{S})(S)$ suitable for that task. Such hypothesis always contains two parts: (1) An embedding that is shared across different tasks; (2) A prediction function (on the top of features extracted by the embedding) suitable for that task.\n\n**Q10. In Remark 1, how does Theorem 2 show the importance of good embedding to generalization?** \\\nA: Thanks. A good embedding means that the model has a good initialization and can achieve low empirical errors at the first several optimization steps, and hence can have a small stability. We have added more explanations in Remark 1.\n\n**Q11. Is S^{tr} in Eq. (2) just a general notation of S^{tr}_i?** \\\nA: $S^{tr}$ in Eq. (2) is the support set in one training task. It is just a general notation of $S^{tr}_i$.\n\n**Q12. In line 163-164, why is S ~ D^m and S ~ D_{\\tau} at the same time?** \\\nA: Our explanations are two-fold: \\\n$(1)$ First, each sample in $ S $ is drawn independently according to the probability measure $D$, therefore $S$ can be regarded as sampled according to the measure $D^{m}$; \\\n$(2)$ In meta-learning, under the task environment assumption, the probability measure $D$ is regarded as a random variable and is sampled from the environment $\\tau$, therefore we have $D \\sim \\tau$. Combining (1) and (2), in meta learning setting we have $S \\sim D^{m}, D\\sim \\tau$, and we can write $S \\sim \\mathbf{D}_{\\tau}$ for simplicity (see the formal definition in lines 161-162). \n\n**Q13. Should line 161-162: $\\mathbf{D}_{\\tau}$ be $\\mathbf{D}_{\\tau}^m$?** \\\nA: No. 
$\\mathbf{D}_{\\tau}$ in line 161-162 is actually a probability measure over the space $\\mathcal{Z}^{m}$.\n\n**Q14. It would be more interesting to add additional experiments on real-world data such as for few-shot image classification tasks.** \\\nA: Thanks. Actually we have conducted experiments on miniImageNet for few-shot image classification tasks as in [8]. However, we found that the convergence performance of the generalization gap was not so satisfactory. We think it may be because the classification episodes cannot be regarded as “sampled independently from one environment”. Therefore, we only conduct simulation experiments on regression tasks where the parameter corresponding to each task can be guaranteed to be sampled independently from a task distribution (a schematic episode generator is sketched after this response).\n\n**Q15. It would be better to add a theoretical curve of the generalization gap v.s. the number of tasks in Figures 1-2.** \\\nA: Our explanations are two-fold: \\\n$(1)$ Since it is hard to approximately estimate the Lipschitzness, smoothness and boundedness constants of the loss function, it is difficult to directly give the order of algorithmic stability $\\beta$. \\\n$(2)$ We cannot estimate how large our generalization bound can be (i.e., we have no idea what the exact multiplicative factor is in our optimal bound $O(1/\\sqrt{n})$). Therefore, we are unable to add a theoretical curve of the generalization gap in Figures 1-2.\n\n**Q16. In Remark 1, the stability bounds in [29, 30] depend on either the population risk or the expected empirical risk.** \\\nA: Thanks. We have refined our Remark 1 in the revised version.",
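As a concrete illustration of the simulation setup in A(Q14) — one task parameter drawn independently from a task distribution, then a support and a query set drawn from that task — here is a minimal sketch. The sinusoid family, parameter ranges, and episode sizes are assumptions chosen for illustration, not the paper's exact experimental protocol.

```python
import torch

def sample_episode(m_support: int = 10, m_query: int = 10, noise: float = 0.1):
    # Task parameters drawn i.i.d. from the environment tau.
    amp = torch.empty(1).uniform_(0.1, 5.0)
    phase = torch.empty(1).uniform_(0.0, 3.1416)

    def draw(m: int):
        # Samples drawn i.i.d. from the task distribution D (so the set is ~ D^m).
        x = torch.empty(m, 1).uniform_(-5.0, 5.0)
        y = amp * torch.sin(x + phase) + noise * torch.randn(m, 1)
        return x, y

    return draw(m_support), draw(m_query)  # (support set S^tr, query set S^ts)

# n independent training episodes from the same environment.
episodes = [sample_episode() for _ in range(100)]
```

The point of this construction is that the task parameters (amplitude, phase) are genuinely i.i.d. across episodes, which is the independence that, per A(Q14), cannot be guaranteed for classification episodes built from a fixed class pool.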
" **Q1. The comparison to [8,16] is not clear.** \\\nA: Thanks. Our explanations are two-fold: \\\n$(1)$ For the comparison to [8]: the detailed comparison to [8] is listed in our Remark 4 in the main paper. Besides, in Table A.2 of Appendix A, we also compare our bound with the bound from [8] (the reference [6] in the appendix is actually the reference [8] in the main paper). \\\n$(2)$ For the comparison to [16]: it is still hard for us to directly compare our bounds with that of [16], the reasons are as follows: $(i)$ We focus on different bounding objectives. Our work aims to bound the transfer error over the novel task (under the task environment assumption), whereas [16] aims to bound the (expected) excess risk over the novel task (without the task environment assumption, see its Corollary 2). $(ii)$ The generalization bounds hold with different forms. The bounds in our Theorems 5-7 all hold with high probability, but the generalization bounds in [16] (i.e. the bound on the gap between the expected multi-task error and empirical multi-task error in its Theorem 1, as well as the bound on the excess risk on the novel task in its Corollary 2) hold in expectation (w.r.t. all training samples). $(iii)$ We take different assumptions of the loss function. In Assumption 1 of [16], the authors assume the loss function satisfy 4 conditions: strong convexity, Lipschitzness, smoothness and Hessian Lipschitzness. But our work only takes one or two conditions to derive stability for meta learning algorithm. Consider the aforementioned reasons, we think it is not suitable to directly compare our in-probability generalization bound with the in-expectation bound of [16]. We have added such explanations in Remark A.1 in Appendix A in the revised version (the reference [16] in the main paper is the reference [7] in the appendix).\n\n**Q2. The techniques to obtain stability in non-smooth convex loss are not new as it is already established in single-level problems [30].** \\\nA: Thanks. The techniques to obtain stability in non-smooth convex loss are originated from [1] (for convex and Lipschitz loss), and in this work we extend such techniques to the convex Holder smooth setting. Meanwhile, we also use the techniques from [30] to derive uniform argument stability for non-smooth convex loss (see our Theorem D.1 and Theorem D.2 in Appendix D.2.2), and compare the tightness of the stabilities obtained with different techniques in our Remark 3.\n\n**Q3. In lines 302, 322, and 344, Table A.2, should the “transfer error” be the “generalization gap**”? \\\nA: Sorry for the confusion. Although transfer error bound and the bound on the generalization gap are equivalent to some extent, “transfer error bound” always includes the empirical error term, whereas “the bound on the generalization gap” does not involve the empirical error term. We have clarified these notations in the revised version.\n\n**Q4. What is the step size used for the experiments presented in Section 6?** \\\nA: Thanks. We follow the same experimental setting in [8,20], and the initial learning rate in these works is set as $0.001$.\n\n**Q5. Since the experiment settings belong to a non-convex non-smooth loss function, it is not covered by the theoretical results**. \\\nA: Thanks. Although it is hard to validate the smoothness of the neural network model in our experiment, we assume $l_{2}$ loss is smooth and $l_{1}$ loss is non-smooth (w.r.t. parameter $w$). 
Therefore, our experiments can verify the generalization behavior of non-convex smooth losses. For the non-convex non-smooth loss function $l_{1}$, deriving its stability is still difficult and serves as one of our ongoing research directions. We conduct experiments with the $l_{1}$ loss to see whether there exists a difference in convergence performance between smooth and non-smooth functions.\n\n**Q6. In Table A.2, what is the difference between $\\gamma_n$ and $\\beta_n$?** \\\nA: Sorry for the confusion. Our explanations are two-fold: \\\n$(1)$ the $\\gamma_{n}$ in Table A.2 represents the uniform stability of meta algorithms, where the subscript $n$ means the number of training tasks; the $\\gamma_{m}$ represents the uniform stability of the inner-task algorithm, where the subscript $m$ means the number of samples per task. \\\n$(2)$ $\\beta_n$ represents the uniform argument stability defined in our Definition 3, and the subscript $n$ means the number of training tasks. We have added more explanations in the caption of Table A.2 in the revised version.\n",
" **Q1. The coefficient of Theorem 1 cited in this manuscript is slightly different from that in the reference [6]**? \\\nA: Thanks for pointing this out. The reason for the difference is that the uniform stability notion in our Theorem 1 is slightly different from the uniform stability notion in reference [6]. The details are as follows: \\\n$(1)$ the uniform stability in [6] is defined as the upper bound of the change of the loss when we REMOVE one sample from the dataset; while in our work, the uniform stability is defined as the upper bound of the change of the loss when we REPLACE one sample of the dataset. \\\n$(2)$ As shown in the discussion under the Eq.(7) of [6], if an algorithm has uniform stability $\\gamma$ (w.r.t. the exclusion of one sample), then the algorithm has uniform stability $2\\gamma$ (w.r.t. the change of one sample). Therefore, we replace the $2\\gamma$ factor (w.r.t. the exclusion of one point) in Theorem 12 of [6] with $\\gamma$ (w.r.t. the change of one point), leading to the bound in our Theorem 1.\n\n**Q2. In Appendix D1.1, Lemma D.1 does not mention the precondition of which is written in the subtitle. So what is this precondition for?** \\\nA: Sorry for the confusion. Our Lemma D.1 truly holds without the precondition $T > n$ written in the subtitle. The precondition $T > n$ in the subtitle indicates that the stability bound in our Lemma D.1 is more suitable for the case when $T > n$. When $T \\leq n$, we can derive a sharper stability bound in Lemma D.2 for convex Holder smooth function. We have added more explanations above Lemma D.1 in the revised version.\n\n**Q3. Whether the nonexpansiveness (line 71 in Appendix and line 223 in Appendix) of the projection operator needs to be proved**? \\\nA: Thanks. The proof of the nonexpansiveness of projection operator in Euclidean space is a little lengthy and unrelated to the theoretical results in this work. Therefore we omit the detailed proof of the nonexpansiveness and refer interested readers to Proposition 4.4 in the book “Convex Analysis and Monotone Operator Theory in Hilbert Spaces”. \n\n**Q4. The upper bound of the uniform argument stability in Theorem 3 in line 259 of the manuscript, and the upper bound in line 267 of the manuscript are not proved.** \\\nA: Thanks. The proof for the upper bound of the uniform argument stability in Theorem 3 (in line 259 of the manuscript) is deferred to line 229-233 in Appendix D.3. The proof for the upper bound in line 267 of the manuscript is deferred to line 237-240 in Appendix D.4.\n\n**Q5. In line 293 of the manuscript, the authors show the result of Theorem 1 in reference [8] is the same as the bound of Theorem 5 in this manuscript. Why do the authors think the result of this manuscript is better than that in reference** [8]? \\\nA: Thanks. Consider a scenario where the algorithmic stability $\\gamma$ of SGD has the order $O(1/\\sqrt{n})$ (such example can be found in the discussion under Eq.(2) in [19]), then the bound $O(\\gamma\\sqrt{n} + M/\\sqrt{n})$ in Theorem 1 of reference [8] will become vacuous (i.e., $O(\\gamma\\sqrt{n} + M/\\sqrt{n})=O(1)$), while our bound of $O(\\gamma\\ln{n} + M/\\sqrt{n}) )$ in Theorem 5 has the order of $O(\\ln{n}/\\sqrt{n})$ and still has an asymptotic guarantee. Therefore, our result in Theorem 5 is better than that in reference [8]. More explanations for our improvements can also be found in our Remark 4.\n\n**Reference**\n\n[6] O. Bousquet and A. Elisseeff. Stability and generalization. 
Journal of Machine Learning Research (JMLR), 2:499–526, 2002.\n\n[8] J. Chen, X. Wu, Y. Li, Q. LI, L. Zhan, and F. Chung. A closer look at the training strategy for modern meta-learning. In Conference on Neural Information Processing Systems (NeurIPS), pages 396–406, 2020.\n\n[19] V. Feldman and J. Vondrák. High probability generalization bounds for uniformly stable algorithms with nearly optimal rate. In Conference on Learning Theory (COLT), pages 1270–1279, 2019.\n",
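The factor-of-two relation invoked in A(Q1) above follows from a triangle inequality. Writing $S^{\setminus i}$ for $S$ with the $i$-th sample removed, $S^{i}$ for $S$ with the $i$-th sample replaced by an independent copy, and assuming $\gamma$-uniform stability with respect to removal as in [6], a sketch of the step reads:

```latex
\[
  \sup_{z}\,\bigl|\ell(\mathbf{A}(S),z) - \ell(\mathbf{A}(S^{i}),z)\bigr|
  \;\le\;
  \sup_{z}\,\bigl|\ell(\mathbf{A}(S),z) - \ell(\mathbf{A}(S^{\setminus i}),z)\bigr|
  \;+\;
  \sup_{z}\,\bigl|\ell(\mathbf{A}(S^{\setminus i}),z) - \ell(\mathbf{A}(S^{i}),z)\bigr|
  \;\le\; 2\gamma,
\]
```

since both $S$ and $S^{i}$ differ from $S^{\setminus i}$ by the insertion of a single sample. This is why a bound stated with $2\gamma$ in the removal convention of [6] can be restated with $\gamma$ in the replacement convention used in Theorem 1.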
" **Q1. My main concern is that the algorithmic stability is defined in a way that the whole dataset corresponding to a task changes. The results will not be tight with respect to the number of samples per task (m), as the authors point out in Remark 5**.\\\nA: Thanks. There are two main reasons for our generalization bounds unrelated to the number of samples per task ($m$):\\\n$(1)$ We rely on a basic assumption that the tasks are sampled from the same environment. \\\n$(2)$ Our algorithmic stability is defined in a way that the whole dataset corresponding to a task changes. \\\nActually, only under the task environment assumption, can we view the dataset in each training task as a training sample, treat single-task learning and episodic meta learning equally, and derive an optimal bound of $O(1/\\sqrt{n})$ for meta learning via algorithmic stability analysis. Therefore, our bound also reveals the limitation of the task environment assumption.\n\n**Q2. I would appreciate it if the authors discuss how they could potentially study the effect of m in generalization of meta-learning algorithms for smooth and nonsmooth functions**. \\\nA: Thanks for your comments. Our explanations are two-fold: \\\n$(1)$ Under the task environment assumption: actually we can extend the algorithmic stability notions in single-task learning (e.g. uniform stability [6], uniform argument stability [2,32], on-average stability [29], on-average model stability [30]) to the episodic meta learning setting by defining an algorithmic stability in a way that the whole dataset corresponding to a task changes (see our Definition 2 and Definition 3). However, no matter which stability notion we use, our Remark 5 tells us that the stability-based transfer error bound will not be tighter than $O(1/\\sqrt{n})$. Therefore, to derive sharper transfer error bound (e.g. of $O(1/\\sqrt{nm})$) for meta learning under the task environment assumption, we should leverage the tools of other theories (e.g. model-capacity theory in [5]), instead of the tool of algorithmic stability analysis. \\\n$(2)$ Without the task environment assumption: without such assumption, we cannot define the transfer error of a meta learning algorithm on the novel task, so we should focus on the excess risk bound on the novel task. In this case, we may define a more elaborate algorithmic stability notion in a way that the part (not the whole) of dataset in a task change, and may derive a sharper bound that is related to the number $m$ of samples per task (like the expected multi-task error bound of $O(\\frac{1}{nm})$ in [16]).\n\n**Reference**\n\n[2] R. Bassily, V. Feldman, C. Guzmán, and K. Talwar. Stability of stochastic gradient descent on nonsmooth convex losses. In Conference on Neural Information Processing Systems (NeurIPS), 2020.\n\n[5] S. Ben-David and R. Schuller. Exploiting task relatedness for mulitple task learning. In Conference on Learning Theory (COLT), pages 567–580, 2003.\n\n[6] O. Bousquet and A. Elisseeff. Stability and generalization. Journal of Machine Learning Research (JMLR), 2:499–526, 2002.\n\n[16] A. Fallah, A. Mokhtari, and A. E. Ozdaglar. Generalization of model-agnostic meta-learning algorithms: Recurring and unseen tasks. In Conference on Neural Information Processing Systems (NeurIPS), 2021.\n\n[29] I. Kuzborskij and C. H. Lampert. Data-dependent stability of stochastic gradient descent. In International Conference on Machine Learning (ICML), pages 2820–2829, 2018.\n\n[30] Y. Lei and Y. Ying. 
Fine-grained analysis of stability and generalization for stochastic gradient descent. In International Conference on Machine Learning (ICML), pages 5809–5819, 2020.\n\n[32] T. Liu, G. Lugosi, G. Neu, and D. Tao. Algorithmic stability and hypothesis complexity. In International Conference on Machine Learning (ICML), pages 2159–2167, 2017.\n",
" We thank all reviewers for their detailed reading and constructive comments. In the revision, we made the following major changes (rendered in purple in .pdf) regarding the reviewers' concerns: \n\n1. A new remark (Remark A.2 in Appendix A) explaining how we could potentially derive a sharper generalization bound w.r.t. $m$ for meta-learning algorithms (In response to the question by **Reviewer rSh4**); \n\n2. Clarification on the precondition of Lemma D.1 in Appendix D.1.1 (As suggested by **Reviewer Skjh**); \n\n3. A new remark (Remark A.1 in Appendix A) explaining the differences between our high-probability generalization bounds for meta learning and the in-expectation generalization bounds in [Fallah et al. NeurIPS2021]. (In response to Weakness 1 by **Reviewer DKCG**); \n\n4. Fixing important typos (e.g. use “Bounds on Generalization Gap” instead of “Transfer Error Bounds”) and giving more explanations for different algorithmic stability notions in Table A.2 (In response to Question 1 by **Reviewer DKCG**); \n\n5. Giving a more rigorous statement in our Remark 1 (In response to Minor Comments 8 by **Reviewer DKCG**); \n\n6. Clarifying the “deformed” transfer error bound in our Theorem 6 (In response to Question 1 by **Reviewer E4JU**); \n\n7. Clarifying the contribution (in our Introduction and Remark 6) of giving more comprehensive comparisons between the bounds for S/Q meta-learning and the bounds for ERM meta-learning (In response to Question 2 by **Reviewer E4JU**); \n\n8. Clarifying the citation of our Lemma E.1, including log terms inside $O(\\cdot)$ throughout the paper, and fixing important typos in Remark 6 (In response to Questions 3-4 and Minor Question 4 by **Reviewer E4JU**). \n\nBesides the above major changes, we also fixed other minor typos.\n",
" This paper studies the generalization of meta-learning algorithms. In particular, this paper has three contributions:\n1- Extending the stability analysis for convex smooth functions to convex Holder-smooth functions (similar to [30] for single task)\n2- Improving the stability analysis of nonconvex functions\n3- High probability generalization bounds\n\nIn particular, the authors show that the meta learning algorithms are less stable in the nonsmooth convex case, compared to smooth convex case. Overall, I find the paper and its results interesting to the community. My main concern is that the algorithmic stability is defined in a way that the whole dataset corresponding to a task changes. This makes the analysis simpler (as it would be closer to the single task case), but the results will not be tight with respect to the number of samples per task (m), as the authors point out in Remark 5.
\n I would appreciate it if the authors discuss how they could potentially study the effect of $m$ in generalization of meta-learning algorithms for smooth and nonsmooth functions. Yes.",
" This manuscript provide fine-grained analysis of stability and generalization for modern meta learning algorithms by considering more general situations including -Holder continue convex, smooth convex and smooth non-convex functions. First, the authors give the lower and upper bounds of the uniform argument stability for -Holder continue convex functions and smooth convex functions respectively to show that meta learning algorithms in the smooth convex case is more stable than that in the non-smooth convex case. Second, a tighter stability bound of than the existing bound is proved. Then, to show the advantage of S/Q episodic strategy for meta learning over traditional ERM strategy, the authors develop a near-optimal high-probability generalization bound , and further improve the bound to by considering Polyak-Łojasiewicz condition. Finally, a generalization bound for meta learning with dependent episodes is given. Pros\n1.The symbol description of the manuscript is relatively detailed.\n2.The manuscript comprehensively considers various function settings, and gives relatively tight results.\n3.The results of this manuscript are compared with the results of many previous works, and the authors show the situations where these results are better in detail.\n\nCons\n1.The coefficient of Theorem 1 cited in this manuscript is slightly different from that in the reference [6].\n2.In Appendix D1.1, Lemma D.1 does not mention the precondition of which is written in the subtitle. So what is this precondition for?\n3.At the beginning of the proof of Theorem 2 (line 71 in Appendix) and Theorem 3 (line 223 in Appendix) in this manuscript, the nonexpansiveness of projection operator is used. Whether the nonexpansiveness needs to be proved?\n\nIn summary, I believe this paper is interest. It provides some tighter stability bounds and generalization bounds for meta learning. 1. For the upper bound of the uniform argument stability in Theorem 3 in line 259 of the manuscript, there is not the prove of it in Appendix. Similarly, the upper bound in line 267 of the manuscript is also not proved.\n2. In line 293 of the manuscript, the authors show the result of Theorem 1 in reference [8] is and , that is, the bound is , which is the same as the bound of Theorem 5 in this manuscript. Why does the authors think the result of this manuscript is better that that in reference [8]?\n It seems no potential negative societal impact, sine this paper just considers theoretical properties of meta learning. ",
" \nThis paper studies stability and generalization in meta-learning. Specifically, it develops matching upper and lower bounds for non-smooth convex loss function with Holder continuous subgradients, as well as smooth convex and smooth nonconvex functions.\n\nThe paper also provides generalization bounds for both independent and dependent episode environments. It shows that in the independent episode environment, the generalization bound of $O(1/\\sqrt{n})$ is nearly optimal. And it can achieve $O(1/n)$ with an additional curvature (Polyak-Lojasiewicz) condition of the loss function.\n **Strengths**\n\n1. This is the first work that captures the effect of dependent episodes in generalization ability, which explains the empirical discovery that meta learning trained with independent episodes generalizes better than with dependent episodes.\n\n2. The consideration of the weakest curvature condition, Polyak-Lojasiewicz condition, instead of the strong-convexity condition for O(1/n) rate of generalization is interesting.\n\n3. This work develops stability of meta learning for both smooth and non-smooth loss functions. And conduct experiments with examples in both conditions to verify their theorems.\n\n4. The paper provides matching upper and lower bounds for stability to show the upper bound is tight. \n\n**Weaknesses**\n\n1. The comparison to [8,16] is not clear, which is the most relevant papers that also provide stability-based generalization bounds for meta learning.\n\n2. The techniques to obtain stability in non-smooth convex loss are not new as it is already established in single-level problems [30]. And the technique for lower bound is also not new. The paper directly assumes continuity or smoothness or convex assumptions for the outer level function w.r.t. the meta parameter w, therefore, the analysis of single-level ERM can be directly applied without many challenges that are uniquely caused by the bi-level (compositional) structure of the problem.\n\n\n3. The authors sometimes exchange the notions of “transfer error” and “generalization gap” in the paper. For example, in lines 302, 322, and 344, Table A.2, should the “transfer error” be the “generalization gap”?\n\n4. The stability bound in Theorem 4 requires step sizes $\\eta_j = O(\\frac{1}{j})$, while the convergence of a smooth nonconvex loss function for compositional or bilevel problems requires outer level step size is at least $\\eta_j = O(\\frac{1}{\\sqrt{j}})$ (see e.g. [1,2,3]). What is the step size used for the experiments presented in Section 6, where both the training error and generalization gap converge? This phenomenon should receive more explicit discussions.\n\n\n5. Since the experiment settings belong to a non-convex non-smooth loss function, it is not covered by the theoretical results that include smooth convex, smooth non-convex, and non-smooth convex functions. It would be better to conduct some simulations with the loss functions under the Assumptions used in the theoretical results to verify the theoretical rates more explicitly. Therefore, the experiments in Section 6 do not serve the purpose of verifying the theoretical claims. The observation that a non-smooth non-convex loss function has similar convergence rates in the generalization gap as the smooth non-convex loss function suggests further study and should be pointed out clearly in the paper.\n\n6. Some discussions are not clear to me, as detailed in “Questions”. 
\n\n[1] Closing the gap: tighter analysis of alternating stochastic gradient methods for bilevel problems\n\n[2] On the Convergence Theory of Gradient-Based Model-Agnostic Meta-Learning Algorithms\n\n[3] Theoretical Convergence of Multi-Step Model-Agnostic Meta-Learning\n\n\n **Major comments**\n\n1. Questions regarding Table A.2\n\n* Why in Table A.2, there is no comparison with [8,16], which is the most relevant paper that also provides stability-based generalization bounds for meta learning?\n\n* In Table A.2, what is the difference between \\gamma_n and \\beta_n should be stated more clearly. For example, the uniform stability definitions in those papers are the same or not? If they are the same, why don’t you use the same notation?\n\n* In Table A.2, the last column should be generalization gap bounds instead of transferring error bounds as it does not explicitly include the dependence of the training error \\hat{er}(\\cdot).\n\n\n2. Questions regarding the new uniform stability notion\n\n* Definition 2 is a stronger notion of uniform stability compared to single level problems and the definition in [8,16] for meta learning, since it requires changing both K points in the training set and q points in the test set. Why is it necessary to define the uniform stability of meta learning in this way instead of just changing one point in the training or test set?\n\n* Though the new transfer error bound is sharper, it requires the stronger uniform stability definition in Definition 2. Therefore it is not a very fair comparison with existing works such as [8].\n\n3. Questions regarding the notation A(S)(S)\n\n* The notation A(S)(S) is a bit confusing to me, what is the output of hypothesis A(S)(S) should be stated clearly. Is it the per-task (e.g. for the i-th task) hypothesis or the meta model hypothesis (w)? \n\n* Does this S belong to S or it can be any set S? Since I see A(S)(S), A(S)(S^{tr}), and A(S)(S^{tr}_i).\n\n4. Questions regarding remark 1\n\n* In Remark 1, line 236-237, how does Theorem 2 show the importance of good embedding to generalization? To show this, I would expect a term in Eq. (4) that is directly related to the representation or embedding error. I think the authors need to elaborate more on this point.\n\n\n**Minor comments**\n\n1. Should line 130: $|\\partial f(u,z) - \\partial f(v,z)| be \\|\\partial f(u,z) - \\partial f(v,z)\\|$ instead?\n\n2. What is S^{tr} in Eq. (2) and line 163, is it $S^{tr} = \\{ S^{tr}_i\\}_{i=1}^n$ or is it just a general notation of S^{tr}_i?\n\n3. In line 163-164, why is S ~ D^m and S ~ D_{\\tau} at the same time?\n\n4. In this paper, if I understand correctly, the number of episodes is equal to the number of tasks in meta training. This should be stated more clearly at the beginning of the paper, for example, move Algorithm 1 to the main text as it is important to understand the settings of the analysis.\n\n5. Should line 161-162: $\\mathbf{D}_{\\tau}$ be $\\mathbf{D}_{\\tau}^m$?\n\n6. It would be more interesting to add additional experiments on real-world data such as for few-shot image classification tasks.\n\n7. It would be better if the authors could add a theoretical curve of the generalization gap v.s. the number of tasks in Figure 1 and Figure 2.\n\n8. 
In Remark 1, lines 232-234, it is stated that [29, 30] have stability bounds that depend on the empirical risk, with which I do not agree, since they obtain dependence on either the population risk or the expected empirical risk; this is not directly the empirical risk minimized during optimization, as it involves an expectation over the data sample and the algorithm. Therefore, their theoretical results do not imply that a small training error leads to a small generalization error.\n\n9. Grammar\n\n* Line 160: as follow -> as follows \n\n* Line 423: stability notations -> stability notions\n \n n/a",
" This work provides a comprehensive stability analysis of gradient-based modern meta-learning algorithms, i.e. that use Support/Query (S/Q) splits of the task dataset for meta-training. In particular, the authors provide matching upper and lower stability bounds for convex loss functions with $(\\alpha, G)$-Holder subgradients and for smooth functions. The bounds show that in the non-smooth convex case, meta-learning algorithms are inherently less stable than in the smooth convex case. In particular, the lower bound in the non-smooth case is vacuous. In the smooth case, the bounds are sharper than previous analysis. Furthermore, they also develop a near-optimal generalization upper bound in high probability, which shows the advantage of S/Q training compared to empirical risk minimization (ERM). With the additional PL curvature condition, the authors derive a faster generalization bound. Finally, they derive bounds for the case where the episodes have a dependency relation encoded in a graph. Experiments on a synthetic regression problem with L1 and L2 loss validate the theoretical findings. Strengths:\n+ Original and significant results on S/Q meta-learning. In particular, I am not aware of lower bounds for this setting.\n+ Clearly written.\n\nWeaknesses:\n- Some incorrect and not novel claims. (See Question 1 and 2)\n- The case with vacuous lower-bound is not analyzed in the experiments.\n\n**Update after the discussion with the authors**\n\nThe authors properly addressed my main concerns by updating the manuscript. Therefore I increase the score from 6 to 7 Major question and comments.\n1. Theorem 6 does not contain a valid bound since in the RHS it has $(1+ \\eta) er$ instead of just $er$. From the analysis we can see that this result follows from [12, Theorem 1.2], which specifies that it is useful only when the empirical error is small, which might happen when dealing with overparameterized neural networks. The correct fast bound is [12, Theorem 1.1] which bounds the excess risk instead of the generalization error. I think this should be clarified and claims should be modified accordingly.\n2. The advantage of S/Q meta-learning over ERM has already been shown by [8]. I think readers might mistakenly think that this is a new result of the paper. I suggest removing this fact from the claims in the introduction and possibly cite [8] in Remark 6.\n3. Proof of Theorem 5 uses Lemma E.1 in the appendix, which should be equivalent to [5, Corollary 8]. However, such corollary is only an upper bound, the lower bound is derived in the subsequent section of [5]. I think this should be clarified and the proof should be expanded.\n4. I suggest the authors either include log terms inside $O()$ or use $\\tilde{O}(\\cdot)$.\n\n\nMinor questions and typos:\n1. Looking at [6, Theorem 12], I think you should replace $\\gamma$ with $2\\gamma$ in the bound of Theorem 1.\n2. Addiction -> addition.\n3. Theorem 1 corresponds to [8, Theorem 2], not [8, Theorem 1] \n4. In Remark 6 “m=1 in few-shot learning”. If m=1 is impossible to split the dataset in support and query. At least should be m=2 but in practice is larger than that.\n\nReferences:\n\n[5] O. Bousquet, Y. Klochkov, and N. Zhivotovskiy. Sharper bounds for uniformly stable algorithms. In Conference on Learning Theory (COLT), pages 610–626, 2020.\n\n[6] O. Bousquet and A. Elisseeff. Stability and generalization. Journal of Machine Learning Research (JMLR), 2:499–526, 2002.\n\n[8] J. Chen, X. Wu, Y. Li, Q. LI, L. Zhan, and F. Chung. 
A closer look at the training strategy for modern meta-learning. In Conference on Neural Information Processing Systems (NeurIPS), pages 396–406, 2020.\n\n[12] Y. Klochkov and N. Zhivotovskiy. Stability and deviation optimal risk bounds with convergence rate o(1/n). In Conference on Neural Information Processing Systems (NeurIPS), 2021.\n\n\n\n The authors adequately discuss the limitations of the analysis in Remark 5.\n"
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
2
] | [
"6KZolUIjzOI",
"ylMyQv6WXC",
"LPUqh5HhMd",
"dy83VBaw--",
"Yfe3WioyGf4",
"_nUS3yq5na",
"T5VgsijGoeGJ",
"fnGyC1A2e3P",
"3pPcuy1_i6Z",
"fnGyC1A2e3P",
"fnGyC1A2e3P",
"OTyjbmapI9C",
"goohdZaO6J",
"nips_2022_aAs8KTbZvc9",
"nips_2022_aAs8KTbZvc9",
"nips_2022_aAs8KTbZvc9",
"nips_2022_aAs8KTbZvc9",
"nips_2022_aAs8KTbZvc9"
] |
nips_2022_xaWO6bAY0xM | Rethinking Lipschitz Neural Networks and Certified Robustness: A Boolean Function Perspective | Designing neural networks with bounded Lipschitz constant is a promising way to obtain certifiably robust classifiers against adversarial examples. However, the relevant progress for the important $\ell_\infty$ perturbation setting is rather limited, and a principled understanding of how to design expressive $\ell_\infty$ Lipschitz networks is still lacking. In this paper, we bridge the gap by studying certified $\ell_\infty$ robustness from a novel perspective of representing Boolean functions. We derive two fundamental impossibility results that hold for any standard Lipschitz network: one for robust classification on finite datasets, and the other for Lipschitz function approximation. These results identify that networks built upon norm-bounded affine layers and Lipschitz activations intrinsically lose expressive power even in the two-dimensional case, and shed light on how recently proposed Lipschitz networks (e.g., GroupSort and $\ell_\infty$-distance nets) bypass these impossibilities by leveraging order statistic functions. Finally, based on these insights, we develop a unified Lipschitz network that generalizes prior works, and design a practical version that can be efficiently trained (making certified robust training free). Extensive experiments show that our approach is scalable, efficient, and consistently yields better certified robustness across multiple datasets and perturbation radii than prior Lipschitz networks. | Accept | The paper presents novel theoretical results and a novel architecture for designing Lipschitz constrained neural networks (with respect to the infinity norm). The authors have addressed all the concerns from the reviewers properly. All the reviewers agreed that the paper contains significant contributions and should be accepted at NeurIPS 2022. | train | [
"xYGx06fRjL9",
"-tybpQJZRTy",
"5tNunerdaeD",
"5D3-Y1f7oD0",
"VcUuDpaLntkm",
"noObnzZGIym",
"-wfZBo2O3ym",
"prM3p7aVdRF",
"SbNhFcz0IxZ",
"uTK-droDpzf",
"1GbWfjtlNw",
"xRwBlZjTNW9",
"CkaS5pPHcDr",
"hi34jv2_VtB"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the detailed discussion of the points that I raised.\n\nThe new theoretical result is exciting and helps to complete the previously presented theoretical work. The additional results, that provide the error bars, are also a great addition.\n\nThe discussion here on other $\\ell_p$ norms is interesting. I think this discussion would make a valuable addition to the paper. If not in the main text, then in the supplementary material.\n\nI will maintain my score.",
" Dear reviewer wXX6:\n\nWe believe we have incorporated most of your suggestions in the updated version. We would be grateful if you can confirm whether our response has addressed your concerns and please let us know if any questions remain. Thank you for your consideration!",
" We sincerely thank all the reviewers and the area chair for their efforts in reviewing our paper. We would like to take this opportunity to highlight that we have recently proved an important theoretical result conjectured in the initial submission, which we believe can further strengthen this work.\n\n**New results**. We successfully proved the conjecture raised in Section 3.4, showing that MaxMin networks require a depth of $\\Omega(d)$ to represent Boolean functions of $d$ variables. The proof is more challenging than other proofs in this paper and leverages the tool of Boolean circuit theory in computer science. Please see Section 3.4 in the updated version for a precise description as well as a proof sketch. Significance and further implications of this result are shown below:\n\n1. Such a lower bound is much stronger than the result of representing order statistics which requires a depth of $\\Omega(\\log_2 d)$ (Theorem 3.9).\n\n2. As a corollary, universal approximation is impossible if the depth of MaxMin networks is $o(d)$ (Corollary 3.11).\n\n3. Our proof discovers that the weights in a MaxMin network will inherently become sparse when learning Boolean functions, and MaxMin networks fit Boolean functions purely by using the biases and the activation. This further justifies the design of SortNet.\n\n4. We discovered an interesting relationship between MaxMin/$\\ell_\\infty$-distance networks and Boolean circuits. It links the certified robustness area to the field of theoretical computer science (TCS), which may inspire future works to solve more deep learning problems using various tools in TCS. Therefore, we believe this paper may have a broader impact beyond the certified robustness community.\n\n**We solved an open problem raised in a concurrent work**. A concurrent work arXiv:2204.06233 [1] investigated the expressive power of general 1-Lipschitz networks with piecewise linear activation (which they called the spline networks). They proved that using 1-Lipschitz piecewise linear activation with 3 linear regions, the corresponding network achieves the maximum expressive power compared with other Lipschitz activations and can approximate any one-dimensional 1-Lipschitz function. They pose the high dimension setting as an important open problem (in page 14). However, Theorem 3.6 addresses the open problem with a negative answer, stating that such networks are not expressive even for the two-dimensional setting.\n\n**Paper updates**. Below we highlight the major updates of the revised submission. All major changes in the main paper have been marked $\\textcolor{red}{\\text{red}}$.\n\n- Section 3.3, lines 211-217: add justification of order statistics as an important class of functions for measuring the expressive power of neural networks.\n\n- Section 3.3, lines 239-246: add discussions of the concurrent work [1] and show that we addressed an open problem in their paper.\n\n- Section 3.4, lines 288-310: add the new result (Theorem 3.10), which gives an $\\Omega(d)$ lower bound on the depth of MaxMin networks in representing Boolean functions.\n\n- Section 6, lines 458-470: add detailed discussions on the limitation of this paper.\n\n- Lines 471-473: discuss the broader impact of this paper.\n\n- Appendix B.7: give a proof of Theorem 3.10.\n\n[1] Approximation of Lipschitz functions using deep spline neural networks. arXiv preprint 2204.06233.",
" We sincerely thank Reviewer 6St4 for the careful reading, positive feedback, and detailed examination of our proofs. We have fixed the typos in the updated version.",
" We sincerely thank Reviewer P71v for the careful reading, valuable suggestions, and positive feedback. Below we would like to give detailed responses to each of your comments.\n\n**Providing confidence intervals**. Thanks for the suggestion. We have followed your advice and run multiple experiments on MNIST and CIFAR-10 during these days. Due to the time limit, we have not completed the ImageNet experiments yet. The current results are shown as follows, where we present the results of five independent runs and report the 95% confidence interval. These results are run on NVIDIA RTX 3090 GPUs.\n\nMNIST ($\\epsilon=0.1$):\n\n| | Exp1 | Exp2 | Exp3 | Exp4 | Exp5 | Confidence interval |\n| ---- | ---- | ---- | ---- | ---- | ---- | ------------------- |\n| Clean | 99.13 | 99.03 | 99.08 | 99.02 | 99.00 | 99.05 ± 0.04 |\n| Certified | 98.22 | 98.05 | 98.12 | 98.26 | 98.14 | 98.16 ± 0.07 |\n\nMNIST ($\\epsilon=0.3$):\n\n| | Exp1 | Exp2 | Exp3 | Exp4 | Exp5 | Confidence interval |\n| ---- | ---- | ---- | ---- | ---- | ---- | ------------------- |\n| Clean | 98.58 | 98.56 | 98.54 | 98.54 | 98.46 | 98.54 ± 0.04 |\n| Certified | 93.60 | 93.43 | 93.45 | 93.56 | 93.40 | 93.49 ± 0.08 |\n\nCIFAR-10 (SortNet $\\epsilon=2/255$):\n\n| | Exp1 | Exp2 | Exp3 | Exp4 | Exp5 | Confidence interval |\n| ---- | ---- | ---- | ---- | ---- | ---- | ------------------- |\n| Clean | 65.86 | 65.96 | 65.69 | 65.98 | 65.79 | 65.86 ± 0.11 |\n| Certified | 56.65 | 56.67 | 56.20 | 56.23 | 56.29 | 56.41 ± 0.20 |\n\nCIFAR-10 (SortNet $\\epsilon=8/255$):\n\n| | Exp1 | Exp2 | Exp3 | Exp4 | Exp5 | Confidence interval |\n| ---- | ---- | ---- | ---- | ---- | ---- | ------------------- |\n| Clean | 54.84 | 54.30 | 54.59 | 54.28 | 54.70 | 54.54 ± 0.22 |\n| Certified | 40.39 | 40.13 | 40.14 | 39.96 | 40.05 | 40.13 ± 0.14 |\n\nCIFAR-10 (SortNet+MLP $\\epsilon=2/255$):\n\n| | Exp1 | Exp2 | Exp3 | Exp4 | Exp5 | Confidence interval |\n| ---- | ---- | ---- | ---- | ---- | ---- | ------------------- |\n| Clean | 67.64 | 67.72 | 67.57 | 67.59 | 67.72 | 67.65 ± 0.06 |\n| Certified | 56.80 | 56.94 | 56.70 | 56.63 | 56.30 | 56.67 ± 0.21 |\n\nCIFAR-10 (SortNet+MLP $\\epsilon=8/255$):\n\n| | Exp1 | Exp2 | Exp3 | Exp4 | Exp5 | Confidence interval |\n| ---- | ---- | ---- | ---- | ---- | ---- | ------------------- |\n| Clean | 54.80 | 54.13 | 54.33 | 54.02 | 54.39 | 54.33 ± 0.26 |\n| Certified | 39.56 | 39.99 | 39.76 | 39.45 | 39.70 | 39.70 ± 0.18 |\n\nImageNet (SortNet+MLP $\\epsilon=1/255$):\n\n| | Exp1 | Exp2 | Exp3 |\n| ---- | ---- | ---- | ---- |\n| Clean | 13.21 | 13.48 | 13.37 |\n| Certified | 9.00 | 9.02 | 8.97 |\n\nIt can be seen that in most settings, we still significantly outperform other baselines of this paper. The only exception is the CIFAR-10 ($\\epsilon=8/255$) setting, where SortNet still achieves the best but the gap is within the confidence interval. For the ImageNet dataset, we find the variance of different runs to be very small possibly because the test set has 50000 images (10x more than MNIST and CIFAR-10). We will update these results in our paper once the ImageNet results are ready.",
" **The improvements over $\\ell_\\infty$-distance nets**. In this paper, we performed thorough experiments with 6 different settings:\n\n- (a) MNIST $\\epsilon=0.1$\n- (b) MNIST $\\epsilon=0.3$\n- (c) CIFAR-10 $\\epsilon=2/255$\n- (d) CIFAR-10 $\\epsilon=8/255$\n- (e) TinyImageNet $\\epsilon=1/255$\n- (f) ImageNet $\\epsilon=1/255$\n\nAs can be seen, out of these 6 settings on 4 datasets, SortNet consistently outperforms $\\ell_\\infty$-distance net across 5 settings. The certified accuracy gap of the settings (c),(e),(f) are all prominent (i.e. $>1.8\\%$). For MNIST, since the performance of $\\ell_\\infty$-distance is already above 90% (even approaching 98%), we believe the current gap is also significant. Finally, for setting (d), SortNet at least matches the performance of $\\ell_\\infty$-distance net while being faster to train. Considering that the problem of certified $\\ell_\\infty$ robustness is very challenging, we think overall the improvement over $\\ell_\\infty$-distance nets is significant. Nevertheless, the theoretical contribution in this paper is more important, and we believe we do not overemphasize the performance comparison with $\\ell_\\infty$-distance network in this paper.\n\n**Theoretical advantage of SortNet vs $\\ell_\\infty$-distance network**. Thanks for the good question. One advantage of the general SortNet is that it can precisely represent piecewise linear 1-Lipschitz functions, while $\\ell_\\infty$-distance nets cannot. Indeed, a straightforward calculation shows that the gradient norm of $\\ell_\\infty$-distance nets with respect to the input must be exactly one, and the gradient is sparse and has exactly one non-zero element. Therefore, when fitting piecewise linear 1-Lipschitz functions using $\\ell_\\infty$-distance nets, the learned function will have a ''jagged'' shape. As a result, the approximation error cannot be zero and depends on the network size. On the other hand, SortNet with a finite size can perfectly fit such functions.\n\n**Clarification of line 34**. Sorry for the misleading. The previous work of Anil et al. achieved 79% certified accuracy under $\\epsilon=0.1$ and 2% accuracy under $\\epsilon=0.3$ on MNIST according to their experiments, which we think can be largely improved. We have changed the words ''far from'' to the word ''not''. We believe the modified statement would be appropriate.\n\n**Miscellaneous**. We have discussed these references with our work in the updated version. We also thank the reviewer for pointing out the typo and we have corrected it.\n\nWe hope our response can clarify your concerns. We are happy to go into more detail regarding any of them and we look forward to your reply.",
" We sincerely thank Reviewer G1MR for the insightful comments, valuable suggestions, and positive feedback. Below we would like to give detailed responses to each of your comments.\n\n**Why do MaxMin nets require $O(d)$ scaling**? Thanks for the question. We are excited to show you that we have successfully proved this conjecture during these days (see Appendix B.7 in the updated version). The proof is quite non-trivial and brings new insights into how MaxMin networks express Boolean functions, which we believe is novel and interesting. We give a brief proof sketch below, which can also be found in Section 3.4 in the updated version.\n\nThe key insight is that for any Boolean function, if it can be represented by some MaxMin network $f$, it can also be represented by a *special* MaxMin network with the same topology as $f$, such that all the weight vectors $w$ are sparse with at most one non-zero element, either 1 or -1 (Lemma B.22 and Corollary B.23). We prove this lemma by induction on the depth of the network. The induction step is divided into two parts and five sub-cases, for each of which we give an explicit construction of the network parameters.\n\nCorollary B.23 implies that weight vectors only perform the *neuron selection* operation and *have no use* in representing Boolean functions. Therefore, MaxMin networks reduce to *2-ary Boolean circuits*, i.e., directed acyclic graphs whose internal nodes are logical gates including NOT and the 2-ary AND/OR. Note that for a 2-ary Boolean circuit with $M$ layers and a scalar output, the number of nodes will not exceed $2^{M+1}-1$ (achieved by a complete binary tree). However, the classic result in Boolean circuit theory (Shannon 1942) showed that for most Boolean functions of $d$ variables, a lower bound on the minimum size of 2-ary Boolean circuits is $\\Omega(2^d/d)$, which thus yields $M=\\Omega(d)$ and concludes the proof.\n\nWe also remark that as a corollary, $M$-layer MaxMin networks are not universal approximators for the $d$-dimensional 1-Lipschitz function class if $M=o(d)$.\n\n**Regarding fixing the weights in SortNet**. Thanks for the good question. The above proof may give insights into this question and further justify the design of SortNet. In particular, we prove that when using MaxMin networks to express Boolean functions, the learned weights will inherently be sparse, and the role of biases is much more important than the weights. We suspect such a result may transfer to general SortNet architectures with absolute value activation. In practice, while the learned function is clearly not Boolean-valued, we believe the intuition still makes sense: the learned weight may still have a certain degree of sparsity, and the biases are more important than the weights. This possibly justifies that fixing the weights to geometric series is reasonable. Nevertheless, it is still likely that better results can be achieved using trainable weights. We will study how to design efficient training strategies for the general setting in future work.",
" **Including error bars would strengthen the empirical results further**. Thanks for the suggestion. We have run multiple experiments on MNIST and CIFAR-10 during these days. Due to the time limit, we have not completed the ImageNet experiments yet. The current results are shown as follows:\n\nMNIST ($\\epsilon=0.1$):\n\n| | Exp1 | Exp2 | Exp3 | Exp4 | Exp5 | Confidence interval |\n| ---- | ---- | ---- | ---- | ---- | ---- | ------------------- |\n| Clean | 99.13 | 99.03 | 99.08 | 99.02 | 99.00 | 99.05 ± 0.04 |\n| Certified | 98.22 | 98.05 | 98.12 | 98.26 | 98.14 | 98.16 ± 0.07 |\n\nMNIST ($\\epsilon=0.3$):\n\n| | Exp1 | Exp2 | Exp3 | Exp4 | Exp5 | Confidence interval |\n| ---- | ---- | ---- | ---- | ---- | ---- | ------------------- |\n| Clean | 98.58 | 98.56 | 98.54 | 98.54 | 98.46 | 98.54 ± 0.04 |\n| Certified | 93.60 | 93.43 | 93.45 | 93.56 | 93.40 | 93.49 ± 0.08 |\n\nCIFAR-10 (SortNet $\\epsilon=2/255$):\n\n| | Exp1 | Exp2 | Exp3 | Exp4 | Exp5 | Confidence interval |\n| ---- | ---- | ---- | ---- | ---- | ---- | ------------------- |\n| Clean | 65.86 | 65.96 | 65.69 | 65.98 | 65.79 | 65.86 ± 0.11 |\n| Certified | 56.65 | 56.67 | 56.20 | 56.23 | 56.29 | 56.41 ± 0.20 |\n\nCIFAR-10 (SortNet $\\epsilon=8/255$):\n\n| | Exp1 | Exp2 | Exp3 | Exp4 | Exp5 | Confidence interval |\n| ---- | ---- | ---- | ---- | ---- | ---- | ------------------- |\n| Clean | 54.84 | 54.30 | 54.59 | 54.28 | 54.70 | 54.54 ± 0.22 |\n| Certified | 40.39 | 40.13 | 40.14 | 39.96 | 40.05 | 40.13 ± 0.14 |\n\nCIFAR-10 (SortNet+MLP $\\epsilon=2/255$):\n\n| | Exp1 | Exp2 | Exp3 | Exp4 | Exp5 | Confidence interval |\n| ---- | ---- | ---- | ---- | ---- | ---- | ------------------- |\n| Clean | 67.64 | 67.72 | 67.57 | 67.59 | 67.72 | 67.65 ± 0.06 |\n| Certified | 56.80 | 56.94 | 56.70 | 56.63 | 56.30 | 56.67 ± 0.21 |\n\nCIFAR-10 (SortNet+MLP $\\epsilon=8/255$):\n\n| | Exp1 | Exp2 | Exp3 | Exp4 | Exp5 | Confidence interval |\n| ---- | ---- | ---- | ---- | ---- | ---- | ------------------- |\n| Clean | 54.80 | 54.13 | 54.33 | 54.02 | 54.39 | 54.33 ± 0.26 |\n| Certified | 39.56 | 39.99 | 39.76 | 39.45 | 39.70 | 39.70 ± 0.18 |\n\nImageNet (SortNet+MLP $\\epsilon=1/255$):\n\n| | Exp1 | Exp2 | Exp3 |\n| ---- | ---- | ---- | ---- |\n| Clean | 13.21 | 13.48 | 13.37 |\n| Certified | 9.00 | 9.02 | 8.97 |\n\nWe will update these results in our paper once the ImageNet results are ready.",
" **Other $\\ell_p$ norms**. Indeed, it is a good question whether our results can be generalized to other $\\ell_p$-norms. In the updated version, we have discussed the $\\ell_p$ case in detail in lines 458-462. We believe the main impossibility results should approximately hold when $p$ is large, and we will rigorously write it down in future work. However, it definitely does not apply in the standard $\\ell_2$-norm: in this case MaxMin is equivalent to the absolute value function in terms of expressive power, as pointed out by Anil et al. [1] (also by Reviewer G1MR), and empirical results suggest that these $\\ell_2$ Lipschitz networks are expressive [2] (although it is still a fantastic open problem to formally prove that they are universal approximators).\n\nTherefore, our results reflect an interesting ``phase transition'' in the expressive power of standard Lipschitz networks when $p$ is switched from 2 to a large number. Coincidentally, a similar limitation is also proved when using randomized smoothing, which suffers from the curse of dimensionality when $p>2$ [3]. This raises an interesting question of why the effect of $p$ is very similar for both methods and how things change as $p$ increases. We will investigate these aspects in future work.\n\n**Regarding convolutional architectures**. The proposed network can be applied to the convolutional architecture by treating the image pixels in each convolutional window as the input vector and using shared weights and biases. We have tried the convolutional architecture in ImageNet-like experiments (see Table 5 in Appendix E.2) using a simple architectural design, and the results can be improved (SortNet 2x in Table 3). For future work, we are interested in designing better convolutional architectures with higher performance.\n\n**Minor Comments**. Thanks for pointing out these typos. We have corrected them in the updated version. \n\nThe MaxMin activation and absolute value activation are proved to be equivalent for the $\\ell_2$-norm in Anil et al. [1]. But for the $\\ell_\\infty$-norm, Anil et al. did not show the equivalence between these two activations. In this paper, we prove that they are actually *not* equivalent, and the MaxMin activation is *strictly* more powerful than the absolute value in the $\\ell_\\infty$-norm case. \n\n**Limitations**. Thanks for the suggestion. We have followed your advice and discussed the limitations in detail in the updated version. We believe most of them in your list are present, e.g., general $\\ell_p$-norm perturbations, margin-based certification, architectures beyond fully-connected networks, and using learnable weights. We also discussed the potential broader impact of this paper. \n\n[1] Sorting out Lipschitz function approximation. ICML 2019.\n\n[2] Improved deterministic l2 robustness on CIFAR-10 and CIFAR-100. ICLR 2022.\n\n[3] Randomized Smoothing of All Shapes and Sizes. ICML 2020.",
" We thank Reviewer wXX6 for the comments and suggestions. We have followed your advice and revised our submission accordingly. In particular, we can elaborate further on the motivation of each theorem given the extra one page. We have also modified the introduction part by splitting it into different paragraphs with paragraph names as bold as you suggested.\n\nRegarding the problem relevance, there have been a large number of recent advances in certified robustness, which successfully enable network training scaling up to **ImageNet**-like dataset. For example, using randomized smoothing [1] one can efficiently train a deep neural network (such as ResNet50) on Imagenet with good robustness guarantees under $\\ell_2$ perturbation, and a recent work even achieved 71% top-1 accuracy under $\\epsilon= 0.5$ [2]. There are also many papers focusing on certified robustness using Lipschitz networks this year, e.g. [3] (ICLR22 spotlight), [4] (ICLR22), [5] (ICML22). We believe this direction is promising and there is a high opportunity that we can learn robust models with certified guarantees for both $\\ell_2$/$\\ell_\\infty$ perturbations. \n\nWe hope our response as well as the new version of this paper can clarify your concerns. We are happy to go into more detail regarding any of them and we look forward to your reply.\n\n[1] Certified Adversarial Robustness via Randomized Smoothing. ICML 2019.\n\n[2] (Certified!!) adversarial robustness for free! arxiv preprint 2206.10550.\n\n[3] Improved deterministic l2 robustness on cifar-10 and cifar-100. ICLR 2022.\n\n[4] Boosting the Certified Robustness of L-infinity Distance Nets. ICLR 2022.\n\n[5] A dynamical system perspective for Lipschitz neural networks. ICML 2022.",
" 1. Problem/Motivation\n- The paper looks into the relationship between certified robustness and Lipschitz continuity for the $l_\\infty$ case. \n- It has been observed that Lipschitz networks don't work for $l_\\infty$ as well as they work for $l_2$ setting. However, recent work from Zhang et al. created some particular 1-Lipschitz network which has good robustness.\n- The authors try to understand this phenomenon.\n\n2. Methodology/Technical Finding\n\nA) Methodology\n- Authors study this question by using discrete Boolean functions and prove some impossibility results (negative results) for this setting. \n- This relates to the difficulty in training certifiably robust Lipschitz network for the $l_\\infty$ perturbations setting.\n\nB) Explaining performance of recent Lipschitz networks (Zhang et al.)\n- Authors use the above machinery to examine how some recent networks for this setting work well thus justifying their empirical performance.\n\nC) Designing even better Lipschitz networks\n- From the above insights, authors propose new networks.\n\n3. Experiments\n- Authors show the empirical performance of the proposed SortNet, where the methods perform almost at par (maybe slightly better) than recent methods.\n\n########## POST REBUTTAL ##############\nAfter going through other reviews and the authors rebuttal, I have decided to keep my score as it is (borderline accept) and not increase the rating. My main concern remains the same, that we need to move beyond theoretically convenient settings, to scale to networks used in practise.\nThe authors shared 2 threads of work, which I am aware of. Here are my concerns with them:\n- Randomised smoothing: The test time cost of randomised smoothing is very high as it needs to run for many perturbations. Thus even if they work on large datasets, the methods are not practical.\n- Certified robustness using Lipschitz networks: Are we able to convince the practitioners to use Lipschitz networks? I have given a thorough summary in the above box. \n\nStrengths\n- The technical part of the paper is solid. Authors have used Boolean functions to explain the workings/robustness of Lipschitz networks. \n\nWeaknesses:\n1. Writing: The method section can be improved. It is a bit dense and hard to follow. I have provided suggestions for improvement in the Questions box.\n2. Problem Relevance: I am not sure about this whole line of work. Certified robustness has been there for a while now. It is not scaling up and there is less chance of scaling up. So, I am not sure if this is so relevant for the community. These are suggestions regarding writing. I hope it is useful for authors.\n1. Better sectioning\n- Look at the summary I wrote and how I divided into subgroups. If you do so throughout the paper, it would be great. Start from the introduction where you can have these as different paragraphs with paragraph names as bold.\n- Then match them with same names (either as paragraphs or subsections) in the method section.\n\n2. In the method section, before bringing a theorem, you should try to motivate the theorem. More space should be spent of motivation and linking theorems to the whole them or theme of the subgroup (see point 1). The work addresses Certified Robustness, which is on the reliability line of work, thus positive societal impact.",
" This paper presents novel theoretical results and a novel architecture for designing Lipschitz constrained neural networks (with respect to the infinity norm). The theoretical results provide new insights on two prior approaches and give a strong justification for the method introduced by the authors. The authors also provide a novel stochastic approximation to train their model efficiently. If I understand correctly, this comes at with a cost to expressiveness but is seemingly insignificant. Overall, I found this paper to be clearly written and with significant theoretical and empirical results.\n\nThe theoretical results are thorough and exciting. Unlike some prior work (e.g. Anil et al.) the authors provide (lower) bounds on approximation quality in addition to exact completeness results. Theorems 3.5 and 3.6 show that standard Lipschitz neural networks fail to achieve robustness as input dimensionality increases for simple 1-Lipschitz functions. These results are not surprising but are more concrete than prior results that I am aware of. Theorem 3.8 then gives a finite-dimensional completeness result for the distance nets that were introduced in prior work (though universal approximation results have already been shown, e.g. Zhang et al. 2021). Theorem 3.9 then shows that the previously introduced MaxMin networks require depth that grows with input dimension to approximate the order statistics accurately (confirming a negative result that had been explored empirically by Anil et al. 2019 and theoretically by Huster et al. 2019).\n\nEmpirically, the introduced SortNet architecture shows excellent performance (relative to the class of models it is compared against). SortNet typically achieves clean accuracy that is comparable to the clean accuracy of other certified networks but achieves improved certified robustness across the board. Importantly, it is also faster to train and compute certified bounds than other methods. ## Strengths\n\nThe theoretical results are well presented and complete, providing upper and lower bounds on approximation error and highlighting order statistics as an important class of functions. There is more to do in this space. For example, the authors show that order statistics are hard to learn for MaxMin networks but why is this important for classification tasks? But I consider the paper to address all reasonable areas of interest in sufficient depth.\n\nI found the SortNet architecture compelling and the stochastic approximation of the linear function of the order statistics is very neat. The dropout connection is also interesting (and perhaps nested dropout is worth a mention).\n\nThe empirical results are thorough and compare across the appropriate metrics for the tasks considered. As always, including error bars would strengthen the empirical results further.\n\n## Weaknesses\n\nThe authors address only the infinity norm and make no claims on how these results may generalize to other l_p norms beyond section 3.1.\n\nThe authors results are only (obviously) applicable to fully-connected networks. This is a limitation which is shared by other work in the area, but not exclusively (Li et al. 2019 study 1-Lipschitz convolutions with respect to the 2-norm). It is unclear whether incorporating convolutional layers would improve performance on tasks which typically demand them (ImageNet).\n\nGenerally, it seems that the larger community has disengaged a little with norm-bounded threat models for adversarial robustness. 
Generally, it seems that the larger community has disengaged a little from norm-bounded threat models for adversarial robustness. This _is_ still an active area and I personally believe that certified robustness guarantees in this model remain an important topic of study. However, I reduce my score a little, in part, because I do not think the empirical results presented here would be enough to make waves in the deep learning community more broadly. \"Based on our knowledge of Boolean circuit theory\" --- as somebody without this knowledge, I'm keen to know a little more about this. Can you provide some brief intuition (in your response, and in the paper) for why MaxMin nets require O(d) scaling while the distance nets get away with constant depth?\n\nThe SortNet architecture introduces the weights as network parameters. To make the network training efficient, the weights are fixed to a geometric series. This contains the distance nets as a special case, but once the weights are fixed, the networks are, assuming an identity activation, less expressive than GroupSort networks, which are able to learn the weights alongside using the order statistics. Is learning the weights entirely unimportant? Is it beneficial to fix them?\n\n\n### Minor comments\n\nIn line 227 you refer to the absolute value function as an example of a GNP element-wise activation function. This is just a small comment that absolute value and MaxMin are equivalent in terms of expressivity (at least in both the 2-norm and infinity-norm settings). See Appendix A.3 in Anil et al. (it is shown explicitly for the 2-norm).\n\nL198-199: The negative answer is only shown for so-called standard 1-Lipschitz neural networks.\n\nL170: \"Lipschitz\" -> \"1-Lipschitz\"\n\nL274: \"MinMax networks\" should be \"MaxMin\" networks, as elsewhere in the paper The authors do discuss limitations, but only very briefly. So far as I can tell, they only discuss:\n\n- Not proving a tight upper bound on MaxMin networks for general Boolean functions\n- Not treating rho as a learnable parameter\n\nI feel that there are more details to be discussed here, some of which I raise in this review. A broader discussion of the limitations of this research relative to related work would be appreciated. Not all of these may be necessary, but here are some areas that could be discussed:\n- Architecture limited to fully-connected networks\n- Results/architecture only applicable under l_p norm constraints [though I believe some of the same ideas could be transferred more generally]\n- Limitations of norm-bounded threat models for robustness in practice\n- Limitations of margin-based robustness guarantees (in terms of training and certification)\n- Fixed weights when using stochastic approximation (related to fixed rho, but more general)",
" The authors present a generalization of some recently proposed variants of $\\ell_\\infty$-Lipschitz networks, i.e., networks that by design are 1-Lipschitz with respect to the $\\ell_\\infty$-norm, that come with guarantees of expressivity. This type of networks are useful as they automatically come with certificates of robustness without requiring expensive additional procedures, however some previous proposed designs suffered from low expressivity or difficulties during training.\n\nThe authors also present a stronger versions of a negative results on approximation of Lipschitz functions by standard Lipschitz neural networks (those with bounded weight matrices and element-wise 1-Lipschitz activations), as well as new negative results on the expressivity of MaxMin networks.\n\nThe proposed architecture is expensive to evaluate and hence, the authors propose to replace the layers with a stochastic approximation that drastically reduces the computation. The performance of the proposed approach is compared with state-of-the-art baselines showing promising but inconclusive results. Remarkably, the proposed architecture can be trained in a reasonable amount of time on the TinyImagenet and Imagenet datasets, which previously proposed methods found to be challenging.\n\n*** after rebuttal ***\nAuthors have addressed most of my concerns. I am inclined to increase my score. Will expand later on why. **Originality**: The paper builds upon previous work that introduced the GroupSort activation and another work that introduced $\\ell_\\infty$-distance networks, but generalizes both approaches to a more expressive architecture. The expressivity claim is backed by formal approximation results that show, in particular, that the more computationally efficient version of the GroupSort-activation network, the MaxMin network still has limited expressivity. To my knowledge, such results are new. Another new result is that the $\\ell_\\infty$-distance network can approximate any discrete boolean function, a result that was not part of the original paper introducing such architecture. In contrast proposition 3.1 is well-known (or a slightly less general version) but is provided mostly for completeness.\n\n**Quality** + **Clarity**: The work backs up all claims with carefully presented proofs. Indeed the readability of the paper is vastly above the average subsmission and overall, the statements, proofs and high level ideas of the paper are easy to grasp, even though the proofs have a high level of technical difficulty. This is a major strength of this work. Nevertheless, we also find what in my opinion is the main weakness of this paper. Sadly, **the accuracy/robustness/certified-robustness numbers in the experimental evaluation are provided without a confidence interval**. Because some of these numbers appear to improve over the baselines by small amounts, it is not clear if such improvements are significant enough to warrant presenting them in **bold** and claiming a real improvement.\n\nPerhaps the authors have had time to do multiple runs between the time of submission and rebuttal period, and could clarify if the results do not appear to be better only by chance. One issue is that such experiments are costly to evaluate, but confidence intervals, at least for the smaller scale datasets like MNIST, CIFAR10, multiple runs should be standard and I think it is reasonable to expect them. On the other hand, the time numbers are clearly lower and one would not expect the randomness to drastically change them. 
Another good thing is that the biggest increase in performance appears to be on the ImageNet dataset, which is usually considered more significant. The inclusion of confidence intervals at least for MNIST+CIFAR10 would definitely improve my score.\n\n**Significance**: The certified robustness problem is a long-standing one, and understanding the limits of what can be achieved is significant. Even though $\ell_\infty$-robustness is only a simplified model of real threats to computer vision systems, it can be seen as a way to devise more robust models in practice. Having a general architecture that subsumes previous promising Lipschitz networks, and showing it is expressive enough to represent all Boolean functions while being able to train it in a reasonable time, seems significant enough. However, it was not so clear to me what the precise theoretical advantage of SortNet (this work) over the $\ell_\infty$-distance network is. I think no theoretical advantage was presented; rather, it was shown through experiments that SortNet is somehow easier to train.\n\n**miscellaneous:**\n1. missing references [A] (for the $\ell_2$-metric) and [B] (for the $\ell_\infty$-metric), which are methods to compute the Lipschitz constant of neural networks that can be used to regularize the loss and obtain certified robustness.\n2. Proof between lines 729-730 in the appendix: \"With loss of generality\" -> \"Without loss of generality\" (I think this is what you meant)\n\n**References**:\n[A] Efficient and accurate estimation of Lipschitz constants for deep neural networks.\nMahyar Fazlyab, Alexander Robey, Hamed Hassani, Manfred Morari, George J. Pappas. NeurIPS 2019\n\n[B] Lipschitz constant estimation of Neural Networks via sparse polynomial optimization \nFabian Latorre, Paul Rolland, Volkan Cevher. ICLR 2020\n\n 1. can you provide confidence intervals for the numbers in the experimental section?\n2. could you better back up the claim that the improvements over $\ell_\infty$-distance nets are significant?\n3. Could you clarify if there is a theoretical advantage of SortNet over the $\ell_\infty$-distance net?\n4. Could you clarify in line 34 what you mean by *far from satisfactory*? Is there a number that would be considered *satisfactory*? One limitation that I think is not mentioned is that, by introducing a stochastic estimation *between layers*, the gradient at a realization of the stochastic variables might not constitute a stochastic gradient of the full loss, and hence the optimization algorithm falls outside the SGD paradigm. Nevertheless, in practice it looks like the performance obtained by this training procedure (similar to dropout) is enough to justify its heuristic nature.\n"
" The paper studies certifiably robust Lipschitz continuous neural networks. To this end they first prove a couple of impossibility results for standard Lipschitz networks, i.e., feedforward networks of the form $x_{k+1} = \\sigma(W_k x_k + b_k)$ where the activation $\\sigma$ is $1$-Lipschitz and the weight matrices have unit norm. Their first results deal with the approximation of Boolean functions $g^B : \\lbrace 0,1\\rbrace^d\\to\\lbrace 0,1\\rbrace$ and prove that no standard Lipschitz networks can achieve a robustness radius of more than $1/2d$, which obviously deteriorates in high dimensions. Consequently, these impossibility results are transferred to general Lipschitz functions by proving that the order statistics, the 1-Lipschitz function taking its input $x$ to the $k$-th largest component $x_{(k)}$, cannot be approximated by a standard 1-Lipschitz network.\n\nThe authors then continue the discussion by studying GroupSort and $\\ell_\\infty$-distance architectures, which are non-standard and are known to be universal Lipschitz function approximators. They prove that the simplified GroupSort architecture called MaxMin, which is commonly used since it's feasible to train, is NO universal Lipschitz function approximator. Furthermore, they prove that $\\ell_\\infty$-distance architectures can exactly represent certain Boolean functions and order statistics. \n\nMotivated by these insights they suggest a new architecture, SortNet, which has both GroupSort and $\\ell_\\infty$-distance networks as special cases. In particular, it is a universal Lipschitz function approximator. Then, they suggest a practical version of of SortNet which replaces the expensive sorting operation by an unbiased estimator which just evaluates the maximum of a masked vector. In contrast to the MaxMin simplifcation, their simplified SortNet is still an universal approximator.\n\nThey conclude with numerical results which show that the proposed SortNet architecture is consistently slightly to significantly better than $\\ell_\\infty$-distance networks and significantly better than GroupSort. \n\n######POST-REBUTTAL######\nI increased my score to strong accept. Strengths:\n\nThe paper is very well-written and organized. All proofs are deferred to an appendix but at the same time they are concise, clear, and correct (I check most of them, not all). The paper has a clear golden thread: first, impossibility results for standard networks are proved, then theoretical explanations for the success and failure of the known GroupSort and $\\ell_\\infty$-distance networks are proved, finally a novel and unifying architecture is suggested and evaluated.\n\nWeaknesses: \n\nN/A I only have very few minor comments which could further improve the presentation:\n\n- p.5, l.198: \"whether they can approximate all Lipschitz functions\" should be \"whether they can approximate all 1-Lipschitz functions\"\n- bibliography: capitalization should be fixed, e.g., lipschitz, Mnist, Mma, etc...\n- p.21, l.862: blank missing in \"some k\" N/A"
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
8,
8,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
4
] | [
"SbNhFcz0IxZ",
"uTK-droDpzf",
"nips_2022_xaWO6bAY0xM",
"hi34jv2_VtB",
"CkaS5pPHcDr",
"CkaS5pPHcDr",
"xRwBlZjTNW9",
"xRwBlZjTNW9",
"xRwBlZjTNW9",
"1GbWfjtlNw",
"nips_2022_xaWO6bAY0xM",
"nips_2022_xaWO6bAY0xM",
"nips_2022_xaWO6bAY0xM",
"nips_2022_xaWO6bAY0xM"
] |
nips_2022_mMdRZipvld2 | Deconfounded Representation Similarity for Comparison of Neural Networks | Similarity metrics such as representational similarity analysis (RSA) and centered kernel alignment (CKA) have been used to understand neural networks by comparing their layer-wise representations. However, these metrics are confounded by the population structure of data items in the input space, leading to inconsistent conclusions about the \emph{functional} similarity between neural networks, such as spuriously high similarity of completely random neural networks and inconsistent domain relations in transfer learning. We introduce a simple and generally applicable fix to adjust for the confounder with covariate adjustment regression, which improves the ability of CKA and RSA to reveal functional similarity and also retains the intuitive invariance properties of the original similarity measures. We show that deconfounding the similarity metrics increases the resolution of detecting functionally similar neural networks across domains. Moreover, in real-world applications, deconfounding improves the consistency between CKA and domain similarity in transfer learning, and increases the correlation between CKA and model out-of-distribution accuracy similarity. | Accept | The paper makes the observation that neural network similarity indexes can be misleading when compared across domains with different examples. The paper presents a fix via covariate adjustment, which improves quality of similarity indexes across neural networks across domains. The approach is simple, and the reviewers unanimously agree that the paper is worthy of publication at NeurIPS. | train | [
"gkGGAo_r4uA",
"aA06dFgBtVRw",
"zHmJSMD4EYe",
"hZNkvikEBZN",
"vRkFaW95ewl",
"rnw-Fk6AJXI",
"sUVBcSyq78J",
"lZXnVHxcZPj",
"FtpbQqyvlzi",
"kPCJz0u1h9Z"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for clarifications.",
" Thank you for the good comments again!\n\n> Q2. Thanks for the clarification. Actually, I didn't understand that the authors \"averaged the evaluation metrics of layer-wise similarities\" when I first saw Figure 2, which is now very clear. I feel that this point (how to measure similarities between entire functions) can be stated clearly before going into the details of experiments (perhaps in Section 3) because the authors only provided a way to compute layer-wise similarities, while one of the final goals is to compare entire functions.\n\nA: Thanks for the valuable feedback. We will clarify the details of neural network comparison (i.e., layer-wise comparison) in the first paragraph of Section 4 on current page 5 in the camera-ready version.\n\n> Q4. Good point. I would prefer to see this explanation after the Propositions in the manuscript.\n\nA: Thanks. We will add the explanation to current Section 3.3 after the two propositions in the camera-ready version.",
" Many thanks for updating the manuscript and providing answers to my comments!\n\n> Q2\n\nThanks for the clarification. Actually, I didn't understand that the authors \"averaged the *evaluation metrics* of layer-wise similarities\" when I first saw Figure 2, which is now very clear. I feel that this point (how to measure similarities between entire functions) can be stated clearly before going into the details of experiments (perhaps in Section 3) because the authors only provided a way to compute layer-wise similarities, while one of the final goals is to compare entire functions.\n\n> Q4\n\nGood point. I would prefer to see this explanation after the Propositions in the manuscript.",
" Thank you for the insightful and positive feedback. We are encouraged that you found our paper to be solid and well-written, and our problem interesting and valuable for interpreting models. We are glad that you found our solution to be simple and elegant, and our experiments extensive and convincing. We believe we have been able to fix all concerns. We have provided a revised version for completeness, with changes marked in blue, and we will incorporate all feedback in the camera-ready version.",
" Thank you for the valuable feedback!\n\n> Q1. The motivation to measure the representational/functional similarity [...] we would not immediately see what such a functional similarity brings us to improve transfer learning and OOD generalization.\n\nA: As Reviewer pUes mentioned, the main use case of the proposed method is in model interpretation as in previous works on similarity measures between NNs [1,2], and it can provide insights into how models work in meta or transfer learning problems. Moreover, we believe that some insights can potentially inspire the development of better ML models. For instance, in (low-resource) transfer learning, we could encourage the dCKA between the fine-tuned and pretrained models to be correlated with the known domain similarity during training, as we observed a high correlation between domain similarity and PT-FT similarity in Section 4.3.\n\nWe will add the potential usages to the Discussion section (current page 9) in the camera-ready version.\n\n> Q2. To measure the similarity between the entire networks, the authors simply average the layer-wise similarity through the experiments [...] when networks are deeper; most of the high-level features are similar between two networks, and hence the layer-wise averaged similarity would not be sufficient.\n\nA: Actually we didn't average the layer-wise similarity to measure similarity between networks. Instead, by following the common practice, we compared NNs layer-by-layer [2], and we averaged the evaluation metrics, such as the rank correlation, of layer-wise similarities to compare different similarity metrics [3]. This is mainly because the evaluation metrics have been shown to be consistent across layers in [3] and we observed similar behavior in our experiments too (e.g., Figure 4). Even when the evaluation metrics are not consistent across layers, it is still sensible to measure the average performance of each similarity metric across layers.\n\nWhen multiple domains/tasks are involved (the case that we focus more on), the learned high-level features from different domains can be very different (low similarities for deep layers in Figure 4).\n\n> Q3. In Section 2.1 (prior work), [...] why the first stage (Eq. (1)) is needed.\n\nA: By keeping the first stage, it is clearer to see that CKA is really a similarity of inter-example similarities in the representation space, and to calculate this, one first needs to calculate the representation similarities, which are calculated in Eq.1. Moreover, we can easily observe that the deconfounding step only affects this first stage. Thus, dCKA can be interpreted as a special case of CKA where the representation similarities are adjusted according to Section 3.1.\n\n> Q4. [...] why the invariance properties discussed in Section 3.3 are important. [...] A good functional similarity should be independent of the inter-example similarity. If the aforementioned operations do not affect the inter-example similarity, the importance of the invariance would be questionable.\n\nA: A good functional similarity metric should be independent of the inter-example similarity in the input space, i.e., the input similarity (because that is dataset specific), but dependent on inter-example similarity in the representation space (because that's affected by the functional form of the NN). 
The invariance properties ensure that the deconfounding does not sacrifice the desirable properties of CKA regarding inter-example similarities in the representation space, which are essential to understanding NNs in many cases (e.g., Q4 of Reviewer KLP6) [2].\n\n> Q5. In Eq. (2), do we need the full generality to introduce the similarity between different layers such as $m_1$ and $m_2$?\n\nA: We decided to keep this, because we consider the similarity between different layers in Appendix G.\n\n> Q6. (minor) At l.88, \"decounfounded\" -> \"deconfounded\"\n\n> Q7. (minor) In Section 3.1, you may define $K^{0}$ properly right after Eq. (3).\n\n> Q8. (minor) In Figure 4 (right), having the averaged similarity in the legends (like Figures 2 and 3) should be better.\n\nA: Thanks, corrected.\n\n**Reference**\n\n1. Williams, Alex H., et al. Generalized shape metrics on neural representations. _NeurIPS_, 2021.\n2. Kornblith, Simon, et al. Similarity of neural network representations revisited. _ICML_, 2019.\n3. Ding, Frances, et al. Grounding Representation Similarity Through Statistical Testing. _NeurIPS_, 2021.\n",
" Thank you for your good feedback!\n\n> Q1. In 3.3 you do not mention/link to the proofs of the proposition provided in the Appendix.\n\nA: Thanks, added!\n\n> Q2. In figure 2 x-axis label is missing. Also, the colouring of Random NN-s is a bit difficult to see.\n\nA: We now added the x-axis label and darkened the colouring of random NNs.\n\n> Q3. In 4.1 you perform a lot of experiments that include hypothesis testing and statistical significance. Have you considered multiple testing corrections?\n\nA: Good point; we now reduced the p-value threshold of hypothesis testing from $0.05$ to $\\frac{0.05}{50}$ (50 hypotheses in\ntotal) according to the Bonferroni correction. The averaged proportions of identified NNs with CKA and RSA decreased from $0.24$ to $0.14$ and from $0.02$ to $0.01$ respectively, but dCKA ($0.5$) and dRSA ($0.25$) stayed unchanged, which highlights the benefit of deconfounding.\n\n> Q4. In Figure 5 (B) the colours for zoom blur and contrast are mislabeled.\n\nA: Thanks, corrected!",
" Thank you for the good comments!\n\n> Q1. The intuition behind deconfounding.\n\nA: We will fit the following paragraph into Section 2.2 (current page 3) in the camera-ready version.\n\nSuppose we have a set of data items with an inter-example similarity matrix $K^{0}$. Then, making predictions with NNs can be seen as modifying $K^{0}$ layer-by-layer, such that similarities between data items with different labels decrease and those with the same label increase. A comparison of two NNs, $f_1$ and $f_2$, with CKA/RSA is based on comparing $K_{f_1}^{m}$ and $K_{f_2}^{m}$, which are the inter-example similarity matrices on layer $m$ of $f_1$ and $f_2$. However, in practice both $K_{f_1}^{m}$ and $K_{f_2}^{m}$ are correlated to $K^{0}$: if two data items are very close in the input space, naturally, they are likely to be close in the latent space. Hence, CKA/RSA depends on the specifics of the dataset, and for different datasets (e.g., from different domains) with different $K^{0}$, comparing CKA/RSA across the datasets may lead to inconsistent comparison results. We fix this by regressing out the input similarity $K^{0}$ from $K_{f_1}^{m}$ and $K_{f_2}^{m}$. After deconfounding, if two data items are very similar in the NN latent space, it is because the NN made them so, and not because they were similar in the first place. In this way the deconfounded metric more directly focuses the comparison on the functional form of the NNs, and is less affected by the structure of the given dataset in the input space.\n\n> Q2. Some experimental details are not presented - how do you choose kernels in dCKA?\n\nA: We use the same kernel as CKA. More details below in the response to Q6.\n\n> Q3. eq. (4) what does vec(*) mean?\n\nA: Yes, it means matrix flattening. It is explained now.\n\n> Q4. The paper ``Similarity of Neural Network Representations Revisited\" describes an interesting experiment, where the goal is to find the most similar layer between two networks trained from different seeds. Ideally, the most similar layer should have the same number. It is very interesting to evaluate your dCKA metrics for this problem.\n\nA: Actually we provided a similar experiment in Appendix G (with a short discussion in line 308 of the original paper). As expected, with dCKA, the most similar layers of models trained with different initializations usually have the same layer number (17 out of 20 layers in ResNets trained on CIFAR-10). This is because CKA [1] and dCKA (Proposition 3.1) are both invariant to orthogonal transformation, and they are expected to behave similarly in this problem.\n\n> Q5. line 132: ``we can approximate $dK_{f_1}^{m_1}$ and $dK_{f_1}^{m_1}$\" - you repeated $dK_{f_1}^{m_1}$ two times.\n\nA: Thanks, fixed!\n\n> Q6. In eq (3) you use an additional kernel to \"regress out\" the input similarity. Which kernels you used in the experiments to \"regress out\" the input similarity?\n\nA: The additional kernel $K^{0}$ is the same as $K_{f}^{m}$ (i.e. the original CKA kernel; mentioned in line 128 and line 139 of the original paper), though obviously calculated in the input space, whereas the CKA kernel is calculated in the representation space.\n\n**Reference**\n\n1. Kornblith, Simon, et al. Similarity of neural network representations revisited. _ICML_, 2019.",
" The paper presents an approach to improve CKA and RSA, which are the representational similarity metrics. Such metrics are used to compare representations from different layers of the same/different networks given the same list of objects.\nAuthors propose to \"regress out\" inter-object similarity, the idea came from biostatistics.\nExperiments with comparing networks trained from different seeds, fine-tuned networks, transfer learning show that the \"deconfounded\" CKA (dCKA) is more sensitive than regular CKA and better aligns with the intuitive notion of functional similarity.\nThe paper is well written and easy to follow. Strengths:\n* interesting idea regarding deconfounding.\n* convincing experimental results from vision and text domains\n\nWeakness:\n* I wish authors explain the intuition behind deconfounding in more details. \n* some experimental details are not presented - how do you choose kernels in dCKA? 1. eq. (4) what does vec(*) mean? I suppose it is matrix flattening? Maybe it makes sense to explain it explicitly.\n2. The paper \"Similarity of Neural Network Representations Revisited\" describes an interesting experiment, \nwhere the goal is to find the most similar layer between two networks trained from different seeds.\nIdeally, the most similar layer should have the same number. \nIt is very interesting to evaluate your dCKA metrics for this problem.\n\n3. line 132: \"we can approximate dK_{f_1}^{m_1} and dK_{f_1}^{m_1}\" - you repeated dK_{f_1}^{m_1} two times.\n4. in eq (3) you use an additional kernel to \"regress out\" the input similarity. It should bw different from the original one in CKA. \nWhich kernels you used in the experiments? No potential negative societal impact from this paper, in my opinion.",
" The paper investigates the confounding effect of similarity metrics used to compare neural networks on a layer-by-layer basis. The authors study the effect of this confounding effect. They argue that such measures as CKA and RSA are affected, thus, miscommunicating the functional similarity information by showing that the measures can show high similarity even for random neural networks. The paper proposes to 'deconfound' the metrics using a simple fix by regressing the input similarity structure from the representation similarity structure. Next, the authors conduct extensive experimentation to show the deconfounded measure performance under various settings. Originality\nThe paper tackles an interesting problem, though, relatively narrow in scope, but still valuable for a meta or transfer learning domain. While the correction of the similarity metrics is not improving on models' performance, it can provide important insights into the inner workings of these models. \n\nQuality\nThe paper is well-balanced and has all the necessary components: it introduces the problem and supports it with experiments, it solves the problem and provides necessary proofs without overburdening the paper. A multitude of experiments is introduced showing various aspects of the problem. The code and appendix are also provided. While some experiments are not straightforward and nested (having multiple steps in them on top of each other), I appreciate the complexity of the validation of the similarity metrics and how 'functional' similarity is difficult to define. \n\nClarity\nIt is a very well-written paper that is easy to read. It does not overcomplicate the idea, and the solution is quite elegant. I liked the experiments that provide interesting insights into neural networks' inner workings. \n\nSignificance\nA relatively simple fix of the similarity metrics correction is not necessarily a major contribution that will lead to a groundbreaking impact in ML area, but it is a solid paper that helps to address a problem for future work on models' interpretation and lead to the better model training experience. I don't have many questions as mostly the paper is well-written and provides all the necessary information. Just a few comments:\n\n- In 3.3 you do not mention/link to the proofs of the proposition provided in the Appendix\n- In figure 2 x-axis label is missing. Also, the colouring of Random NN-s is a bit difficult to see\n- In 4.1 you perform a lot of experiments that include hypothesis testing and statistical significance. Have you considered multiple testing corrections?\n- In Figure 5 (B) the colours for zoom blur and contrast are mislabeled. No limitations detected",
" This paper aims at measuring the _functional_ similarity between two given neural networks.\nThe existing approaches naively compute some similarity measure of intermediate outputs of two neural networks, which may be largely entangled with input data, leading to spurious similarity.\nThe authors propose a method to deconfound the effect of input data by simply regressing out their effect.\nThe decounfounded similarity measures admit a good correlation with domain similarity and OOD accuracy, which are observed through experiments. ### Strengths\n\n1. The spurious correlation between input data and functional similarity is formulated as a linear regression model. Although this simplicity may discard several more intricate structures, the deconfounded solution can be obtained by simply solving the normal equation. This formulation would be sufficient for the first step towards the deconfoundation.\n2. The proposed method basically can be applied to any type of the existing representational similarity measures by applying the deconfoundation step. Through the experiments, we can observe that adding this simple fix is beneficial to improving the performance in most cases.\n\n### Weaknesses\n\n1. The motivation to measure the representational/functional similarity could be made a little bit clearer. For example, Section 4.3 provide the experiments to observe the correlation between the functional similarity and domain similarity, and Section 4.4 to see the correlation between the functional similarity and OOD accuracy. While these results may seem excellent, we would not immediately see what such a functional similarity brings us to improve transfer learning and OOD generalization.\n2. The functional similarity is basically measured between two layers of given neural networks, not the entire neural networks. To measure the similarity between the entire networks, the authors simply average the layer-wise similarity through the experiments. Whereas this works in some situations (for example, Figure 2 tells us the averaged similarity is sufficient to distinguish random and fine-tuned ResNet-18), I have a concern when networks are deeper; most of the high-level features are similar between two networks, and hence the layer-wise averaged similarity would not be sufficient. ### Questions\n\n1. In Section 2.1 (prior work), the two-staged similarity measurement method is introduced (Eqs. (1) and (2)), but it may be not that evident why the first stage (Eq. (1)) is needed. A few more explanations would be preferred.\n2. I am not sure why the invariance properties discussed in Section 3.3 are important. In the propositions, the proposed similarity measures are shown to be invariant against orthogonal transformations and isotropic scaling, which should not change the inter-example similarity. In my understanding, a good functional similarity should be independent of the inter-example similarity. If the aforementioned operations do not affect the inter-example similarity, the importance of the invariance would be questionable. Hence, I would like to see some clarification.\n\n### Suggestions\n\n1. In Eq. (2), do we need the full generality to introduce the similarity between different layers such as $m\\_1$ and $m\\_2$?\n2. (minor) At l.88, \"decounfounded\" -> \"deconfounded\"\n3. (minor) In Section 3.1, you may define $K^0$ properly right after Eq. (3).\n4. (minor) In Figure 4 (right), having the averaged similarity in the legends (like Figures 2 and 3) should be better. 
The authors adequately addressed the limitations in the discussion. The linearity assumption is one of the biggest limitations of this work, but the proposed method still works fairly well in several experiments. This work is not likely to have any negative societal impact."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"sUVBcSyq78J",
"zHmJSMD4EYe",
"vRkFaW95ewl",
"nips_2022_mMdRZipvld2",
"kPCJz0u1h9Z",
"FtpbQqyvlzi",
"lZXnVHxcZPj",
"nips_2022_mMdRZipvld2",
"nips_2022_mMdRZipvld2",
"nips_2022_mMdRZipvld2"
] |
nips_2022_kCU2pUrmMih | Mirror Descent with Relative Smoothness in Measure Spaces, with application to Sinkhorn and EM | Many problems in machine learning can be formulated as optimizing a convex functional over a vector space of measures. This paper studies the convergence of the mirror descent algorithm in this infinite-dimensional setting. Defining Bregman divergences through directional derivatives, we derive the convergence of the scheme for relatively smooth and convex pairs of functionals. Such assumptions allow one to handle non-smooth functionals such as the Kullback--Leibler (KL) divergence. Applying our result to joint distributions and KL, we show that Sinkhorn's primal iterations for entropic optimal transport in the continuous setting correspond to a mirror descent, and we obtain a new proof of its (sub)linear convergence. We also show that Expectation Maximization (EM) can always formally be written as a mirror descent. When optimizing only on the latent distribution while fixing the mixture parameters -- which corresponds to the Richardson--Lucy deconvolution scheme in signal processing -- we derive sublinear rates of convergence. | Accept | All reviewers recommend the paper. The authors should think about ways to make the paper more accessible to a machine learning audience, but I recommend accepting. When preparing the camera-ready version, please take into account the reviewers' comments and please also specifically address these two points raised in the discussion:
"I'm of the opinion that authors should try put more effort in making current submission more accessible to general audience helping the reader to understand why certain notions of differentiability have been chosen over others etc."
"Providing a concrete example where relative smoothness fails but the proposed approach applies would increase the potential audience among non-experts." | train | [
"qvNTOCB3IsD",
"VyMfOyUUHQm",
"8_1zrfFrkEp",
"HsZ0hPB2vvs",
"q_rVwSsRD-M",
"6YVLWYGeQAm",
"WGEwDh0LyR",
"7FWH3qvgih",
"Hmb8I3LSIE8",
"r4dSIzj-A5X"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the authors for answering our questions. It would be nice if the authors could include this discussion about the rates of convergence in the paper or in the supplementary material. We keep our rating unchanged.",
" We thank the reviewer his positive comments and interest.\n\nQuestion 1.: As written in the end of Section 4.1, \"The linear convergence of Sinkhorn for bounded costs c has been known since at least Franklin and Lorenz (1989) and has then been derived also in the non-discrete case and in multimarginal settings (see Carlier, 2022, and references therein). Léger (2020) first obtained sublinear rates for\nunbounded costs leveraging relative smoothness, using (12) formally and through dual iterations\non the potentials. Here we derive the same rate rigorously with a more direct proof using primal\niterations, and complete the picture by recovering linear rates of convergence.'' In all of Section 4.1, we assume that the cost is bounded (in $L^{\\infty}$) to manipulate well-defined quantities (first variations in $L^\\infty$). Using the primal/measure rather than a dual/potential viewpoint makes for an easier proof of the convergence of Sinkhorn. You may also be interested by our answer to Reviewer rqqG on this topic.\n\nQuestion 2. To the best of our knowledge, the rates we obtain for Lucy-Richardson are novel.\n\n\nQuestion 3. \"About the quantity $c_k=\\sup_{x}k(x,x)$ and one hidden layer neural networks''. Thank your for the interesting question. Consider a regression task where the labelled data $(z,y)$ is distributed according to $P$ some data distribution. For any input $z$, the output of a single hidden layer neural network parametrized by $w$ writes as:\n\\begin{equation*}\n f_w(z)=\\frac{1}{N}\\sum_{j=1}^N a_j \\sigma(\\langle b_j, z\\rangle) = \\frac{1}{N}\\sum_{j=1}^N \\phi(z,w_j) \\to \\int \\phi(z,w)d\\mu(w) \\text{ as }N \\to \\infty,\n\\end{equation*}\nwhere $a_j$ and $b_j$ denote output and input weights of neuron $j=1,\\dots,N$ respectively and $w_j=(a_j,b_j)$, see the references cited l24-27. In the infinite-width setting, the limiting risk is the MSE written for any $\\mu$ as $\\E_{(z,y)\\sim P}[\\|y - \\int \\phi(z,w)d\\mu(w)\\|^2]$. When the model is well-posed, i.e. there exists a distribution $\\mu^*$ over weights such that $\\mathbb{E}[Y|Z=z]=\\int\\phi(z,w)d\\mu^*(w)$, then the limiting risk writes as an MMD with $k(w,w')= \\mathbb{E}_{z\\sim P}[\\phi(z,w)^T \\phi(z,w)]$ (see [Arbel et al 2020], Prop 20 in Appendix F). Hence, the bound on $c_k$ depends on the choice of the activation function $\\sigma$ and the output weights. If $\\sigma$ is bounded (e.g. $\\sigma$ is the sigmoid) then the bound on $c_k$ is the bound on the output weights. If $\\sigma$ is relU, then the bound depends on the bound on input/output weights as well as on the data distribution $P$. \n\nArbel et al. (2019). Maximum mean discrepancy gradient flow (arXiv:1906.04370)",
" We acknowledge that the paper is mathematical, however mirror descent with relative smoothness/convexity is an important theme in optimization and machine learning, and the Sinkhorn and EM algorithm are of great interest to the ML community. We thought it was more appropriate to confront our results to the ML community. \n\n\nConcerning the link between our framework and the one given by the Wasserstein-2 ($W_2$) geometry, developped by Ambrosio, Gigli, Savaré and Otto, Villani, we are reasonably familiar with these references as well. Given an optimisation problem over the set of probability distributions over $\\R^d$, one can consider at least two frameworks. The one adopted in our paper casts the space of probability distributions as a subset of a normed space measures such as $L^2$ or Radon. The shortest distance paths between measures are given by their square-norm distance. One can consider the duality of measures with continuous functions and the mirror descent algorithm, as we do in this work. In contrast, the second framework given by $W_2$ geometry, restricts the search space to probability distributions with bounded second moments. Equipped with the $W_2$ distance, this space is a metric space equipped with a rich Riemannian structure (often referred to as \"Otto calculus\") where the shortest distance paths are given by the $W_2$ distance and associated geodesics. One can leverage the Riemannian structure to discretize ($W_2$) gradient flows and consider algorithms such as ($W_2$) gradient descent. \nWhile both frameworks yield optimisation algorithms on measure spaces, the geometries and algorithms are quite different. The notion of convexity differs (along $L^2$ versus $W_2$ geodesics), as well as gradients (first variation vs gradient of first variation) and consequently many definitions. In addition, mirror descent yields multiplicative updates on measures allowing for change of mass, while gradient descent corresponds to displacement of (fixed mass) particles supporting the measures. A third option, Fisher-Rao gradient flows, is closer in spirit to what we study here, as the space of measures is equipped with the Hellinger distance, whose geometry is similar to the one we consider and allows for local change of mass and discontinuous displacement of particles. \n\n\nQuestion 1: relative smoothness is particularly well illustrated by the case where the objective function is a Kullback-Leibler divergence $\\KL(\\mu|\\pi)$ (as in our examples Section 4). The latter is always relatively smooth to itself (hence to the KL Bregman divergence), a fact that we exploit extensively in Section 4. However the KL is typically not a smooth objective in the classical sense, i.e. with \"Lipschitz gradients\", as defined in Lemma 3.1 of [Chizat 2021]. Indeed the \"gradient of the KL\" $\\mu\\mapsto\\log(\\mu|\\pi)(.)$ typically does not belong to $L^{\\infty}$, in contrast to the functional $\\mu\\mapsto \\int \\Phi d\\mu$ as in [Chizat 2021] for given $\\Phi\\in L^\\infty$. A more direct argument is that traditional smoothness cannot hold because $\\KL$ diverges for Dirac masses, thus does not have subquadratic growth with respect to any norm on measures. This makes $\\KL$ an unsuitable objective for traditional analysis despite being ubiquitous, and indeed \"relative smoothness indeed fixes the problem''.\n\nChizat (2021) Convergence rates of gradient methods for convex optimization in the space of measures. (arXiv:2105.08368)\n\nQuestion 2. 
Thank you for the interesting question about the relation of our work to the line of work you mention, which we also cite in l24-27. The latter studies (stochastic) gradient descent as a time-discretization of some continuous dynamics corresponding to the $W_2$ gradient flow (WGF) of some objective functional (a limit risk) minimized by infinite-width neural networks. However, the mirror descent scheme we consider is very different in nature from the gradient descent scheme considered in these works, due to the different geometries at stake described earlier in this answer. Another way to see it is that (Wasserstein) gradient descent can be written as proximal iterations, similarly to mirror descent (see Eq 7 in our Section 3), but with the difference that the Wasserstein distance is not a Bregman divergence. Hence, the conditions needed for convergence greatly differ in these two settings for the mirror descent versus gradient descent schemes. While we can obtain global convergence of mirror descent thanks to the convexity and smoothness of the risk in our geometry (the limit risk takes the form of an MMD, which is relatively smooth with respect to the KL, see our Prop 13), the same risk is typically not convex with respect to the Wasserstein geometry (see Prop 5 in [Arbel et al. 2019] for instance), whence exponential convergence cannot be shown through this argument.\n\nArbel et al. (2019). Maximum mean discrepancy gradient flow (arXiv:1906.04370)",
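To make the contrast concrete, here is a small NumPy sketch (ours, not the authors'; it runs entropic mirror descent, i.e. multiplicative updates, on a fixed grid to minimize a squared MMD towards a target, assuming a Gaussian kernel; since max k = 1, a standard Pinsker argument gives relative smoothness at most 1 with respect to the KL, so a step size of 0.5 is safe):

```python
import numpy as np

rng = np.random.default_rng(0)
xs = np.linspace(-1.0, 1.0, 50)                    # fixed support (grid)
K = np.exp(-(xs[:, None] - xs[None, :]) ** 2)      # Gaussian kernel matrix
nu = rng.dirichlet(np.ones(50))                    # target distribution on the grid

mu, eta = np.full(50, 1.0 / 50), 0.5
for _ in range(2000):
    grad = K @ (mu - nu)                           # first variation of MMD^2 / 2
    mu *= np.exp(-eta * grad)                      # multiplicative (KL mirror) step
    mu /= mu.sum()

print(0.5 * (mu - nu) @ K @ (mu - nu))             # halved squared MMD, near 0
```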
" We thank the reviewer for his very positive and encouraging comments, as well as for acknowledging our work on the non-trivial extension of the notions of relative smoothness and convexity to measure spaces. \n\n\"In Proposition 6, you define the quantity $D_c$ and explain how is it related to the relative strong convexity property of $F_S$. In practice, this is only finite if the cost function c is bounded. In Proposition 7 equation (22), it is not entirely clear whether the second bound also holds for unbounded cost functions (i.e. this was not an assumption), would you be able to state this explicitly?\" In all of Section 4.1 we assume that the cost is bounded (in $L^{\\infty}$). This indeed guarantees that $D_c$ in finite in Proposition 6. Then Prop. 6 allows us to obtain the first inequality in bound (22) in Prop 7. The other inequality of (22), \n\\[\\KL(\\mu_n|\\mu_*) \\le \\frac{\\KL(\\pi_*|\\pi_0)}{ n}\\]\nis inherited from relative smoothness (see Lemma 5) which comes from direct computations. We do know from [Leger 2020, Nutz 2021] that this bound holds for unbounded costs. However in the context of our general mirror descent framework and in order to manipulate finite quantities in the computations we needed to restrict ourselves to bounded costs in our article.\n\n\"It is not clear whether the sub-linear bound of $O(1/n)$ dependence is the best we can hope for unbounded cost functions. Could the authors include an example or a reference to illustrate what happens in this case?\"\nLet us first say that the tightness of this bound is an open question in the OT community. We will now write what is known up to our knowledge. First, $(KL(\\mu_n|\\mu_*))n$ is a decreasing summable nonnegative sequence and as such satisfies $\\KL(\\mu_n|\\mu_*) = o(1/n)$ (see for instance Lemma 6.11 from [Nutz 2021]), so we have a $o(1/n)$ instead of a $O(1/n)$. But the $O(1/n)$ has here an explicit constant so it is often preferred in practice. Second, we know of simple examples with infinite costs that can be computed explicitly and that are $O(1/n^2)$. Here is one of them: take $\\X=\\Y=\\{0,1\\}$ a set with two elements and $c$ given by $c(0,0)=c(0,1)=c(1,1)=0$ and $c(1,0)=\\infty$. Take $\\mu=\\nu=(\\frac12,\\frac12)$ the uniform measure. Then there exists a solution $\\pi_*$ and Sinkhorn produces iterates satisfying $\\KL(\\mu_n|\\mu_*)\\sim \\frac{1}{n^2}$. It is an open question whether $O(1/n^2)$ always holds for unbounded or infinite costs.\n\n",
" \"the motivation and significance of the presented work could be made clearer by a concrete example'': Let us recall that in the paper we develop in a generic way the theory for Sinkorn and Latent EM. As described in Section 4.2, the general goal of EM is to fit, through the objective function $F_{\\text{EM}}$, a parametric distribution to some observed data $Y$ (e.g. a mixture of Gaussians approximating the data), where one needs to estimate both the latent variable distribution on $X$ (e.g. weights of each Gaussian) and parameters of conditionals $P(Y|X=x)$ (e.g. means and covariances of each Gaussian). Latent EM (and its associated objective $F_{\\text{LEM}}$) focuses on learning the mixture weights, since it consists in optimizing over the nonparametric latent distribution, which can be continuous or discrete. A more concrete or familiar example in machine learning is the following: taking a discrete latent distribution $\\mu$ supported on $\\{1,\\dots, N\\}$, the goal is to learn the weights of $N$ Gaussians fitting the data distribution $\\bar \\nu$. \n\nTaking the limit of $N\\rightarrow \\infty$, we obtain general deblurring problems where the goal is to deblur a signal $Y$ given a filter $K$ (which cause a blur) and one aims at recovering the latent distribution of states $X$. \nThis is further emphasized by the fact that \"Latent EM'' iterations correspond to those of Lucy-Richardson algorithm, a commonly used denoising tool. Replace $\\R^d$ with any Hilbert space, for instance of signals or trajectories, and you obtain a problem of finding the best distribution of input signals matching a distribution of output observations for known filter $K$.\nSuch problems and schemes have already attracted a lot of interest in the statistics and signal literature previously, but the analysis of these schemes through relative smoothness and the results we obtain are novel to the best of our knowledge.\n\n\n\"This statement might benefit from a qualification to the KL divergence \"on discrete distributions\" or measures.'': Indeed, the multiplicative updates only occur in measure space of the parameters, not in the parameters themselves (as shown with the EM algorithm). We agree with your remark - we will make this point clearer: \"mirror descent yields multiplicative updates in the space of measures''",
" We thank the reviewers for their interest, positive comments and their relevant suggestions. We replied to each reviewer in a dedicated answer and will incorporate these points into the text. We hope our clarifications answer their questions and will improve their confidence and ratings concerning the paper.",
" The submission presents an infinite dimensional extension of relative measures of smoothness and strong-convexity to handle non-Euclidean notions of regularity. The theoretical guarantees cover the finite dimensional results recently obtained for Sinkhorn's algorithm in optimal transport and the Expectation Maximization algorithm for probabilistic models with latent variables.\n The submission is well presented and well contextualized within the line of work on relative smoothness and its application to probabilistic models. My only major concern is the approachability of the submission to an audience familiar with the optimization literature, including the recent developments in relative smoothness and its application to probabilistic models, but unfamiliar with measure spaces. An exhaustive introduction to the mathematical background is of course out of scope, and I appreciated the references the submission already provides. But the motivation and significance of the presented work could be made clearer by a concrete example, not covered by prior work, where the presented framework applies. The links with Optimal Transport and EM are beneficial, but abstract. An instantiation of an infinite dimensional transport problem or a probabilistic model with infinite dimensional latent variables, to serve as equivalents of the toy problems of transporting an histogram or a mixture model of 2 Gaussians.\n (Minor points)\n\n> 36: When using the KL divergence as Bregman divergence, mirror descent yields multiplicative updates, such as Sinkhorn’s algorithm\n\nThis statement might benefit from a qualification to the KL divergence \"on discrete distributions\" or measures. This is to avoid the confusion that \"mirror descent + KL divergence = MWU\", as is can also model divergences between parametrized measures (as the KL divergences between Gaussians with fixed identity covariance leads to Euclidean gradient descent on the parameters).\n No concerns",
" The paper studies mirror descent on measure spaces under relative smoothness and relative convexity assumptions. This setting is a significant generalisation of the classical $\\mathbb{R}^d$ setting and basic definitions such as the definition of Bregman divergence needs to be reworked (using directional derivatives). The main results are similar convergence results as in the classical setting (such as Lu et al. 2018). As application, explicit convergence rates are obtained for the Sinkhorn and the latent EM algorithms. The paper introduces a beautiful new theory of relative smoothness and convexity of functionals acting on measure spaces. The study of mirror descent on measure spaces has been gaining interest in recent years, and this paper offers a significant contribution to the understanding of this method. It is quite remarkable that two widely used algorithm, the Sinkhorn and latent EM algorithm are actually special cases of the mirror descent algorithm, showing the generality of the results.\n\nExtending the theory of relative smoothness and convexity to the measure space setting was not at all straightforward, and required a significant amount of work, including the change of some basic definitions.\n\nSince both the theory and the applications of the paper are very convincing, we do not feel that there are any notable weaknesses. In Proposition 6, you define the quantity $D_c$ and explain how is it related to the relative strong convexity property of F_S.\nIn practice, this is only finite if the cost function c is bounded.\nIn Proposition 7 equation (22), it is not entirely clear whether the second bound also holds for unbounded cost functions (i.e. this was not an assumption), would you be able to state this explicitly?\n\nIt is not clear whether the sub-linear bound of O(1/n) dependence is the best we can hope for unbounded cost functions. Could the authors include an example or a reference to illustrate what happens in this case? The authors have adequality addressed the limitations of their work.",
" This is a mathematical paper on calculus of variation. The main contribution is an extension of the paper by Lu et al where in finite dimensional setting authors introduced notion of relative smoothness and relative convexity using notion of Bergman divergence and proposed corresponding Mirror Descent algorithm for which linear convergence holds. The extension to infinite dimensional space of measures is not trivial and there are a few different notions of differentiability that one may consider. Authors work with generic Gâteaux and Fréchet derivatives which are then used to define Bergman divergence. As an application of the theory they recover linear convergence rate for Sinkhorn algorithm in the case when the cost function c is bounded and a special setting for abstract EM algorithm. Strengths:\n- The paper is carefully written with the theoretical parts well flesh out. \n- Adaptation of finite dimensional optimisation techniques to infinite spaces of measures allow for elegant analysis of many of the machine learning algorithms and so is of value. \n- Connection between Sinkhorn algorithm and Mirror descent while not entirely new is interesting \n\nWeaknesses:\n- The paper is purely mathetmical and I’m not convinced that NeurIPS is the best venue for this type of work. I believe mathematical journal on (variational) analysis would be more appropriate. \n- There is a large body work on differential and sub differential calculus on the spaces of measures (e.g book by Ambrosio, Luigi and Gigli, Nicola and Savaré, Giuseppe) and differentiation along the gradient flow as first proposed by Otto and Otto In Villani in their seminar paper about HWI inequality e.g W_2 or/and Fisher-Rao gradient flows . I would appreciate more through discussion and comparisons to these works. Though I suspect that the main results would remain true for these other notions (modulo technical assumptions) - In Lu et al. (Which I was not aware until now) Authors did a descent job presenting in which situations one cannot except to prove convergence of classical mirror descent but the modification using notions of relative smoothness and convexity fixed the problem. In the current submissions a thorough discussion is missing. \n - One popular example of optimising functions of a measure is the story of one-hidden layer neural network in mean filed regime (works of Montanari et al. Chizat and Bach, Hu et al.). Most of the works use W_2 gradient flows perspective. It would be interesting to shed some light how ideas developed by authors could help with establishing exponential convergence in that setting. The work is purely theoretical and hence will not have negative social impact. ",
" When working optimizing over measures, authors analyze the infinite-dimensional mirror descent algorithm and provide O(L/t) convergence rates, when the functional is $l$ strongly convex and L-smooth, relative to another functional $\\phi$, where t is the iteration count and the big O contains an initial measure of dissimilarity between the optimal measure and the initial guess. This framework is applied to optimal transport, by which using it with the KL divergence (and resulting in Sinkhorn's algorithm) and convergence results are recovered. Expectation maximization is also shown to be equivalent to the mirror descent in their framework and it is shown that one particular case is convex and thus their framework applies and guarantees sublinear convergence. Part of the contribution lies in technical generalizations of the operations and analysis to the infinite-dimensional case via the use of directional derivatives as opposed to assuming a setting in which derivatives of some kind (Gateaux, Frechet...) exist. I am not an expert in the topic of measure spaces and infinite dimensional analysis of mirror descent algorithms or other algorithms. For this reason, my evaluation of this work is somewhat limited.\n\nStrengths: \n\n+ Good discussion of related work, when justifying where some ideas come from in the analysis or when some theorems' proofs are inspired by others. Still, I would have liked to see a couple of comments regarding related work, see the questions section.\n\n+ The convergence rates for Richardson-Lucy deconvolution.\n\n+ General framework of mirror descent with its convergence rates (Theorem 3)\n\nWeaknesses:\n\n+ See the questions section.\n\n\nMinor:\n\nL119 As a direct consequence of Lemma 11 in *the* Appendix\nL184 \"which proof\" -> \"whose proof\"\n I would have liked to see more discussion on the motivation of the problem, in comparison to what has been studied. The results applying to Sinkhorn for optimal transport were already known, am I right? It is nice that the framework allows to recover this, but I want to understand whether there was some generalization made with respect to assumptions or any other thing in this problem.\n\nAlso, was there any previous analysis for Richardson-Lucy deconvolution yielding convergence rates?\n\nRegarding the motivating problem mentioned in the introduction, in which the functional is identified to an MMD, and regarding the implications of the results of this work to this setting, the relative smoothness 4 c_k in line 587 seems to be a very large quantity. Can you quantify / are there quantifications of this value for the application mentioned regading the infinite-width one hidden layer neural network?\n\n yes"
] | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
9,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
2
] | [
"HsZ0hPB2vvs",
"r4dSIzj-A5X",
"Hmb8I3LSIE8",
"7FWH3qvgih",
"WGEwDh0LyR",
"nips_2022_kCU2pUrmMih",
"nips_2022_kCU2pUrmMih",
"nips_2022_kCU2pUrmMih",
"nips_2022_kCU2pUrmMih",
"nips_2022_kCU2pUrmMih"
] |
nips_2022_CmD5z_2DVuM | Learning Energy Networks with Generalized Fenchel-Young Losses | Energy-based models, a.k.a. energy networks, perform inference by optimizing
an energy function, typically parametrized by a neural network.
This allows one to capture potentially complex relationships between inputs and
outputs.
To learn the parameters of the energy function, the solution to that
optimization problem is typically fed into a loss function.
The key challenge for training energy networks lies in computing loss gradients,
as this typically requires argmin/argmax differentiation.
In this paper, building upon a generalized notion of conjugate function,
which replaces the usual bilinear pairing with a general energy function,
we propose generalized Fenchel-Young losses, a natural loss construction for
learning energy networks. Our losses enjoy many desirable properties and their
gradients can be computed efficiently without argmin/argmax differentiation.
We also prove the calibration of their excess risk in the case of linear-concave
energies. We demonstrate our losses on multilabel classification and
imitation learning tasks. | Accept | This paper introduces a new notion of regularized energy function using generalized Fenchel conjugates. Reviewers were leaning towards accept; the least convinced reviewer discussed at length with the authors the contribution of the paper and the comparison of the proposed method to prior work, and also leaned towards accept after the rebuttal and paper revision. Accept. | train | [
"h-1r2zIdMh",
"KaYiyvqWIs",
"lF71vlTyTpl",
"Fe-5NDgJet",
"Yck7SSI2X9A",
"GCMdkezCfkYH",
"Ogrfa51rSisl",
"3Cq1AVJSScm",
"vF9xjiQrZ7",
"hBBeano50Rr",
"hfd6j3Xlzac4",
"U2kZSpPghLk",
"U2JYUW2mcvw",
"yedKwILEYkh"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for making these amendments. I have now raised my score to 5 to account for this",
" Thank you very much for the constructive comments. We hope that your concerns are now addressed satisfactorily. \n\n> I think the references should be discussed earlier in the intro\n\nThis is now addressed in the revised manuscript. We now mention envelope theorems **before** introducing our contribution so it should be clear that we are not the first to use them. Instead, when introducing our contribution, we decided to mention regularization, which can be used to ensure that the unicity assumption of envelope theorems is satisfied. We note that papers [44] and [11] do not explicitly mention envelope theorems / Danskin's theorem, even though this is what they are doing (potentially without knowing it). We agree that the introduction is now indeed better, thanks!\n\n> but please just be precise about how different they are (in L565)\n\nThis is now addressed on lines 570-573.\n\n> It would be great to have this clarified in the paper.\n\nThis is now addressed on line 110.",
" Thank you for the clarifications, here are a few remarks:\n - **Argmax diff** \"If we follow the reviewer’s reasoning, any paper using envelope theorems is automatically not novel\". No, the point is to provide more context and to acknowledge prior works. As currently written, the reader has the impression that the present paper is the first to exploit the envelope thm in the context of ML, especially in the intro L27-42. The sentence the authors included: \"For other envelope theorem usecases in machine learning, see, e.g., [21].\", when discussing the assumptions does not really provide enough context for this. I think the references should be discussed earlier in the intro (L27-42). Please note that the losses arising in these references can be expressed an energy loss, so the connection to the present work is actually strong.\n - **Proposition 1 item 6:** It is good to mention that the authors mentioned the earlier result. I have a few comments though: the assumptions Ghadimi et al requires $\\nabla_y\\nabla_{x} g(x,y)$ to be bounded, this can arguably be easily relaxed to \"$\\nabla_{x} g(x,y)$ is $\\beta$ Lipschitz\" which does not need to make any smoothness assumption on $\\Omega$. One can still say the assumptions are different, but please just be precise about how different they are (in L565).\n - \"The most general setting while still getting an optimization problem solvable in polynomial time (leading to exact or very accurate gradients) is when the energy is parametrized with an ICNN. We believe our proposed loss is a clear advance in this setting.\" It would be great to have this clarified in the paper.\n \n\n ",
" Thank you for engaging in the discussion. We have revised the paper to take into account your remarks.\n\n> Regarding the author's response to reviewer 8vqD: The authors claim that the proposed method does not require MCMC sampling unlike existing approaches to EBM. Often MCMC is used when using an EBM with a complex model which yields a non-convex objective and high-dimensional sampling problem. It is unclear to me what choice of \\phi-functional would result in a ‘’more” tractable EBM objective that still results in an equally expressive model. (Knowing that high dimensional sampling and non-convex optimization are both NP hard problems.)\n\nThe comment on high-dimensional sampling vs nonconvex optimization is a very valid one. We note however that MCMC techniques are typically used for probabilistic EBMs while we focus on EBMs in the original sense of LeCun (2006), i.e., networks with an argmax output layer. When the argmax objective is nonconcave in p, it can indeed be NP hard to solve the problem exactly and typically we can only get approximate gradients, as we clearly stated in the paper. The most general setting while still getting an optimization problem solvable in polynomial time (leading to exact or very accurate gradients) is when the energy is parametrized with an ICNN. We believe our proposed loss is a clear advance in this setting.\n\n> Avoiding argmax differentiation: It seems that one of the main arguments for the proposed losses is that they bypass the need for differentiating wrt the argmin. While this is a nice property, there are many prior works that proposed to rely on the envelope theorem, thus bypassing the need for differentiating through the argmax, especially in the context of generative models: [Bottou 2017, Geometrical insights for implicit generative modelling, Nowozing 2016 f-GANs]. Therefore, the discussions, for instance the one in L140-150 should also be more nuanced about this.\n\nWe added the references in the revised manuscript for completeness but again we do not claim to be the first to use envelope theorems. We claim that our loss function is constructed in such a way that envelope theorems *can* be applied. When using arbitrary loss functions, envelope theorems can’t be applied and argmax differentiation is required instead. **If we follow the reviewer’s reasoning, any paper using envelope theorems is automatically not novel…**\n\n> Statement of the assumption: I still couldn’t find where the assumptions are clearly stated (separately from the proof), they are still distilled inside the proof, which makes the reading difficult. The authors should state the assumptions outside of the proofs and refer to them.\n\nWe now added the assumptions directly in the main text, see revised manuscript. We also added more background information and the above suggested citation.\n\n> Proposition 1 item 6: see lemma 2.2 (b) in Ghadimi 2018. This considers a gamma-strongly convex function g(v,p) in and that the hessian Nabla_{x,y} g(x,y) is bounded by beta. Can be applied to g(v,p)= Omega(p)-Phi(v,p) to directly obtain the lipschitz smoothness of p(v) and then deduce that Omega(v)^{\\phi} is (beta + beta^2/gamma) smooth.\n\nThis is an interesting reference that we added to the revised manuscript. However, the assumptions and the proof are different. Indeed, we do not require twice differentiability and we exploit the particular expression g(v,p)= Omega(p)-Phi(v,p). 
Indeed, applying the result of Ghadimi et al. directly would require both $\Omega$ and $\Phi$ to be smooth in $p$, while we do not require the smoothness of $\Omega$ in $p$. This is important, as for example the negative entropy is not smooth. **In other words, we state a stronger result specialized for generalized conjugate functions.** We remind the reviewer that the smoothness property is essential to prove our key mathematical contribution, the calibration guarantees in Proposition 4.\n\n> Overall, I am not convinced by the significance of the contribution and in particular by what is gained by such a level of generalization.\n\nThis seems to be a criticism of energy networks in general rather than of our proposed loss function, i.e., does replacing the bilinear pairing with a more general energy function provably allow better generalization capability? We agree that studying this question is interesting but it’s clearly out of scope for this paper and our paper shouldn’t be rejected on such grounds.\n\nFrom a more philosophical point of view, we argue that one of the quests of mathematics is to aim for generality. It is enlightening to see that many properties of regular Fenchel-Young losses can be extended to a much more general setting.",
" I have read the author's response and the other reviews. \n\nRegarding the author's response to reviewer 8vqD: The authors claim that the proposed method does not require MCMC sampling unlike existing approaches to EBM. Often MCMC is used when using an EBM with a complex model which yields a non-convex objective and high-dimensional sampling problem. It is unclear to me what choice of \\phi-functional would result in a ‘’more” tractable EBM objective that still results in an equally expressive model. (Knowing that high dimensional sampling and non-convex optimization are both NP hard problems.)\n\n\n- Avoiding argmax differentiation: It seems that one of the main arguments for the proposed losses is that they bypass the need for differentiating wrt the argmin. While this is a nice property, there are many prior works that proposed to rely on the envelope theorem, thus bypassing the need for differentiating through the argmax, especially in the context of generative models: [Bottou 2017, Geometrical insights for implicit generative modelling, Nowozing 2016 f-GANs]. Therefore, the discussions, for instance the one in L140-150 should also be more nuanced about this.\n\n\n- Statement of the assumption: I still couldn’t find where the assumptions are clearly stated (separately from the proof), they are still distilled inside the proof, which makes the reading difficult. The authors should state the assumptions outside of the proofs and refer to them.\n\n\n\n- Novelty: \n - Proposition 1 item 6: see lemma 2.2 (b) in Ghadimi 2018. This considers a gamma-strongly convex function g(v,p) in $p$ and that the hessian Nabla_{x,y} g(x,y) is bounded by beta. Can be applied to g(v,p)= Omega(p)-Phi(v,p) to directly obtain the lipschitz smoothness of p(v) and then deduce that Omega(v)^{\\phi} is (beta + beta^2/gamma) smooth.\n\n\n- Overall, I am not convinced by the significance of the contribution and in particular what is gained by such a level of generalization.\n",
" We noticed that reviewer jRDG still has not acknowledged our rebuttal. We understand that this is the summer break but we would very much like to engage with the reviewer. We believe that our work strongly advances the field of energy-based models / energy networks and that the reviewer score is disproportionately low compared to our contributions (which we listed in our rebuttal). We stress once again that generalized conjugates are not a well-known tool at all in the ML community. We therefore believe that the claim of \"lacking novelty\" and \"being straightforward\" is not justified. We thank again the reviewer for their time.",
" I acknowledge that the changes have been done and have no further comments.",
" We thank all reviewers for their comments, as well as the AC and senior AC for their editorial work. We have posted a revised manuscript incorporating the reviewer comments (modifications are highlighted in blue color).\n\nWe have already done so to Reviewer jRDG but we would also like to recall our key (mathematical) contributions:\n- Introducing the new notion of a regularized energy network.\n- Using generalized conjugates, which are not well-known at all in the ML community. Our paper will bring awareness to this new tool.\n- The smoothness result (Proposition 1, item 6) is new and not straightforward.\n- The lower bound result (Proposition 3, item 5) is more general than the existing one for regular FY losses and uses a simpler proof.\n- The calibration guarantees (Proposition 4) are more general than the existing ones for regular FY losses. They use a novel and not straightforward proof technique.\n- Generalized Bregman divergences (Appendix A) are completely new and not straightforward.\n\nOverall, we believe that our paper advances the field of energy networks by introducing a principled loss construction with theoretical guarantees.\n\nWe are happy to make further clarifications if needed.",
" Thank you for the positive assessment and constructive comments. We have taken into account your comments in the revised manuscript (modifications are highlighted in blue) and hope you will consider increasing your score.\n\n> The work can be considered a novel combination of existing techniques (Fenchel-Young losses and \\Phi-conjugates). It is not strikingly original but still valuable.\n\nTechnically, we do not combine Fenchel-Young losses but generalize them. We emphasize that, unlike classical conjugate functions (a.k.a. Legendre-Fenchel transforms), Phi-conjugate functions are not well-known, studied or used in the ML community. Therefore, we believe that our paper is original and will bring some awareness to this powerful tool. \n\n> the paragraph in lines 140-148 (“Existing loss functions for energy networks”) is too condensed and hard to parse, it should be expanded\n\nWe agree and have added more details in the revised manuscript. \n\n> section 6 is also hard to follow, it would be useful to explain at the beginning of the section what calibration is in this setting\n\nMany times, notably for differentiability reasons, the loss used at training time (here, our generalized Fenchel-Young loss) is used as a surrogate / proxy for the loss that we use at test time (e.g., zero-one loss, precision at k, etc). Calibration guarantees ensure that we still minimize the excess of risk of the test loss (a.k.a. target loss), even though we use a different loss at train time. We added a clarifying paragraph.\n\n> In Proposition 3, the list of properties is a bit long. Some of the properties are commented on below the proposition, but the attention of the reader gets lost, as it is not apparent which properties are most relevant. One suggestion is to defer to the appendix the properties that do not add much to the explanation.\n\nThank you for the feedback. We decided to keep the entire Proposition but have improved the connection between the Proposition and the explanations. Please check the revised manuscript.\n\n> Computationally, how do generalized FY losses compare to the existing alternatives? It would be good to compare time.\n\nIn Table 5, we compared 4 loss functions : the proposed generalized FY loss, the cross-entropy loss, the generalized perceptron loss and the energy loss. The big-O complexity of the first 3 losses is dominated by the cost of solving the argmax problem (Equation 4 in the paper). Therefore their big-O complexity is the same. The energy loss is cheaper because it doesn’t require solving the argmax but it is known to perform poorly (LeCun 2006), as also confirmed by our empirical results in Table 5. ",
" Thank you for taking the time to review our paper. We believe your score mainly stems from the claim of \"lacking novelty\" and \n\"being straightforward\". We believe this is not justified, as we hope to convince you below.\n\n> The idea of extending Fenchel-Young loss, which uses an energy function given by a scalar product is rather direct and straightforward and does not represent a significant technical challenge. For instance, the proofs are often direct consequences of well-known results such as the envelop theorem in proposition 1.\n\nIt seems that this review ignores all of our key (mathematical) contributions:\n- We introduced the new notion of a regularized energy network.\n- Generalized conjugates are not well-known at all in the ML community. On the contrary, our paper will bring awareness to this new tool.\n- The smoothness result (Proposition 1, item 6) is new and not straightforward.\n- The lower bound result (Proposition 3, item 5) is more general than the existing one for regular FY losses and uses a simpler proof.\n- The calibration guarantees (Proposition 4) are more general than the existing ones for regular FY losses. They use a novel and not straightforward proof technique.\n- Generalized Bregman divergences (Appendix A) are completely new and not straightforward.\n\nThe gradient computation indeed follows from envelope theorems (Danskin’s theorem if Phi is convex in v or Rockafellar’s theorem otherwise). We do not claim that this is difficult. Our main point is that our loss enjoys easy-to-compute gradients without argmax differentiation, thanks to this property.\n\nOverall, we believe that our paper advances the field of energy networks by introducing a principled loss construction with theoretical guarantees.\n\n> Given that most of the experimental results show only a marginal improvement compared to more standard objectives, I am not fully convinced by the significance of the results. Do the authors have situations in mind where one requires such more general losses and where simpler approaches fail?\n\nOur contribution is first and foremost a mathematical one. On the empirical side, while the improvements are indeed not big for the multilabel classification experiment, we argue they are significant in the imitation learning experiment. The pairwise model presented in the experiments is precisely a good example of an energy network enabled by our loss. Note that the goal of this paper is not to argue that energy networks are state-of-the-art for the tasks considered in our experiments but to introduce a better loss for learning them. This is confirmed empirically in Table 5, where we compared our loss with existing losses. \n\n> The paragraph in L140-148 is a bit confusing: what do the authors mean by measuring the discrepancy between p_{\\Omega} and without regularization Omega? Isn’t p_{\\Omega} defined by a choice of a function Omega?\n\nThank you, we agree this was confusing. By “without regularization”, we meant that Omega is 0. The regularization term Omega is independent of v, so it can be omitted in the energy loss. We added a clarification to the revised manuscript.\n\n> Envelope thm: The authors refer to some assumptions in the appendix under which the envelope theorem holds. These assumptions are distilled in the text of the proof. 
They should be explicitly stated somewhere (ideal in the main text).\n\nWhile we agree this should ideally be the case, this choice was made for space reasons because the assumptions are too long to state fully in the main text. Since we already mention that the precise assumptions are deferred to the appendix, we believe that this is an acceptable compromise. ",
" > The most common approaches to training EBMs are approximate approaches, where the approximation comes from using MCMC to sample from the EBM. This is because training an EBM using the exact gradients is intractable, due to differentiation through argmax/argmin. This paper constructs new loss functions that can be used to tractably train an EBM using exact gradients.\n\nThis is a good summary of our paper and indeed we do not require MCMC techniques, unlike existing approaches for EBMs.\n\n> The experiments are done with great thoroughness. The details for reproducibility are included, and the experiments are done in a robust manner (e.g. averaging over multiple seeds and properly tuning hyperparameters on validation sets).\n\nThank you very much for the very positive review.",
" The main contribution of the paper is to introduce and study generalized Fenchel-Young losses, which are losses for regularized energy networks that mirror Fenchel-Young losses, but where the Euclidean inner product is replaced by the energy function \\Phi. They compare their losses with existing losses in the settings of multilevel classification and imitation learning. Originality: The work can be considered a novel combination of existing techiques (Fenchel-Young losses and \\Phi-conjugates). It is not strikingly original but still valuable. It is clear how the work differs from previous works, which are appropriately cited for the most part.\n\nQuality: The work seems technically sound (I haven’t checked most proofs in the appendix).\n\nClarity: Some parts need to be clearer. See questions and suggestions.\n\nSignificance: The results provided are relevant as energy networks trained with generalized FY losses have better test accuracy in 4 of the 6 multilevel classification settings, and on 3 out of 4 tasks in imitation learning. - There are certain points which could be made clearer and should be expanded: (a) the paragraph in lines 140-148 (“Existing loss functions for energy networks”) is too condensed and hard to parse, it should be expanded, (b) section 6 is also hard to follow, it would be useful to explain at the beginning of the section what calibration is in this setting. On the other hand, sections 4 and 5 are reasonably clear.\n\n- In section 6, Proposition 3, the list of properties is a bit long. Some of the properties are commented on below the proposition, but the attention of the reader gets lost, as it is not apparent which properties are most relevant. One suggestion is to defer to the appendix the properties that do not add much to the explanation.\n\n- Computationally, how do generalized FY losses compare to the existing alternatives? It would be good to compare time.\n Everything ok.",
" The paper introduces a class of losses based on a generalized Fenchel-Young inequality for learning energy models. The idea is to maximize the agreement between the target $y$ and a feature $v = f(x)$ obtained using a model $f$ applied to an input $x$. The agreement is measured by minimizing an energy function $\\phi(v,y)$ to which a regularization $\\Omega(y)$ is added. Hence, for a given a feature $v$, the corresponding prediction $p(v)$ is given as the maximizer of the objective $\\max_p F(v,p) := \\phi(v,p)-\\Omega(p)$. The model $f$ learned by minimizing the discrepancy between $F(v ,y)$ and the optimal value $F(v,p(v))$. In other words, minimizing $L(v,y) = F(v,p(v))-F(v ,y)$.\n\nThe paper shows, amongst other properties, that such objectives $L$ can be optimized without the need to differentiate wrt the optimal $p(v)$ and can thus be implemented easily. \n\nSimple experiments show that the approach allows to consider more general losses that can yield improvement on classification and imitation learning tasks.\n Strenghts:\nThe paper is clearly written and the results and derivations are sound. The experiments, although basic, show a marginal improvement, especially in the context of imitation learning. \n\nWeaknesses: Originality and Significance\nOriginality: The idea of extending Fenchel-Young loss, which uses an energy function given by a scalar product $ \\phi(v,y)= <v,y>$ is rather direct and straightforward and does not represent a significant technical challenge. For instance, the proofs are often direct consequences of well-known results such as the envelop theorem in proposition 1.\n\nSignificance: Given that most of the experimental results show only a marginal improvement compared to more standard objectives, I am not fully convinced by the significance of the results. Do the authors have situations in mind where one requires such more general losses and where simpler approaches fail? \n\n\n\n \n\n\n\n\n\n - The paragraph in L140-148 is a bit confusing: what do the authors mean by measuring the discrepancy between p_{\\Omega} and $y$ without regularization Omega? Isn’t p_{\\Omega} defined by a choice of a function Omega? \n\n\n- Envelope thm: The authors refer to some assumptions in the appendix under which the envelope theorem holds. These assumptions are distilled in the text of the proof. They should be explicitly stated somewhere (ideal in the main text).\n\n\n\n\n\n\n\n The proposed approach has a rather limited novelty (both technical and conceptual). However, this can be mitigated if the authors provide more evidence for the significance of such generalization in practice.",
" The most common approaches to training EBMs are approximate approaches, where the approximation comes from using MCMC to sample from the EBM. This is because training an EBM using the exact gradients is intractable, due to differentiation through argmax/argmin. This paper constructs new loss functions that can be used to tractably train an EBM using exact gradients.\n\nThe authors apply their approach to imitation learning and multilabel classification.\n\nNote: I am not very familiar with theory, so I did not read the math sections closely.\n The experiments are done with great thoroughness. The details for reproducibility are included, and the experiments are done in a robust manner (e.g. averaging over multiple seeds and properly tuning hyperparameters on validation sets).\n\nThe authors show that they are able to achieve competitive performance on multilabel classification, despite using energy networks that aren’t arbitrarily nonlinear. In other words, they show that the losses arising from the energy networks they use in these experiments are expressive enough to be accurate.\n None at this time I do not see any potential negative societal impact of this work. "
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"KaYiyvqWIs",
"lF71vlTyTpl",
"Fe-5NDgJet",
"Yck7SSI2X9A",
"hBBeano50Rr",
"U2JYUW2mcvw",
"vF9xjiQrZ7",
"nips_2022_CmD5z_2DVuM",
"U2kZSpPghLk",
"U2JYUW2mcvw",
"yedKwILEYkh",
"nips_2022_CmD5z_2DVuM",
"nips_2022_CmD5z_2DVuM",
"nips_2022_CmD5z_2DVuM"
] |
nips_2022_wcBXsXIf-n9 | Reaching Nirvana: Maximizing the Margin in Both Euclidean and Angular Spaces for Deep Neural Network Classification | The classification loss functions used in deep neural network classifiers can be grouped into two categories based on maximizing the margin in either Euclidean or angular spaces. Euclidean distances between sample vectors are used during classification for the methods maximizing the margin in Euclidean spaces whereas the Cosine similarity distance is used during the testing stage for the methods maximizing margin in the angular spaces. This paper introduces a novel classification loss that maximizes the margin in both the Euclidean and angular spaces at the same time. This way, the Euclidean and Cosine distances will produce similar and consistent results and complement each other, which will in turn improve the accuracies. The proposed loss function enforces the samples of classes to cluster around the centers that represent them. The centers approximating classes are chosen from the boundary of a hypersphere, and the pairwise distances between class centers are always equivalent. This restriction corresponds to choosing centers from the vertices of a regular simplex. There is not any hyperparameter that must be set by the user in the proposed loss function, therefore the use of the proposed method is extremely easy for classical classification problems. Moreover, since the class samples are compactly clustered around their corresponding means, the proposed classifier is also very suitable for open set recognition problems where test samples can come from the unknown classes that are not seen in the training phase. Experimental studies show that the proposed method achieves the state-of-the-art accuracies on open set recognition despite its simplicity. | Reject | This paper proposed to use least-squares loss functions in training deep neural networks. The main idea is to encode class means whose mutual distances are all equal. The method is simple but efficient. However, a similar idea has been widely used in multi-class classification (SVM and Fisher discriminant analysis) and spectral clustering. More specifically, one reviewer commented that this work encodes class labels as high-dimensional vectors similar to one-hot, and then uses a least-squares loss. Although the authors did not accept this comment, it is essentially right. This idea has been used, for example, in the following references:
1) Yoonkyung Lee, Yi Lin & Grace Wahba. Multicategory Support Vector Machines: Theory and Application to the Classification of Microarray Data and Satellite Radiance Data.
2) Vardan Papyan, X. Y. Han, and David L. Donoho. Prevalence of neural collapse during the terminal phase of deep learning training.
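For concreteness, a minimal sketch of the classical construction the meta review alludes to: encode each class label as a (one-hot-like) target vector and fit by least squares. All names are illustrative; the ridge term is an assumption added for numerical stability.

```python
import numpy as np

def one_hot_least_squares(X, y, num_classes, lam=1e-3):
    # X: (n, d) features; y: (n,) integer labels. Encode labels as one-hot target
    # rows and fit a linear map W by ridge-regularized least squares; classify
    # new points with the argmax of X_new @ W.
    T = np.eye(num_classes)[y]
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ T)
```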
| val | [
"fm6zlb8uua2",
"cE2g4BAvUQ2",
"K6tCupmTs3u",
"sT30yYcve5T",
"OUkyecs1Aw",
"tUpt67GoRh4",
"RJQQj8jhBNX",
"YHIPBIj_v6",
"P18NlXgHQ2s0",
"KFdIL9E9_gu",
"dvZ5zJaXgjZ",
"q5Me_f0m9Mmi",
"rja5nFr-Ol7",
"NohUECiqEous",
"SMG52nWZOrj",
"LsSQQVxKxxE",
"i7iN_0Svvc",
"zgf4E5Q8Uw6",
"Lm2GrdZbsZl",
"_PIdOxESwc2",
"6dmycsk2Of2"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" 1) We have written a Motivation subsection to explain our motivation. There are theoretical proofs showing that the data samples lie on the vertices of a regular simplex (equivalently on the boundary of a hypersphere) in high-dimensional spaces. Therefore, it makes perfect sense to map the class-specific data samples to the centers chosen as the vertices of the simplex. Experimental studies support these claims. Our proposed methods outperform other methods using Euclidean distances only, and the reason for this is simple. None of these methods choses the class specific centers from the vertices of a regular simplex and attempts to minimize the distances between the samples and corresponding centers as in the proposed method. Please read our Motivation subsection for more details.\n\n2) Regarding our alternative solution to DAM, we just wanted to point out that there are alternative solutions to increase dimensionality if we use our special architectures since the reviewers complained about the complexity of the DAM module. DAM already achieved satisfactory results and we did not need to use that alternative solution. The reviewer is also wrong regarding the paper [1] and it has the same dimensionality problem as in the proposed method. The authors use orthogonal vectors to represent the classes (see 3.2. Centroid Generation in [1]). This is basic linear algebra. If there are C classes to represent, this means that the feature dimension must be at least C in order to get an orthogonal set of vectors. Furthermore, the authors of [1] also accept this restriction and they propose to increase the dimension from D to C by adding a fully connected layer to project features from D dimensions to C dimensions (see Section 3. Discriminative Loss in [1]). However, this is not enough since using linear combinations does not increase the dimension, they have to use nonlinear activation functions as in our proposed DAM module.\nRegarding [3], that method does not have the dimension problem, but the same authors publish a recent paper and mention methods using orthogonal/orthonormal weight vectors for hyperspectral uniformity. All these variants will also have the dimension problem similar to [1] since the dimension D must be larger than or equal to C to represent the C classes with linearly independent orthogonal basis vectors.\n\n3) We explicitly indicated that there are also methods using larger networks for the Tiny ImageNet dataset as in the proposed method. In addition, open set recognition methods also utilize background class samples by creating samples via GANs. Therefore, we believe that our comparisons are fair. We did not implement open set recognition methods by ourselves, we just reported the accuracies from the literature. Therefore, reimplementing all these methods by using background class is not feasible. But, we can also report our accuracies without using background class. \n\nRegarding m parameter, we did not have any trouble for fixing it since our centers are fixed to certain positions. We already know the distances between the class centers chosen as simplex vertices. All distances are equal, we simply checked the largest intra-class distances within classes and determined a margin based on this. Setting margin term to half of the radius worked well for all cases.\n",
" Thanks for the response. The authors have addressed some of my concerns, but I still think the weaknesses of this paper outweigh the strengths.\n1. The motivation of this paper is still not clear enough. The authors claim that they maximize the margins in both the angular and Euclidean spaces to make the proposed method suitable to any kinds of classification problems. However, this cannot explain why the proposed method works better than those who only optimize the Euclidean space in general object classification problems. Besides, the proposed method doesn’t show superiority in face recognition as in Table 3.\n2. Considering the problem of DAM, the authors provide a new solution to flatten the final feature map of ResNet to get a 25088-d feature vector to avoid a large number of parameters in DAM. However, the authors haven’t evaluated this solution empirically and the number of classes are still restricted by the spatial size and the number of channels of the final feature map. Besides, the methods in [1] and [3] don’t have severe dimension problems as the proposed method since the number of parameters of the final linear layer in [1] is only 256 \\times C and [3] doesn’t project the dimension of feature vectors to C.\n3. The comparisons are unfair as the proposed DSC uses a large amount of data from 80 Million Tiny Images dataset as the background class. Besides, a deeper network is employed for the Tiny ImageNet dataset. These settings are inconsistent with the other methods, leading to unfair comparisons. When the settings of background class of some existing methods are unknown, I think reimplementing the methods with 80 Million Tiny Images dataset as the background class is a fair comparison approach. In addition, it is unknown how to set the hyperparameter m in Equation 5.",
" Thank you for constructive comments. Suggested related papers are highly appreciated and we will add those references and comparisons to these methods to the final version.",
" Thanks for the response. Despite some limitations, I think the paper proposes a simple yet useful idea. I hope my comments can help the authors improve their paper.",
" 1) The reviewer is right, and most methods such as ArcFace or UniformFace, etc. do not have the feature dimension restriction as in our proposed method. However, many methods targeting uniform distributions [R1,R2,R3] on hyperspheres also have the same dimension restriction. But, we already proposed a method to handle this problem and it partially solved it. As we explained in our responses, we can provide a better solution if we change the network architectures.\n\n2) Regarding our response “[1] is different from UniformFace”, sorry for the misunderstanding. We did not mean this and there is not such a comment in our response. We just wanted to emphasize that our proposed method is more similar to UniformFace method since it introduces a classification loss function as in our proposed method. On the other hand, [1] considers the layer regularization problem and apply hyperspherical uniformity to the learned weights in all layers. Therefore, this method is more complex (in some sense it is also more sophisticated since it applies the hyperspherical uniformity to all neural network layers). From this point of view, UniformFace is more like a special case of the [1] where hyperspectral uniformity is applied to only classification layer.\n\n3) Using DAM module introduces more weights to learn as indicated by the reviewer. However, please note that we do not need DAM module for open and closed set recognition experiments conducted in our study. For closed set recognition experiments, architectures are identical for all tested methods (we used the same dimensional feature spaces for all tested methods), therefore they are directly comparable. We only used DAM module for face verification results given in Table 3.\n\n[R1] Do, Thanh-Toan, et al. \"A theoretically sound upper bound on the triplet loss for improving the efficiency of deep distance metric learning.\" In CVPR. 2019.\n\n[R2] Nitin Bansal, Xiaohan Chen, and Zhangyang Wang. Can we gain more from orthogonality regularizations in training deep networks? In NeurIPS, 2018.\n\n[R3] Weiyang Liu, Yan-Ming Zhang, Xingguo Li, Zhiding Yu, Bo Dai, Tuo Zhao, and Le Song. Deep\nhyperspherical learning. In NIPS, 2017.\n",
" Unfortunately, for both methods, we cannot use the learned representation from recent self-supervised learning method such as BYOL or MoCo on some dataset with proposed loss function. Actually, MoCo already uses a distance metric learning function called contrastive loss function (it is given in Eq. (1) in MoCo paper) for classification. As we explained at the Introduction part of our paper, contrastive loss minimizes the Euclidean distances between positive sample pairs and maximizes the distances between negative samples pairs. Therefore, it learns feature embeddings based on this contrastive loss function, and it is much better to compare this method directly to the proposed one in this case. Regarding BYOL, it uses two deep CNN architectures, referred to as online and target networks, that interact and learn from each other. BYOL simply starts from an augmented view of an image, and then it trains its online network to predict the target network’s representation of another augmented view of the same image. Therefore, it is hard to integrate our proposed loss function to such a specially designed network.",
" The reviewer has read the rebuttal, and most of the concerns are resolved.\n\nThe reviewer is not sure whether the authors understood the evaluation of the proposed method on representation learning settings. The point was to evaluate the learned representation from recent self-supervised learning method such as BYOL or MoCo on some dataset (e.g., ImageNet) with proposed loss function (Since original MoCo or BYOL did not use the proposed loss function for downstream classification task), not directly comparing with MoCo or BYOL. ",
" I appreciate the authors for the their response. \n\n1. The major concern that I have is that its feature dimension scales linearly with the number of classes, while the standard methods do not. For example, learnable classifiers can have 1024 feature dimension for million-class classification (meaning that the feature dimension is independent of the number of classes). This is the major limitation that I am talking about.\n\n2. I have no doubt that the proposed method encourages hyperspherical unformity and can be viewed as a solution in (d+1 x d) cases. My point is just to clarify that the original response \"[1] is different from UniformFace\", since this contradicts what I understood.\n\n3. There are multiple aspects for the additional parameters. First, the DAM module will inevitably consume more model parameters. Second, to obtain the same performance, it is likely that the standard training can do it without using that many parameters. I think this concern can be well addressed, if the authors can conduct closed-set experiments on more standard settings, say the same CIFAR-100 or ImageNet settings as [4] (also with reported number of model parameters). In that case, we are very familiar with what the performance of standard ResNet looks like. \n\n[4] Identity Mappings in Deep Residual Networks, ECCV 2016",
" 1) We do not agree with the reviewer regarding the comment our proposed method is largely limited to small number of classes, and this is in general a huge limitation for many applications (for example, face recognition). The typical feature dimension is 2048 or 4096 in deep CNN architectures. Therefore, we can apply our proposed method to large datasets including 2049 or 4097 classes. For the most of the datasets considered as large-scale datasets, e.g., ImageNet, MS COCO, NUS-WIDE datasets, etc., the number of class categories is much smaller than these values. Therefore, we can apply our proposed to most of the large-scale classification problems without needing DAM module. The number of class categories is much larger than these values for mostly large-scale face recognition datasets. In such cases, we can use DAM module or our own architectures that do not use the last fully connected layers as we described before.\n2) Regarding [1], we rechecked the paper again. As the reviewer stated, this method does not enforce the distances to be the same for all classes. Instead, it encourages uniform distributions on a hypersphere (in a way that the weights will not concentrate on certain areas and entire sphere shell will be used). This makes perfect sense especially for regularization of layer weights since the diverse weights will carry more information and reduce redundancy. When this idea is used as the final classification layer, the resulting method is similar to the UniformFace method as the reviewer pointed out. The authors also explicitly state that their method does not enforce orthogonality among the learned weights, otherwise it would be a completely different story and they would have the same restriction as in our proposed method. But, the same authors publish a more recent paper [2] recommended by the reviewer. In this paper, the authors show close ties between the hypersperical uniformity and orthogonality. More precisely, they show that orthogonal or orthonormal weight vectors are uniformly distributed on a hypersphere. As we indicated in our previous replies, any method that will enforce the orthogonality for hyperspherical uniformity will have the same dimension problem as in our proposed method. Lastly, please check the top of the page 5 in [2]. The authors state that the vertices of a regular (d + 1)-simplex (i.e., (d + 1)-dimensional convex hull of d+ 2 distinct vectors with equal pairwise distances) are universally optimal. This is exactly what we proposed in our paper. This clearly shows that our proposed method provides a global optimal solution for hyperspherical uniformity.\n3) Regarding “The comparison to the other method may not be fair. I think there are more model parameters for the proposed method”, we are not sure what the reviewer meant with more model parameters. Which parameters are the reviewer referring to? For closed set recognition, we need to fix only u parameter and we already explained how we fixed this parameter. We also provided our ablation study results showing the accuracy changes based on different u values. For open set recognition, we need to fix m and \\lambda values. Regarding m parameter, we did not have any trouble for fixing it since our centers are fixed to certain positions. We already know the distances between the class centers chosen as simplex vertices. All distances are equal, we simply checked the largest intra-class distances within classes and determined a margin based on this. 
Setting margin term to half of the radius worked well for all cases. For \\lambda values, we fixed it based on cross-validation and come up with a general formulation for it.\n\n",
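A minimal numerical sketch of the construction discussed throughout this thread: the C vertices of a regular simplex in R^{C-1}, all at radius u from the origin and with equal pairwise distances. This is one standard parametrization; the paper's Equations (1-2) may use an equivalent but different one.

```python
import numpy as np

def simplex_centers(C, u):
    # Center the C standard basis vectors of R^C at their mean: the resulting points
    # span a (C-1)-dimensional subspace and already have equal pairwise distances.
    # Project onto that subspace via SVD, then rescale every vertex to radius u.
    E = np.eye(C) - 1.0 / C
    _, _, Vt = np.linalg.svd(E, full_matrices=False)
    V = E @ Vt[:C - 1].T                     # (C, C-1) isometric coordinates
    return u * V / np.linalg.norm(V, axis=1, keepdims=True)
```

A quick sanity check: the off-diagonal entries of the pairwise distance matrix of `simplex_centers(10, 64.0)` are all identical, which is exactly the equal-distance property the class centers rely on.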
" Thanks for the response.\n\n1. Whether open-set recognition tasks have large or small number of tasks still does not change the fact that this method is largely limited to small number of classes. I think this is in general a huge limitation for many applications (for example, face recognition).\n\n2. [1] is in fact a superset of UniformFace. If I understand it correctly, its experiment on face recognition (sphereface+ as in the paper) is exactly to apply the uniform loss to the last layer. Please correct me if I misunderstood the paper.\n\n3. The comparison to the other method may not be fair. I think there are more model parameters for the proposed method. It seems to be hard to evaluate where the experimental gain is from. It may be better to also show the number of model parameters in the experiment.",
" Here we answer the questions raised by the reviewer. For Weaknesses part, see our first response. \n\nQuestion:\n1) Regarding parameters, selection of u is not very important as long as it is not fixed to small values such as 1. Theoretically, the data samples lie on the surface of a growing hypersphere as the dimension increases. For smaller dimensions, we can choose smaller values of u as we did for illustrations experiments (we fixed u to 5 for 2-dimensional inputs). But, for larger dimensions we need higher values. Also, after some value, increasing u value does not change the results much. These are the accuracies we obtained for Cifar100 dataset for various u values:\n\nu=32, accuracy = 76.2%\n\nu=64, accuracy = 79.5%\n\nu= 100, accuracy = 79.4%\n\n u= 150, accuracy = 79.9%\n\nu= 200, accuracy = 79.0%\n\nRegarding m parameter, it is only used for open set recognition problems. Moreover, we did not have any trouble for fixing it since our centers are fixed to certain positions. We already know the distances between the class centers chosen as simplex vertices. All distances are equal, we simply checked the largest intra-class distances within classes and determined a margin based on this. Setting margin term to half of the radius worked well for all cases. For all experiments, we did not fine-tune our classification network from a pre-trained network and started the network weights from scratch by initializing with random weights which is the common practice used for initializing network weights. \n\n2) Regarding [1], it does not directly solve the problem. It simply allows to use 2d+4 class centers instead of d+1 centers in d-dimensional spaces. If 2d+4 is smaller than C, we cannot use it for classification. In contrast, DAM module allows us to increase the dimension of the feature space to any desired number without any restriction. \n",
" First of all, we would like to correct a misunderstanding. We do not map any label to the vertices of a regular simplex (in fact, in the proposed method, we treat the labels as scalars, therefore such an approach does not make any sense in our setting. It may be possible if the labels are encoded as one hot vectors, but we do not treat them as vectors in the proposed method). Instead, we map the feature vectors of the samples in the classes to the vertices of the regular simplex where each class is approximated with a simplex vertex. The size of the feature samples is d, and it is required that it is larger than or equal to C-1. Also, all distances between the classes are same, not just the distances between two classes.\n\nWeaknesses:\nRegarding the weakness “Although the paper explains the benefits of the setting, it may need more explains or experiments or theorems to support the method. It's hard to convince me that the method works well now”, as we clearly indicated in our Motivation subsection in the paper, there are theoretical studies proving that the data samples lie at the vertices of a regular simplex in high-dimensional spaces. Therefore, a classifier mapping the class specific samples to the vertices of a simplex makes perfect sense. In addition, since the class samples are compactly clustered around their corresponding class centers, this makes the proposed method ideal for open set recognition tasks where one needs to reject the unknown class samples during testing phase. Experimental results also verify theoretical findings since the proposed method typically achieves the state-of-the-art accuracies as seen in Table 1. More precisely, although we proposed a general classification method for closed set recognition settings, the proposed method outperforms all existing sophisticated open set recognition methods. Our accuracies are now new state-of-the-art accuracies on most of the tested datasets. Moreover, our proposed method also beats related state-of-the-art loss functions on closed set recognition experiments. The performance difference is significant especially on Cifar-100 dataset as seen in Table 2. Our proposed method could not beat the state-of-the-art only for large-scale face recognition problems since we had to use DAM module in these experiments. However, our accuracies are still encouraging in the sense that they are generally closer to the best reported accuracies.\nRegarding DAM module, as we explained for other reviewers, we wanted to design a plug and play module that can be used with any desired deep CNN architecture without any changes. DAM module partially solved our problem and yielded satisfactory accuracies closer to the state-of-the-art. We can provide a better solution by designing our own architecture instead of using our proposed plug and play DAM module. To this end, we can avoid the fully connected layers that are used for dimension reduction in the last layers of deep CNNs. For example, in ResNet architectures, the dimension of the feature space is 25088 just before fully connected layers, and it is reduced to 512 after fully connected layers. We can avoid the last fully connected layers and use high-dimensional outputs of these earlier layers. For example, using 25088 dimensional feature space is enough for training the large-scale MS1MV2 dataset we used in our tests without any need for dimension increase. 
At this point, we would like to point out the fact that all methods that target uniformly distributed class centers on the hypersphere have the same dimension problem. The authors simply did not realize it or they did not conduct experiments at large scale as we did.\n",
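A minimal sketch of the nearest-center decision rule with rejection that the response above appeals to for open set recognition; the threshold policy is illustrative (elsewhere in the thread the authors mention setting the margin to half of the radius).

```python
import numpy as np

def predict_open_set(features, centers, threshold):
    # features: (n, d) test embeddings; centers: (C, d) fixed class centers.
    # Assign each sample to its nearest center, and reject it as unknown (-1)
    # when even the nearest center is farther away than the threshold.
    dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=-1)
    labels = dists.argmin(axis=1)
    labels[dists.min(axis=1) > threshold] = -1
    return labels
```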
" This paper proposed a method to combine the margin maximization in both Euclidean and angular spaces. More specifically, the method maps the labels to the vertices of a regular simplex as new labels.\n\nSection 2: \n1) explains the motivation of the method, i.e. the two distances are the same once the data lie on the boundary of a hyper-sphere. \n2) more details about the method, including the vertices of a regular simplex, the case that uses the background class samples and the Dimension Augmentation Module(which is used for increasing the dimension, since the method requires the dimension of data is nearly the data size).\n\nSection 3 and Appendix provide more details about the experiments:\n1) feature representations learned by different loss function.\n2) results of Open Set Recognition and Closed Set Recognition.\n *The proposed method is interesting(also the DAM is interesting) and simple, and the simplicity brings several advantages (few hyper-parameters, proper acceptance regions, suit for unbalanced datasets), thus can be easily applied for classification problems. The experiments show that the method achieves good accuracy in several cases (especially the Open Set Recognition).\n*The paper is well organized(especially the Section 2, which explains the main idea of the paper).\n\n*Although the paper explains the benefits of the setting, it may need more explains or experiments or theorems to support the method. It's hard to convince me that the method works well now.\n*The DAM seems need large size when the C is large. 1. Is there more explains about the hyperparameters such as, how there were chosen? Especially the u, which is the size of the hypersphere(it seems the u doesn't affect the classification?) . In contrast, is the m very sensitive? And what's the initialization?(what do the paper means by 'completely random'). \n2. Does DAM work better than the method in [1], or just the method in [1] is not suitable?\n\n[1]Almost-equidistant sets. No. The authors partly mentioned the limitations in the experiments (Section 3, especially 3.3.2) and the summary (Section 4). The limitation focus on the large-scale problem and the DAM.",
" Here, we would like to add additional comments since we could not fit some details in our first response.\n\nRegarding comparison to [1], although this paper focuses on distance metric learning, it uses class centers chosen as the basis vectors of C-dimensional space as anchors. Then, as in triplet loss, it attempts to minimize the distances between the data samples and the corresponding class centers (anchors) and to maximize the distances between the samples and rival class centers. Therefore, this method is quite different form our proposed one, and it can be seen as a quantized distance metric learning approach, where the anchors are set to some fixed centers. However, the authors make 2 critical mistakes: The first mistake is to choose the centers from the surface of a unit hypersphere (a hypersphere with radius 1). As we discussed in our paper, the data samples lie near the surface of a growing hypersphere as the dimension increases. Therefore, setting the hypersphere radius to 1 is wrong, and similar findings and discussions are given in ArcFace [16] and CosFace [15] papers. The second mistake is to use a fully connected layer alone for increasing the dimensionality. A fully connected layer just uses the linear combination of existing features and the resulting space has the same dimensionality. Therefore, this method will not work for large-scale problems. They have to use activation functions to introduce nonlinearity and increase the dimension. If we theoretically compare this method to ours, our proposed method is much simpler and run-time complexity of the proposed method is significantly less. For empirical comparison, we will run this method on the same datasets (with the exception of large-scale face recognition datasets) we used in our paper and report the results in the final version. \n\nRegarding comparison to the paper [2], the method is similar to Uniformface which proposes a classification loss function for learning uniformly distributed representations on the hypersphere. There are such approaches in the literature, but they are complex since they need many hyperparameters such as different weights for loss terms and margin terms. In contrast, our proposed method does not have such limitations as stated in our paper. In addition, we would like to point out the fact that, all these methods will end up an approximation solution whereas our proposed method uses the result of the optimal solution given in Equations (1-2) in our paper.\n\nLastly, regarding the scale parameter u, we have already conducted experiments by selecting different values. Experiments verify that the selection of u is not very important as long as it is not fixed to small values such as 1. Theoretically, the data samples lie on the surface of a growing hypersphere as the dimension increases. For smaller dimensions, we can choose smaller values of u as we did for illustrations experiments (we fixed u to 5 for 2-dimensional inputs). But, for larger dimensions we need higher values. Also, after some value, increasing u value does not change the results much. These are the accuracies we obtained for Cifar100 dataset for various u values:\n\nu=32, accuracy = 76.2%\n\nu=64, accuracy = 79.5%\n\nu= 100, accuracy = 79.4%\n\n u= 150, accuracy = 79.9%\n",
" We would like to thank the reviewer for revision of his/her review with a more fair one. \n\nWeaknesses:\n1) As we indicated in our first response, we have proposed a novel classification loss function for deep neural networks and compared it to other state-of-the-art loss functions on the same architecture. We did not propose a novel deep neural network architecture. Therefore, we can compare our proposed method to MoCo since it also proposes a similarity learning function that can be used for classification. But, comparing to BYOL is irrelevant since it introduces a completely novel architecture. We will add results of MoCo method in the final version.\n2) Regarding robustness to unbalanced datasets, we conducted some simple tests on Cifar-10 dataset examples plotted in Fig. 3. In our proposed method, the distances between the samples and their corresponding centers are minimized independently of each other, thus the proposed method worked well and returned similar embeddings as in Fig. 3(a) even one of the class samples are significantly reduced. We will try to fit these results to Section 3.1 in the final version.\n3) Regarding DAM module, we wanted to design a plug and play module that can be used with any desired deep CNN architecture without any changes. DAM module partially solved our problem and yielded satisfactory accuracies closer to the state-of-the-art. We can provide a better solution by designing our own architecture instead of using our proposed plug and play DAM module. To this end, we can avoid the fully connected layers that are used for dimension reduction in the last layers of deep CNNs. For example, in ResNet architectures, the dimension of the feature space is 25088 just before fully connected layers, and it is reduced to 512 after fully connected layers. We can avoid the last fully connected layers and use high-dimensional outputs of these earlier layers. For example, using 25088 dimensional feature space is enough for training the large-scale MS1MV2 dataset we used in our tests without any need for dimension increase. At this point, we would like to point out the fact that all methods [1,2,3,4] (recommended by other reviewers) that target uniformly distributed class centers on the hypersphere have the same dimension problem. The authors simply did not realize it or they did not conduct experiments on large-scale as we did. Furthermore, [4] provided a wrong solution just using fully connected layers to increase the dimension.\n4) Regarding distance matrix computation, we believe that there is no need to compute the distance matrices for other tested methods such as Softmax, ArcFace or CosFace since it is well known that they already return semantically meaningful embeddings. In our case, since the proposed method enforces the same distances between training classes, semantic relations are ignored among the training classes. We just wanted to demonstrate that our proposed method returns meaningful feature embeddings for open set recognition settings where there are unseen classes during testing stage. The distance matrix given in Fig. 4 demonstrates that the proposed method returns feature embeddings that respect semantic relations among the training classes and unknown test classes.\n\nQuestions:\n1) Experiments verify that the selection of u is not very important as long as it is not fixed to small values such as 1. Theoretically, the data samples lie on the surface of a growing hypersphere as the dimension increases. 
For smaller dimensions, we can choose smaller values of u as we did for illustrations experiments (we fixed u to 5 for 2-dimensional inputs). But, for larger dimensions we need higher values. Also, after some value, increasing u value does not change the results much. These are the accuracies we obtained for Cifar100 dataset for various u values:\n\nu=32, accuracy = 76.2%\n\nu=64, accuracy = 79.5%\n\nu= 100, accuracy = 79.4%\n\n u= 150, accuracy = 79.9%\n\nu= 200, accuracy = 79.0%\n\n2) The dimensions of the vectors match, both vectors come from d-dimensional feature space. There is no need to fix the dimension to C-1 to apply the proposed method. The proposed method can be used as long as d is larger than or equal to C-1.\n\n[1] Learning towards Minimum Hyperspherical Energy, NeurIPS 2018\n\n[2] Learning with Hyperspherical Uniformity, AISTATS 2021\n\n[3] Regularizing Neural Networks via Minimizing Hyperspherical Energy, CVPR 2020\n\n[4] Do, Thanh-Toan, et al. \"A theoretically sound upper bound on the triplet loss for improving the efficiency of deep distance metric learning.\" In CVPR. 2019.\n\n",
" 1) We thank the reviewer for pointing out [1] which uses a similar idea as in the proposed method. This paper focuses on distance metric learning, and it uses class centers chosen as the basis vectors of C-dimensional space as anchors. However, there are 2 critical mistakes: The first mistake is to choose the centers from the surface of a unit hypersphere. As discussed in our paper, the data samples lie near the surface of a growing hypersphere as the dimension increases. Therefore, setting the hypersphere radius to 1 is not suitable for most problems, and similar findings and discussions are given in ArcFace paper. The second mistake is to use a fully connected layer alone for increasing the dimensionality. A fully connected layer just uses the linear combination of existing features and the resulting space has the same dimensionality (they need to use activation functions to introduce nonlinearity). Therefore, this method will not work for large-scale problems. Moreover, our proposed method is much simpler and run-time complexity of the proposed method is significantly less. For empirical comparison, we will run this method on the same datasets we used and report the results in the final version. \nRegarding the paper [2], the method is similar to Uniformface which proposes a classification loss function for learning uniformly distributed representations on the hypersphere. These methods are complex since they need many hyperparameters such as different weights for loss terms and margin terms, and provide approximate solutions. On the other hand, our proposed method uses the result of the optimal solution given in Equations (1-2) in our paper. Also, enforcing the distances between pairwise class centers to have the same value eliminates the need for introducing margin terms and yields a much simpler method. Setting margin terms is a difficult problem especially for deep learning methods since the feature representations also change during training. \nFor comparison between our proposed method and [3], see our reply for Reviewer 2 (a4BR). \n2) Regarding the importance of maximizing the margin in both the angular and Euclidean spaces, this helps to apply the proposed method to various object classification problems. As stated on page 2, the methods maximizing the margin in angular spaces are used only for face recognition problems, where the classes can be approximated with linear/affine subspaces (in this case ArcFace and similar methods estimate the most discriminative directions spanning the subspaces as the class-specific weights and use them for classification). However, this approximation does not work well for more general object classification problems such as Cifar100 or ImageNet. For such problems, methods that minimize the within-class variances and maximize the inter-class separation based on Euclidean distances work better. As a result, maximizing the margin in both spaces makes our proposed method well-suited to any kind of classification problem such as face recognition or more general object classification problems. We will make it more clear in the final version.\n3) We can provide a better solution by designing our own architecture instead of using our proposed plug and play DAM module. To this end, we can avoid the fully connected layers that are used for dimension reduction in the last layers of deep CNNs. For example, in ResNet architectures, the dimension of the feature space is 25088 just before fully connected layers. 
Using 25088 dimensional feature space is enough for training the large-scale MS1MV2 dataset without any need for dimension increase. At this point, we would like to point out the fact that all methods ([1] and [3] ) that target uniformly distributed class centers on the hypersphere have the same dimension problem. The authors simply did not realize it or they did not conduct experiments on large-scale as we did.\n4) We already conducted experiments by selecting different values for u and \\lambda parameters. We will add these results to the Appendix in the final version. Regarding u parameter, experiments verify that the selection of u is not very important as long as it is not fixed to small values such as 1. For larger dimensions, we need higher u values. Also, after some value, increasing u value does not change the results much. Regarding using 80 million Tiny Images, it is used as outlier exposure dataset for anomaly detection problems. For open set recognition, some studies state that they employ background class, but they do not specify the dataset or some studies use GANs to create background samples. Therefore, we decided to use 80 million Tiny Images as background class as in anomaly detection problems. Regarding using a deeper network for Tiny ImageNet dataset, some studies prefer deeper networks as in our study. For example, [38] employs a deeper network Wide-ResNet-40-4 for better accuracies. For all compared methods, we tried to tune the hyperparameters. ",
" We would like to thank the reviewer for the nice comments given at the Strengths part. \n\nWeaknesses:\nRegarding the first weaknesses, the biggest limitation of the proposed method is its feature dimension restriction. We cannot apply it to datasets having very large number of classes without using dimension augmentation module. However, the number of classes is typically small especially in open set recognition tasks, therefore our proposed method is a perfect match for such problems and moderate sized datasets. Therefore, it can be still used in many classification problems.\nRegarding the second limitation, we will definitely add these references to the final version of the paper. Although these methods are related to our study, there are big differences: First of all, we just use the proposed loss function as the final layer for classification purposes. Therefore, our proposed method bears more similarity to UniformFace method [18] given in our paper. This method also proposes a classification loss function for learning uniformly distributed representations on the hypersphere manifold through potential energy minimization as in [1,2,3]. However, the studies given at [1,2,3] consider the layer regularization problem and apply hyperspherical uniformity to the learned weights. Therefore, these methods are more complex (in some sense it is also more sophisticated since it applies the hyperspherical uniformity to all neural network layers). Consequently, there are many hyperparameters that must be fixed in the resulting method. In contrast, our proposed method is simple and there is no hyperparameter to tune. We also would like to point out the fact that these methods will have the same feature dimension restriction as in our proposed method if the methodology is applied to last classification layer. This can be seen in the arguments provided in [2]. More precisely, [2] demonstrates the close ties between the hyperspherical uniformity and orthogonality. In order to obtain an orthogonal (or orthonormal) set for d weight vectors, the dimension of the feature space must be higher than or equal to d. If there are one million classes for separation, one has to learn one million weight vectors for each class, which in return requires at least one million dimensional embedding space. Therefore, all methods given in [1,2,3] have the same dimension problem as in the proposed method. \nLastly, we would like to point out that our proposed method significantly outperforms all these methods. For example, our error rate is 4.1% on Cifar10 and 20.5% on Cifar100 datasets. In contrast, the method proposed in [1] yields 6.21% error rate on Cifar10 and 25.61% error rate on Cifar100; the method proposed in [2] produces 20.97% error rate on Cifar100 dataset with a deeper network, and finally [3] results in 24.33% error rate on Cifar100 dataset.\nRegarding the empirical gain of closed set recognition, the gain is not significant for easy datasets such as Mnist and Cifar10, but the gain is very significant for Cifar-100 dataset. It is 3.4% better than the closest best performing method, (the method using the Center Loss).\n\nQuestions:\n1) For closed set recognition, we applied the simple loss function given in Eq. (4). Since the hypersphere is centered at the origin, it does not matter whether the angles or Euclidean distances are used for classification. Both metrics yield the same accuracy. 
ArcFace and all related method mentioned in the paper work well for face recognition problems, but they do not perform well on more general object classification problems where there are large intra-class variations. The reasons for this are given on page 2 (lines 85-92). More precisely, face class samples in specific classes can be approximated by using linear/affine spaces, and the similarities can be measured well by using the angles between sample vectors in such cases. However, the subspace approximation does not work for many general classification problems, such as Cifar-100 or ImageNet datasets, where there are large intra-class variations and the subspace approximations do not fit to the classes.\n2) The feature dimension is 512 for the Cifar10 dataset since we use the ResNet-18 architecture. We did not use a 9-dimensional feature space. As class centers, we used the first 10 simplex vertices obtained by using the formulation given in Equations (1-2). Please note that, we do not need the reduce the dimension to the C-1 in order to use the proposed method. In Cifar10 dataset, we simply created 10 class centers whose dimension is 512.\n",
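A minimal sketch of the center-attraction objective described above as Eq. (4): pull each feature toward its fixed simplex-vertex class center. The mean-of-squared-distances reduction is our illustrative assumption, not necessarily the paper's exact form.

```python
import torch

def dsc_loss(features, labels, centers):
    # features: (n, d) network outputs; labels: (n,) integer classes;
    # centers: (C, d) fixed simplex vertices with d >= C - 1. On an origin-centered
    # hypersphere, shrinking the Euclidean distance to the correct center also
    # shrinks the angular distance, which is the point made in the response above.
    return ((features - centers[labels]) ** 2).sum(dim=1).mean()
```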
" 1) Regarding weakness 1, we do not combine any existing methods (If the reviewer knows similar methods, we will be glad if he/she shares those methods with us). In contrast, we propose a novel deep neural network classifier loss function that enforces the samples of classes to cluster around the centers that are chosen from the vertices of a regular simplex. This is a completely new methodology. The existing methods we discussed in Section 2.1 (Motivation) simply show that the high-dimensional data concentrate on the vertices of a regular simplex. There is one method using this information for clustering, yet it is a traditional unsupervised machine learning method using hand-crafted features. Regarding motivation, we allocated a complete subsection (Section 2.1 Motivation) to explain our motivation. Please read this subsection again. \n2) Claiming that classification task itself a small contribution is a complete nonsense. The classification is very important and active research area in machine learning and artificial intelligence fields. In fact, the reviewer admits this fact with his/her own words written in Limitations part. This is reviewer’s sentence: “The applicability of the proposed method is too limited. The recent trend is to learn good representations from the data and use those representations in various downstream tasks such as classification problems.” On one hand, the reviewer claims that classification is not very important , yet on the other hand he/she advises us a to apply our method for classification. We have done exactly the same thing. Our proposed method is a deep neural network classifier. We doubt that the reviewer have read a different paper.\n3) Regarding the choice of the neural network architecture, we have proposed a novel classification loss function for deep neural networks and compared it to other state-of-the-art loss functions on the same architecture. Here, the main goal is to demonstrate the superiority of the proposed loss function over other loss functions. From this point of view, architecture type is irrelevant since we use the same architecture for all tested loss functions. Regarding comparison to other two methods, we can compare our proposed method to MoCo since it also proposes a similarity learning function that can be used for classification. But, comparing to BYOL is irrelevant since it introduces a novel architecture. In this study, we only propose a new classification loss function, not a different neural network architecture.\nRegarding Limitations, the reviewer contradicts with himself/herself. The reviewer advises us to show the use of the learned features for classification. This is what we have done in the paper. Regarding advantages of the proposed method, we clearly indicated them in the Contributions part of the paper. Here, we repeat them again since it seems the reviewer did not read the paper carefully. The advantages of the proposed method over existing methods can be summarized as follows:\ni) The proposed loss function does not have any hyper-parameter that must be fixed for classical\nclassification problems, therefore it is extremely easy for the users. 
For open set recognition, the user has to set two parameters if the background class samples are used for learning.\nii) The proposed method returns compact and interpretable acceptance regions for each class,\nthus it is very suitable for open set recognition problems.\niii) The distances between the samples and their corresponding centers are minimized independently of each other, thus the proposed method also works well for unbalanced datasets.\n",
" Sorry for misunderstanding the paper. Revised review is here.\n\nThis paper proposed novel loss function for training deep neural network classifier, which can be generalizably applied to many domains such as evaluation for representation learning such as MoCo, BYOL or Masked Auto-Encoder. The loss function does not need any hyper-parameter tuning since the class centers are given as vertices of simplex and each data samples are trained to map to its corresponding class center. This way, traditional way of maximizing margin between samples from different classes and minimizing margin between samples from same class can be achieved in simple and effective way. While the limitation is the dimension of output feature of deep neural net should be larger or equal to C-1, authors mitigated the issue using DAM which can be further improved. In the experiment section, the authors gave intuition on how DSC clusters the data samples compared to other method, and how the learned representation can be semantically meaningful on unknown classes. The method achieved SOTA on most classification dataset which seems promising. strengths: \n1. The paper is well-written and easy to follow.\n2. The proposed method DSC generalizes to domains where the classification is required as a downstream task.\n2. Proposed method achieves SOTA on various dataset\n\nweaknesses:\n1. More experiments on validating the proposed method seems required. For example, by getting learned representations from self-supervised pre-trained model (e.g., MoCo, BYOL, etc), the evaluation on downstream classification task with various loss functions will provide more strength on the proposed method. \n2. As authors pointed out in the contribution, experimental evidence on how this method is robust to unbalanced dataset needs to be addressed.\n3. The solution of limitation seems not practical, since simple multi-layer perceptron is used between d-dimensional and C-1 dimensional vector which requires a lot of parameters of d and C-1 is large.\n4. It would be better to show distance matrix of other baselines. Although baselines' distance between class centers from known classes exhibit difference, semantic property would be preserved. Reviewer is not sure if the learned representation with DSC is actually semantically meaningful since this is not preserved for known classes. Giving more explanations on this would be nice. 1. Does the length of the center vector u affect the performance of the DSC (or need tuning)? How does this value affect the performance?\n2. In equation (4), does the dimension of f_i, and s_{y_i} match? Since it seems that f_i is d-dimensional vector while s_{y_i} is C-dimensional vector. Please clarify this part. None.",
" The paper studies an interesting problem of fixing a set of equi-distance classifiers and then learning features to the corresponding class center (i.e., a vertice in a simplex). The specific form of the loss function is simple and aims to minimize the Euclidean distance between learned feature and the class center (on a hypersphere). This will automatically guarantee maximum separation of class centers and also lead to discriminative feature representation. The paper uses this method in both closed-set and open-set recognition, and shows some improvements. I general, I think this is an interesting idea by taking advantages of the simplex and using it as a set of classifiers to guide the feature learning.\n\nStrengths:\n\n- The core idea is interesting. Connecting the simplex and maximum separated classifiers is interesting. Using it as an inductive bias to guide the feature learning is intuitive and seems to make lot of senses.\n\n- The paper also considers the weakness of this method due to the limitation in dimension. The propose dimension augmentaion module aims to address this limitation (however, it inevitably introduces additional trainable parameters and it might not be a fair comparison to the other baselines any more).\n\n- The paper considers both closed-set and open-set recognition and show some improvements over existing baselines. \n\nWeaknesses:\n\n- As the authors have mentioned, the biggest weakness of this method is its restriction to feature dimension. Although the dimension augmentation module can partially address this, it does not solve the problem. For example, if you have a million-level classification (e.g., many face recognition datasets have this size), you have to map the feature to a similar scale of dimension, which is computationally intractable and introduces many additional overhead. This limitation largely restricts the application of the proposed method.\n\n- Some important references are missing from the paper. What the simplex is actually doing is to encourage hyperspherical uniformity. This have a large body of works on this, e.g. [1], [2], [3]. I think they are closely related, since the simplex classifier is essentially a special case of hyperspherical uniformity (I understand that the construction procedure could vary).\n\n- The empirical gain seems to be somewhat marginal, especially on closed-set recognition.\n\n[1] Learning towards Minimum Hyperspherical Energy, NeurIPS 2018\n\n[2] Learning with Hyperspherical Uniformity, AISTATS 2021\n\n[3] Regularizing Neural Networks via Minimizing Hyperspherical Energy, CVPR 2020 - I am wondering whether the authors only uses the proposed Euclidean loss in closed-set recognition (essentially center loss on a simplex)? It is a bit difficult to believe that it can actually outperform Softmax-based margin losses (say ArcFace). Using only Euclidean losses is actually ineffective in classification problems in my experience (it may work for clustering problems like face verification).\n\n- In the CIFAR-10 experiment, what is feature dimension in the proposed method? How the simplex is constructed if you are using the same feature dimension as the others? Are the authors using 9-dimension features? n/a",
" This paper introduces a Deep Simplex Classifier (DSC) that maximizes the inter-class margins in both Euclidean and angular spaces for the open set recognition problem. Specifically, this method regards the vertices of a regular C-simplex in the (C-1)-dimensional feature space as the class centers of known C classes in the dataset and then encourages the class samples to be as close to their corresponding centers as possible. In cases that the feature dimension is smaller than C-1, a Dimension Augmentation Module (DAM) is further proposed to expand the feature dimension to C-1 to make it possible to construct the C-simplex. The open set and closed set recognition experiments are conducted on Mnist, Cifar10, SVHN, Cifar100, and Tiny ImageNet datasets while the face recognition experiments are conducted on MS1Mv2 dataset to show the effectiveness of the proposed method. Strengths:\n1. The practice of regarding the vertices of a regular C-simplex in (C-1)-dimensional space as the C classes’ centers in the training dataset is new in open set recognition task, which has a clear geometric interpretation. \n2. The proposed method is clearly explained and easy to follow.\n3. The authors conduct the illustration experiment to ease the understanding of the effect of DSC and evaluate the proposed method extensively on open set recognition, closed set recognition, and large-scale face recognition tasks.\n\nWeaknesses:\n1. The idea of maximizing the inter-class margins by explicitly manipulating the class centers is not new. For fixed centers, in Section 3.2 of [1], Do et al. propose to regard the basis vectors of the C-dimensional space as the C classes’ centers which are also the vertices of a C-simplex. The only difference is that the feature dimension in [1] is one larger than that in this paper. For learnable centers, Hayat et al. [2] also encourage the pairwise distances between any two class centers to be similar in a learnable way to bring uniformly separated centers in Euclidean space. Besides, Liu et al. [3] propose a minimum hyperspherical energy (MHE) regularizer to uniformly distribute the class centers on a hypersphere for maximizing the inter-class separability. These two methods share similar goals with Deep Simplex Classifier (DSC) but do not require that the feature dimension is larger than C-2. As the relations between each pair of classes are different, is it necessary to make the pairwise distances between any two centers identical as in DSC or what is its advantage? I think more discussions and empirical comparisons are needed considering these strongly related existing works. \n2. The motivation of this paper is not clearly expressed yet. The authors claim that the Euclidean and Cosine distances would complement each other in abstract and finally choose to maximize the margin in both the Euclidean and angular spaces in introduction. However, the authors mostly emphasize the complexity of hyperparameters of existing methods in introduction while do not explain how would two kinds of distances complement each other and what is the key advantage of considering both spaces in open set recognition? Besides, the authors simply list and explain a series of existing works one by one in introduction with few summaries, which is kind of redundant, lacks logical coherence, and could be further improved. \n3. The proposed DSC is strictly restricted by the requirement that the feature dimension is not less than C-1. 
The Dimension Augmentation Module (DAM) would only alleviate this issue when C is small while with a large C, DAM will introduce a large number of parameters. In the face recognition experiment, with 18.6K identities, the number of parameters of the last linear layer in DAM is about 18.6K*18.6K \\approx 346M, which is seven times more than the number of parameters of the ResNet-101 backbone (45M) and hinders the practical usage. I think a better solution is needed and essential for the general usage of DSC.\n4. The authors claim that DSC does not have any hyperparameters but actually there is a hyperparameter u denoting the radius of the hypersphere. The authors simply set it to 64 following the practice in face recognition, which lacks theoretical and empirical guarantee as this paper does not focus on the task of face recognition and uses datasets beyond face recognition datasets. For this reason, an ablation study is needed to investigate the effect of the value of u. Moreover, the ablation study for the hyperparameters \\lambda and m in Equation 5 in open set recognition is also needed. Besides, I also have some concerns about the experiments. First, for the open set recognition task, the proposed DSC uses a large amount of data from 80 Million Tiny Images dataset as the background class in Section 3.2.1, but the existing methods do not adopt this setting, for which I think the comparison in Table 1 is unfair. In line 269, a deeper network is employed for the Tiny ImageNet dataset, is it the same with the settings in existing works? Second, for the closed set recognition task in Table 2, do authors tune the hyperparameters in SphereFace, CosFace, and ArcFace for current classification datasets or directly follow the settings in original papers? Besides, the methods mentioned in the first comment should also be compared in the experiments since they are quite related with the proposed method.\n\n[1] Do, Thanh-Toan, et al. \"A theoretically sound upper bound on the triplet loss for improving the efficiency of deep distance metric learning.\" In CVPR. 2019.\n[2] Hayat, Munawar, et al. \"Gaussian affinity for max-margin class imbalanced learning.\" In ICCV. 2019.\n[3] Liu, Weiyang, et al. \"Learning towards minimum hyperspherical energy.\" In NeurIPS. 2018.\n 1. The novelty of the proposed method needs further clarification. More discussions and empirical comparisons between the proposed method and [1], [2], [3] are needed since they are quite related in methodology or goal. As the relations between each pair of classes are different, is it necessary to make the pairwise distances between any two centers identical as in DSC or what is its key advantage?\n2. The motivation should be expressed more clearly. The authors mostly emphasize the complexity of hyperparameters of existing methods in introduction while do not explain how would two kinds of distances complement each other and what is the key advantage of considering both spaces in open set recognition? Besides, the presentation of introduction could be further improved.\n3. How to better alleviate the feature dimension restriction for the proposed method without introducing too many parameters, especially when there are a large number of classes as in face recognition?\n4. The ablation studies for hyperparameters u, \\lambda and m in Equation 5 are needed. \n5. 
For the open set recognition task, the proposed DSC uses a large amount of data from 80 Million Tiny Images dataset as the background class in Section 3.2.1, but the existing methods do not adopt this setting, for which I think the comparison in Table 1 is unfair. In line 269, a deeper network is employed for the Tiny ImageNet dataset, is it the same with the settings in existing works? \n6. For the closed set recognition task in Table 2, do authors tune the hyperparameters in SphereFace, CosFace, and ArcFace for current classification datasets or directly follow the settings in original papers? \n No. The authors do not adequately address the feature dimension restriction of the proposed DSC. The proposed DAM introduces a large number of parameters when the number of classes is large."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
5,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"cE2g4BAvUQ2",
"NohUECiqEous",
"sT30yYcve5T",
"OUkyecs1Aw",
"YHIPBIj_v6",
"RJQQj8jhBNX",
"SMG52nWZOrj",
"P18NlXgHQ2s0",
"KFdIL9E9_gu",
"i7iN_0Svvc",
"rja5nFr-Ol7",
"rja5nFr-Ol7",
"nips_2022_wcBXsXIf-n9",
"LsSQQVxKxxE",
"Lm2GrdZbsZl",
"6dmycsk2Of2",
"_PIdOxESwc2",
"Lm2GrdZbsZl",
"nips_2022_wcBXsXIf-n9",
"nips_2022_wcBXsXIf-n9",
"nips_2022_wcBXsXIf-n9"
] |
nips_2022_bBgNsEKUxmJ | Universally Expressive Communication in Multi-Agent Reinforcement Learning | Allowing agents to share information through communication is crucial for solving complex tasks in multi-agent reinforcement learning. In this work, we consider the question of whether a given communication protocol can express an arbitrary policy. By observing that many existing protocols can be viewed as instances of graph neural networks (GNNs), we demonstrate the equivalence of joint action selection to node labelling. With standard GNN approaches provably limited in their expressive capacity, we draw from existing GNN literature and consider augmenting agent observations with: (1) unique agent IDs and (2) random noise. We provide a theoretical analysis as to how these approaches yield universally expressive communication, and also prove them capable of targeting arbitrary sets of actions for identical agents. Empirically, these augmentations are found to improve performance on tasks where expressive communication is required, whilst, in general, the optimal communication protocol is found to be task-dependent. | Accept | Reviewers found the paper's connections between MARL and GNNs interesting and well-written, and the experiments convincing. Given the unanimous support, I recommend acceptance. That said, I encourage the authors to integrate reviewer feedback, including trying to move some of the details and plots requested to the main text. | test | [
"JDPuNP7Rgc3",
"8YkyOlHW_6_",
"VRMD6kucME",
"8mO8WHlr7v8",
"pYBDaOY7Mc4",
"Fq3c9mnSWwr",
"H5XMYr-mRn",
"Pln4yoSCYB",
"YoOhHXtN1Rm",
"oslt6Bl9tXy",
"fIX7FyNJ4v",
"ru_MwuRprS",
"8ZO0JO9dJ_q"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" 1. If the purpose of including epochs was to illustrate the convergence rates of different experiment setups, I still believe that training curve figures is better than adding additional rows indicating the best performing epoch. Best performing epoch information can be rather deceptive in case evaluations in previous epochs produce slightly worse performance (i.e. not significantly worse based on the evaluated metric's confidence interval) before the best performing epoch. Nonetheless, it is still better than not including this information at all.\n\n2. In general, it seems that the experiments conducted here varies across (a) MARL algorithm, (b) environment, and (c) observation augmentation method. While the in-depth analysis of the proposed methods across (a) is definitely one of the strengths of this paper, the authors can save space in the main text by only reporting the results in 2-3 MARL algorithms, reporting the results for the remaining algorithms in the appendix, and referring to these additional results in the main text. I do not have a specific preference over the reported algorithms as long as it is not MAGIC (i.e. because the main text mentioned that the results with this algorithm can be unstable). I leave the remaining selection over the algorithms to the authors.\n\nIn terms of (b), you can limit the number of training curve images to 6-9 images (assuming results from 2-3 algorithms are reported) if only results from three environments are reported. These three environments will ideally be one of the benchmark environments (choose one), drone scatter, and box pushing. \n\nOverall, displaying the selected figures will take approximately 1 page. Also note that including some of the training curve figures will allow you to remove the tables from the main paper (i.e. information provided in the table can be inferred from the figures). ",
" We appreciate your feedback on this. There are two questions which we would like your further feedback on:\n1. What are your thoughts on the extra information we provide in the tables of the revision we uploaded, and do these address your concerns about having further results in the main section of the paper?\n2. If you were to select some of the training curves from the appendix to show in the main section of the paper, which ones would they be? We cannot include all of them, since they collectively take up 9 pages",
" I thank the authors for their detailed response to my questions and the concerns I raised. It seems many of the answers are provided in the appendix. In light of this, I will raise my score to Weak Accept. The main thing holding me back from a higher score is that I find the specific augmentations considered (Unique ID and RNI) not particularly insightful.",
" Thank you for providing answers to the questions that have been raised in the reviews. I am satisfied with most of the authors' answers.\n\nI still have the same concerns as reviewer Kpp5 regarding the lack of empirical results provided in the main text. While the authors argued for the importance of current theoretical results regarding the expressivity of GNNs for universally expressive communication, I believe that theoretical/empirical results on learning such GNNs via RL-based optimisation is a more impactful contribution to the research community (i.e. it not only demonstrates that a GNN exists to approximate any communication protocol, but also shows that such GNNs can be discovered via RL). \n\nNevertheless, as suggested in my original review score, I still maintain a positive view about this work.\n\n",
" Based on the feedback received from all reviewers so far, we aim to make the following changes:\n\n**Hypothesis for Superior Performance of RNI for Expressivity:**\n\nOur hypothesis for the superior performance of unique IDs when it comes to symmetry breaking is stated in lines 296 and 297 of the paper. With respect to why RNI performs better when it comes to expressivity, we postulate that it’s much easier for agents to overfit on the particular unique IDs given to them, since they are deterministically assigned. On the other hand, using RNI encourages the agents to learn policies which respect the permutation invariance between agents, since agents will receive different random observation augmentations at each time step.\n\nWe will add the latter hypothesis into our paper, just after line 287.\n\n**Remove ‘a’ from Graph Definition:**\n\nTo clarify what ‘a’ is in our definition of the graph on line 84: this ‘a’ refers to the attribute function of the graph, which is defined in the appendix when we introduce attributed graphs. However, we recognize that this can be confusing without the information provided in the appendix, so we will remove it from the definition on line 84.\n\n**Segway into Theorems:**\n\nWe will add a further sentence after line 142 explaining that, in the following sections, we will provide theorems which prove that the 3 properties are satisfied by the augmentations.\n\n**Parameter Sharing Clarification:**\n\nWe will aim to clarify better in the paper what we mean by parameter sharing.\n\n**Extra Information in Tables (included in revision):**\n\nWe have added an extra line to each table in the paper, specifying, for each method on each environment, what the average number of epochs is until the best performing model is found. We have included this change specifically in the uploaded revision, so that we may hear feedback on how it is presented and whether it sufficiently answers the questions raised.\n\n**Background on non-GNN-based Communication Models:**\n\nIt is true that some MARL communication models do not fall within the GDN paradigm, e.g. RIAL, DIAL, and ETCNet use a fixed message-passing structure, ATOC and BiCNet use an LSTM for combining messages, and SchedNet concatenates messages. We will add information to our background about some of these other methods and why they do not fall within the GDN paradigm, just after line 29.\n",
" We thank the reviewer for a thorough and insightful review, and for the effort that went into it: we really appreciate you engaging with our work. Please see our responses to your queries below.\n\n**Strengths and Weaknesses 3:**\n\nWe greatly appreciate your feedback on this. We will make efforts to make the theoretical aspects of the paper more understandable from the perspective of MARL practitioners, given that the paper relies heavily on also having a background in GNNs.\n\n**Strengths and Weaknesses 4:**\n\nWe agree that our work only applies to MARL communication that uses GNN-style message passing communication. However, there is a significant trend within MARL communication research towards such models (https://arxiv.org/pdf/2203.08975.pdf). Furthermore, as we note on lines 28 and 29, not all of these models are stated explicitly in terms of GNNs, but they can be captured within the framework of GDNs (for example: CommNet).\n\nIt is true that some MARL communication models do not fall within the GDN paradigm, e.g. RIAL, DIAL, and ETCNet use a fixed message-passing structure, ATOC and BiCNet use an LSTM for combining messages, and SchedNet concatenates messages. We will add information to our background about some of these other methods and why they do not fall within the GDN paradigm, just after line 29.\n\nWith respect to research about emergent languages: we find such work to have fundamentally different aims to the models we are augmenting, even though it is very interesting. The sub-field aims to study how language emerges: the models are typically applied to simplistic examples to study whether shared languages can be developed (https://arxiv.org/pdf/2006.02419.pdf). Many of the environments do not even have multiple time steps or a dynamic environment. On the other hand, the models we are augmenting are frequently applied to state of the art MARL environments, with the aim of providing communication that enables agents to solve the tasks.\n\n**Questions:**\n\nThank you for this feedback: we will aim to clarify better in the paper what we mean by parameter sharing.\n\nMost state of the art MARL algorithms do use, or at least allow for parameter sharing between agents. To provide a correction to the reviewer: parameter sharing is in fact used in MAPPO / IPPO (https://arxiv.org/pdf/2103.01955.pdf section 3), QMIX (https://arxiv.org/pdf/1803.11485.pdf appendix B.1 and appendix C.2), and MAVEN (https://arxiv.org/pdf/1910.07483.pdf appendix C.1).\n\n**Limitations:**\n\nWe required parameter sharing to be able to use our theoretical analysis, and our empirical results were a demonstration of that theory in practice: as such, we used shared weights for all evaluations. However, using shared weights is already very common and well established in MARL.\n\nWe appreciate the reviewer’s suggestion though, and do acknowledge that considering the case of non-shared weights for agent networks and GNNs which allow for communication between heterogeneous agents is very interesting; in future work, we are aiming to consider heterogeneous GNNs and agent networks that do not have shared parameters.\n",
" We thank the reviewer for a thorough and insightful review, and for the effort that went into it: we really appreciate you engaging with our work. Please see our responses to your queries below.\n\n**Weakness 1:**\n\nWe appreciate that the main body of the paper is very light on background information due to the space constraints. We provide a full background in the appendix, which is very thorough and introduces topics for both MARL and GNN practitioners. However, given that the main audience of this paper is MARL researchers, we will attempt to include more background information on GNNs in the main body of the paper.\n\nTo clarify “what \\alpha is in our definition of the graph on l.84”: this ‘a’ refers to the attribute function of the graph, which is defined in the appendix when we introduce attributed graphs. However, we recognize that this can be confusing without the information provided in the appendix, so we will remove it from the definition on l.84. Thank you for bringing this to our attention.\n\n**Question 1:**\n\nTheorems 2-5 prove that these properties hold and the computational efficiency of the augmentations is stated in lines 141-142. Furthermore, we explicitly state that these properties are satisfied on lines 173-174 and lines 195-196, using more intuitive language than the theorems. Our experimental results demonstrate that these augmentations provide the 3 properties in practice as well.\n\nHowever, we acknowledge that these may not be as accessible to MARL readers without a background in GNNs, so we will signpost to the upcoming theorems more clearly. Specifically, we will add a further sentence after line 142 explaining that, in the following sections, we provide theorems which prove that the properties are satisfied by the augmentations.\n\n**Question 2:**\n\nWe have no theoretical results about learning efficiency, only empirical ones. Due to space constraints and our extensive evaluations, we could not present our learning curves in the main body of the paper, so they are provided in the appendix. Our empirical results show that the augmentations mostly converge at a similar rate, albeit sometimes a bit slower. There are cases in which the augmentations even end up converging faster than the baseline, for example: TARMAC-IC3Net on Predator-Prey.\n\nThe properties of our methods in this dimension are stated in the main section of the paper on line 274 and fully shown by the learning curves in the appendix. The answer to the subsequent question may also be found by analyzing the result curves in the appendix: convergence speeds are typically comparable between unique IDs and the baseline. However, we appreciate that this information would be nice to see in the main section of the paper.\n\nAs such, we have added an extra line to each table in the paper, specifying, for each method on each environment, what the average number of epochs is until the best performing model is found. We have included this change specifically in the uploaded revision, so that we may hear feedback on how it is presented and whether it sufficiently answers the questions raised.\n\n**Question 3:**\n\nWe agree that it’s useful to know the number of training samples at which the best values were reached. However, we believe that this information is best represented in the learning curves provided in the appendix.\n\nTo provide clarification about the total number of training samples: each epoch in one of our models consists of training for 5000 episodes, then evaluating for 100 episodes. 
There are 2000 epochs in every experiment run.\n\nFinally, with respect to the claims we make on lines 269-270 and lines 286-287: these are supported by the result curves provided in the appendix.\n",
" We thank the reviewer for a thorough and insightful review, and for the effort that went into it: we really appreciate you engaging with our work. Please see our responses to your queries below.\n\n**Quality 3:**\n\nWe agree that theory on the ability of RL-based optimisation techniques to learn such GNNs would be an amazing achievement. However, existing literature on convergence guarantees for RL algorithms is already limited and very complex, and GNN papers on models with universal expressivity similarly do not provide theory on the ability of training paradigms to converge to the models they prove exist. We are grateful to the reviewer for pointing out that our empirical results demonstrate the ability of our proposed augmentations to learn the GNNs which we prove to exist theoretically.\n\nFurthermore, as you request, we would have liked to be able to provide the learning curves in the main section of the paper. However, due to how extensive our evaluations are and how much space the curves take up, we are forced to provide them in the appendix instead and refer the reader to them on line 207, to respect the space constraints of the paper.\n\n**Question 1:**\n\nWe appreciate the suggestion to analyze messages that RNI/CLIP passes to highlight how they aid the symmetry-breaking process. For example, one could measure and report the diversity in the set of messages, expecting higher diversity when the policy has better learned to perform symmetry-breaking. However, this is something that we feel is best left for future work.\n\n**Question 2:**\n\nOur experiments showed that using lower amounts of RNI resulted in faster and more reliable convergence. To further improve the slow convergence, we could try even lower amounts of RNI, since the universality results hold for any amount of RNI. However, the results of Abboud et al. [1] show that when using only a single RNI value, the performance is poor, so there is a point at which the amount of RNI can become too low. Abboud et al. [1] found good performance with RNI values ranging from 12.5% to 87.5%, so lower values of RNI could be promising to check in future evaluations. In our evaluations, we could only use two values from this range due to our limited computational budget.\n\nFurthermore, convergence speed depends greatly on the baseline being augmented. For example: IC3Net consistently converges quickly and reliably. Thus, augmenting a good baseline is also a promising way to ensure that RNI methods converge well.\n\n**Suggestion 1:**\n\nWe appreciate the reviewer’s suggestion here, but we find the theoretical analysis to be crucial to present in the main body of the paper; it establishes how the augmentations are able to solve problems that the baselines are proven to be unable to solve, and provides principled reasons for using the augmentations in practice.\n",
" We thank the reviewer for a thorough and insightful review, and for the effort that went into it: we really appreciate you engaging with our work. Please see our responses to your queries below.\n\n**Weakness 2:**\n\nNo, a single unique identifier will always be sufficient to perfectly distinguish the agents, yielding universal expressivity and symmetry breaking. Our theoretical guarantees still hold in environments with more diverse agents, provided that the agents use shared weights. Diverse agents can be represented in a shared-weight setting by allocating a portion of their observations to specify their attributes / skills, like we state on lines 102-104 (arbitrary unique IDs will be concatenated in addition to this, one for each agent). Having a unique ID for each agent will still perfectly distinguish all of them from one another, and we see no reason why this would change performance in practice.\n\nAs an aside, if we are considering the same craftworld as the reviewer, it appears not to be a multi-agent environment (https://arxiv.org/pdf/2011.00517.pdf).\n\n**Question 1:**\n\nWe first note that there is a distinction that has to be made between the order in which messages are received from multiple agents, and the order in which tokens appear within a message. The order of words matters for natural language, and analogously in our case, the order of numbers in message vectors matters during message passing. However, when we state that “an agent’s policy should often not depend on the order in which messages are received at a given time step”, we are referring to the order in which messages from different agents are processed.\n\nThere are some environments where an existing order is present, but we consider cases where it is not (i.e. parallel multi-agent environments). In our scenarios, any order imposed on the messages would be completely arbitrary. We prefer respecting the natural permutation invariance in our architectures, in alignment with much of the established work in MARL communication.\n\n**Question 2 / Weakness 1:**\n\nWe do: our hypothesis for the superior performance of unique IDs when it comes to symmetry breaking is stated in lines 296 and 297 of the paper. With respect to why RNI performs better when it comes to expressivity, we postulate that it’s much easier for agents to overfit on the particular unique IDs given to them, since they are deterministically assigned. On the other hand, using RNI encourages the agents to learn policies which respect the permutation invariance between agents, since agents will receive different random observation augmentations at each time step.\n\nWe will add the latter hypothesis into our paper, just after line 287.\n\n**Limitations:**\n\nThe theory we present suggests that improved performance should be expected in scenarios where expressivity beyond 1-WL or symmetry breaking is required, and that performance should typically not degrade in scenarios where they are not. Our empirical results demonstrate this as well.\n\nWe agree with the reviewer that it would be really interesting to see further results on other environments / scenarios, and we hope that this paper will serve as a starting point for others to incorporate these methods into their research.\n",
" This paper attempts to improve communication in multi-agent systems by drawing from graph neural network techniques—they consider agents as nodes in a graph that allows techniques such as node labeling to apply in this setting. They introduce two augmentations to GNNs i.e., adding randomisation into node initialisation and a unique identifier method to differentiate agent ids in their observations,, and show how these algorithms allow more expressive communication. They evaluate this on standard benchmarks (3 different environments) and show that this improves performance in instances of symmetry breaking, but results in policies that are less than optimal in others.\n Strengths:\n1. This paper is well written and clearly explained—nearly every question I had was answered immediately and was easy to follow.\n2. They introduce a nice analogy between MARL communication methods and graph neural networks, where agents form the nodes in the graph (labeled with agent observations) and communication occur along the edges.\n3. For the two augmentations to the GNNS that they propose, they clearly state out the procedure as well as prove why this should work.\n4. They evaluate these augmentations empirically, in comparison to a baselines on several different standard MARL environments and show that both augmentations yield improved performance over baselines, for different types of models that they augment.\n\nWeaknesses:\n\n1. I like that there are definite gains in performance on adding the two augmentations over baselines, but I’m wondering if the gains might only be prevalent in the simplistic MARL environments used here? Do the authors have insights on the complexities of the different environments and whether some augmentations work better/worse for the different types of environments?\n2. E.g., specifically for the unique node identifier augmentation, I’m curious to hear the authors thoughts on how this might impact performance if the agents are more diverse in terms of the attributes/skills/inventory items they possess (e.g., see environments like craftworld), and one unique identifier might not be enough information to be useful?\n 1. In line 23: why should the order of messages not matter? Even in some of the newer symbolic emergent communication settings, the ordering of distinct tokens matters in terms of comprehension of the message, and especially when moving to natural language, the history of messages/ordering of context makes a large difference to the meaning of the input. I understand that this paper does not deal with natural language, but it would be good to add to line 23 to explain why ordering should not matter, to clarify the distinction between permutation invariance in this setting, as opposed to other settings with richer language, where this should not hold.\n2. Do you have insights on when the RNI vs. Unique ID should/shouldn’t yield improvements for different models/environments? Based on the results tables there seems to be clear difference in improvements of these two (they seem to work / not work symmetrically) and it would be helpful to try to pick those results apart to understand why this happens, to give us insight into the underlying working of these augmentations to the GNNs.\n It would be good to have a larger discussion on when these GNN (with augmentation) methods should not be expected to yield better performance, since that’s helpful to our understanding of these newer methods. 
Overall I like this paper a lot, I think I would just like to understand it more!\n",
" This work tackles the challenge of enabling GNNs to learn richer communication protocols for MARL. To this end, the authors initially formalise previous GNN-based approaches to learn communication protocols in MARL as a node prediction problem. The authors subsequently propose RNI and CLIP, two input preprocessing methods that improve the functional expressivity of previous GNN-based MARL approaches. The authors provide theoretical analysis to prove that combining GNNs with RNI/CLIP ensures the existence of GNNs capable of learning communication protocols that require (i) symmetry breaking and (ii) functional expressiveness beyond 1-WL. Empirical evaluation of the returns from combining previous GNN-based MARL approaches and RNI/CLIP shows the proposed approach produces improved performance in environments requiring symmetry breaking and functional expressiveness beyond 1-WL. **Originality**\n\n**1. (Minor weakness) The proposed approach is incremental relative to previous works.**\n\nDespite the novelty of the proposed approaches' application to learning communication protocols in MARL, these techniques have previously been proposed for other prediction problems, as alluded to by the authors in Lines 144 and 175. Line 155 also indicates that some theoretical contributions are minor extensions of theorems proven in previous work. Nevertheless, this weakness is outweighed by the remaining theoretical and empirical studies that the authors have conducted to show that the proposed approach can learn communication protocols requiring symmetry breaking and going beyond 1-WL expressivity in MARL.\n\n**Quality**\n\n**1. (Major Strength) Result reproducibility.**\n \nThe authors have provided adequate descriptions of the model architecture and hyperparameters, environments, and experiment protocols that ensure their results are reproducible.\n\n**2. (Major Strength) The experiment design adequately demonstrates the claims made in by the authors.**\n\nThe environment and baseline selection empirically highlight the claims of this work regarding RNI/CLIP's ability to learn improved communication protocols in environments requiring (i) symmetry-breaking or (ii) function estimators beyond the 1-WL expressivity. Note that the evaluation in Predator-Prey and Easy Traffic Junction also highlights the potential weaknesses of the proposed approach when applied to environments not requiring (i) or (ii). This provides valuable insights that can inform the readers when applying the proposed technique to other environments.\n\n**3. (Minor weakness) Missing theoretical/empirical analysis to strengthen the claims of this work further.**\n\nDespite ensuring the existence of a learned GNN that can help with (i) symmetry-breaking, I believe the theoretical analysis on the ability of RL-based optimisation techniques to learn such GNNs is lacking. Nonetheless, the empirical demonstration of RNI/CLIP's performance indicates that this is not a significant issue for the evaluated environments. Furthermore, since the work mentioned convergence as a potential issue with a few proposed approaches, it is crucial to report the learning curve from the experiments in the experiment section (although this is provided in the Appendix).\n\nThat aside, it will also be interesting to analyze messages that RNI/CLIP passes in environments requiring (i) or (ii). 
Specifically, in terms of proving the usefulness of RNI/CLIP in addressing (i), inspecting the passed representation may highlight how they aid the symmetry-breaking process.\n\n**Clarity**\n\n**1. (Strength) The paper is well-written.**\n\nThe document is generally well-written. The method description and related theoretical analysis were clear and concise. Furthermore, the experiment section clearly states the intention behind the design/selection of various baseline/environments for demonstrating the central claims of the work.\n\n**Significance**\n\n**1. (Strength) The work provides significant results for people working in applying GNNs for learning communication protocols for MARL.**\n\nDespite the proposed approach mainly being incremental compared to previous work, the theoretical and empirical analysis provides findings useful for people working in communication for MARL. The way the authors highlighted the potential limitations of the work is specifically helpful for future research. **Questions**\n\n1. Are there any interesting insights that can be obtained from the messages passed by agents equipped with RNI/CLIP (particularly in terms of symmetry-breaking)?\n2. What are the possible solutions to improve RNI's slow convergence in environments not requiring symmetry breaking? \n\n**Suggestions**\n1. Despite the theoretical analysis on RNI/CLIP being useful, I believe most of the space allocated to this is better served for (i) demonstrating the convergence/learning progress of RNI/CLIP and (ii) showing RNI/CLIP learns useful messages for symmetry breaking or going beyond 1-WL expressivity. After all, these theoretical guarantees are mostly limited to only guaranteeing the existence of GNNs that can approximate the optimal communication protocol.\n\n2. Although helping strengthen the claims made in this work, the wide-range of evaluated communication learning methods can be reduced to only include fewer algorithms to make the presented results more compact. Additional comparisons of RNI/CLIPs performance for other algorithms may subsequently appear in the Appendix. The authors have adequately highlighted the limitations of their work through their experiments and the resulting analysis. In terms of societal impact, I do not think that this work requires additional information regarding its negative societal impact since its application is still rather limited to simple MARL environments. ",
" The paper proposes to represent the problem of learning communication for multi-agent systems RL using Graph neural network representations. In addition to the formalization, the paper explores the use of 2 techniques from the GNN literature: random noise, and unique node IDs, towards better expressivity in MARL. (+) The idea of exploring the GNN paradigm for learning communication protocols for MARL is intriguing, and brings some interesting new perspective on the problem. There is some interest in evaluating the effectiveness of techniques such as random noise and unique IDs in the MARL setting.\n(+) The paper has good coverage of related work.\n(+) The paper includes a good number of baselines and domains in the empirical results.\n \n(-) In terms of clarity the paper is at times hard for a reader with more RL expertise, but less GNN expertise, to fully understand. E.g. 1-WL graph colouring, which appears to be a key concept, is defined very briefly. What is \\alpha in your definition of graph (l.84)?\n(-) The specific techniques used, namely random noise and unique node IDs, are not particularly insightful. They seem to bring expressivity, but perhaps at the expense of learnability (more on this below).\n - You propose 3 desideratas for MARL communication (l.139-141): In what way do the 2 proposed techniques (random noise and unique IDs) help achieve each of these?\n- Much of the motivation seems to be focused on expressivity of the graph (with the goal of “universal expressivity”). However I would think it’s as important (or more) for that level of expressivity to be efficiently learnable (i.e. learnable in a relatively small number of samples). Can you explain your view on efficient learnability in the context of this work? And what are the properties of the proposed methods in this dimension? You point out (l.184) that many samples may be needed for unique node IDs; is this the case in your results?\n- Your results are based on “value of a metric for a run to be the best value achieved during training” (l.246). While this may be standard in the literature, it seems to obfuscate the question of learning efficiency and stability, which I view as important properties. Can you include in your results for each technique what was the #training samples at which this best value was reached? And total #training samples that you ran, in the case that this is not 5000 (as per l.243). This would help get more nuanced understanding of learning efficiency, and also support some of your claimes, such as “RNI methods typically take longer to converge than the baseline and unique IDs” (l.269-270), and “unique IDs tend to yield less stable solutions” (l.286-287).\n - The authors provide reasonable discussion of the technique limitations of their work, but I would like more detail on the empirical results, as per questions above.\n- The authors do not raise ethical or social aspects of the work. I don’t see any major ethical concerns related to this work.\n",
" This work studies the expressivity of GNN when used as a communication function within multi-agent reinforcement learning systems (MARL) and defines a formal framework called graph decision networks (GDN) and observation extension procedures to improve expressivity. Authors provide a sound contribution. First, theoretically with their analysis of communication expressiveness in exiting techniques while introducing their framework. second with an extensive empirical evaluation with multiple baselines and environments. \n\nRegarding originality, I believe the paper may be labelled as incremental since the framework and analysis are about augmenting expressivity of existing baselines.\n\nAbout clarity, due to the theoretical nature of the paper it is hard to read. In this regard I believe that brief paragraphs explaining the directions of each section would greatly improve readability. Also at first read is not clear what are the significances of the GDN and what is analysing existing work, some check on that would help.\n\nLast, I believe that the significance of the work is low, and this is my biggest concern with this work. It is concerned about communication in MARL but only with GNNs, I believe authors could do more to highlight the relevance of this study. There are a plethora of works about MARL communication and emergent languages which are not concerned about GNNs. Given the large scope of this conference I believe some comparison with those should be included. \n\n\nGiven all of this, since I believe that evaluating significance is inevitably a subjective topic, I lean towards acceptance despite having my reservations towards novelty and significance.\n Please read above\n\n* In section 2 background the MARL introduction assumes a lot of previous knowledge from the reader, I understand that there is limited space, but authors could elaborate a bit on parameter sharing. \n\n\nI believe this is specially important for stabilising a \"Universally Expressive\" framework. As the authors point out they are not the first to use parameter sharing with the end of stabilise training, but it is an assumption that is not present on most of SOTA MARL algorithms, e.g. MAPPO, IPPO, QMIX, MAVEN....\n\n Authors do set clearly the limited scope of their review and analysis, although I believe that having parameter sharing in all the empirical evaluation gives an additional limitation of the analysis that could have been avoided,"
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
4
] | [
"8YkyOlHW_6_",
"8mO8WHlr7v8",
"ru_MwuRprS",
"Pln4yoSCYB",
"nips_2022_bBgNsEKUxmJ",
"8ZO0JO9dJ_q",
"ru_MwuRprS",
"fIX7FyNJ4v",
"oslt6Bl9tXy",
"nips_2022_bBgNsEKUxmJ",
"nips_2022_bBgNsEKUxmJ",
"nips_2022_bBgNsEKUxmJ",
"nips_2022_bBgNsEKUxmJ"
] |
nips_2022_Q-HOv_zn6G | Efficient and Modular Implicit Differentiation | Automatic differentiation (autodiff) has revolutionized machine learning. It
allows to express complex computations by composing elementary ones in creative
ways and removes the burden of computing their derivatives by hand. More
recently, differentiation of optimization problem solutions has attracted
widespread attention with applications such as optimization layers, and in
bi-level problems such as hyper-parameter optimization and meta-learning.
However, so far, implicit differentiation remained difficult to use for
practitioners, as it often required case-by-case tedious mathematical
derivations and implementations. In this paper, we propose
automatic implicit differentiation, an efficient
and modular approach for implicit differentiation of optimization problems. In
our approach, the user defines directly in Python a function $F$ capturing the
optimality conditions of the problem to be differentiated. Once this is done, we
leverage autodiff of $F$ and the implicit function theorem to automatically
differentiate the optimization problem. Our approach thus combines the benefits
of implicit differentiation and autodiff. It is efficient as it can be added on
top of any state-of-the-art solver and modular as the optimality condition
specification is decoupled from the implicit differentiation mechanism. We show
that seemingly simple principles allow to recover many existing implicit
differentiation methods and create new ones easily. We demonstrate the ease of
formulating and solving bi-level optimization problems using our framework. We
also showcase an application to the sensitivity analysis of molecular dynamics. | Accept | The reviewers have discussed the paper at length and have reached a consensus after the authors have clarified the applicability and limitations of their proposed method. I recommend that the authors continue to polish their manuscript with the points they raised in their summary to the Area Chairs and congratulate them on the acceptance of their submission. | train | [
"DmpwX8N11Qo",
"pcOCXCWHCN",
"QXlvhjTeuoE",
"soN-FgXiK-Y",
"gr2cBxZPV5",
"Slsogjob9mS",
"WVFuaMLbkdV",
"gTQmnHprbDi",
"g5mXEZ20yjH",
"VUl0iP3eZSH",
"F7mteGJtY50",
"JOFhKKzifSJ"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" >While we agree that the hypothesis of the smooth implicit function theorem may be challenging to check for general nonsmooth optimization problems, we would like to clarify that they hold at least for lasso regression, under mild hypothesis over the design matrix. To support this claim, we added Appendix E with a formal statement and proof that the Jacobian of the lasso solution with respect to the regularization parameter can be computed with the implicit function theorem wherever it is differentiable (i.e., everywhere except on a finite number of “kinks”), for general design matrices. We believe this new result clarifies that our setting encompasses more than bilevel optimization with smooth prox operators, and would like to thank the reviewer for their comments that led us to this new result.\n\nI stress that these challenging conditions must be checked for all points visited by the algorithm during bilevel optimization (which could change depending on the algorithm used to optimize the outer loss, hyperparameters of that algorithm, initialization, etc). The new result does not fix this since you can still land on a kink during training. Unless there is a way to prove that this doesn't happen, saying the blueprint applies is misleading - the assumptions of the blueprint are not guaranteed to be met during training.\n\nBesides this, when its written \"nonsmooth problems like the lasso\" (or similar) in the paper I interpret this to mean fixed point equations with function F which are almost everywhere differentiable but not everywhere differentiable, or more concretely for the sake of example: prox operators that are not everywhere differentiable. In this class of functions, there are for instance prox operators that have sets of nondifferentiable points that are dense (combine the \"Remarque\" on page 176 of [Zahorski 1946] with proposition 2.3 in [Combettes et all 2019]). For these functions, it can be shown that the blueprint doesn't work even for a single point. The smooth implicit function theorem can never be applied here, despite being a prox operator, almost everywhere differentiable, etc.\n\n\n\"Sur l’ensemble des points de non-dérivabilité d’une fonction continue\" - Zygmunt Zahorski 1946\n\n\"Deep Neural Network Structures Solving Variational Inequalities\" - Patrick L. Combettes and Jean-Christophe Pesquet 2019",
" > I still contend that the additional assumption fixes the theoretical soundness at the expense of applicability since the assumption is so strict. Realistically, the blueprint can only be used for bilevel optimization with prox operators that are everywhere differentiable (since otherwise it will be challenging/impossible to verify that all points visited by the algorithm admit a neighborhood that avoids the nonsmooth points; almost everywhere differentiability does not fix this). To be very clear, this means nonsmooth problems like the lasso are not actually covered by the blueprint presented.\n\n> That being said, many papers include a mix of examples which do and do not fall into the scope of theoretical analyses, the authors are very right in this regard. The soundness has been improved by adding the assumption and the clarity has been improved by the revisions + adding the current limitations paragraph, which more clearly outlines the boundary of what's presented (except for the lasso statement, which is false). I will raise my overall score and the soundness and presentation scores. I remain the contribution score because the paper is still very limited in scope and the scope is effectively the same after the assumption (smooth prox operators).\n\nWe thank the reviewer for acknowledging the modifications we have made and for raising their score.\n\nWhile we agree that the hypothesis of the smooth implicit function theorem may be challenging to check for general nonsmooth optimization problems, we would like to clarify that they hold at least for lasso regression, under mild hypothesis over the design matrix. To support this claim, we added Appendix E with a formal statement and proof that the Jacobian of the lasso solution with respect to the regularization parameter can be computed with the implicit function theorem wherever it is differentiable (i.e., everywhere except on a finite number of “kinks”), for general design matrices. We believe this new result clarifies that our setting encompasses more than bilevel optimization with smooth prox operators, and would like to thank the reviewer for their comments that led us to this new result.",
" I still contend that the additional assumption fixes the theoretical soundness at the expense of applicability since the assumption is so strict. Realistically, the blueprint can only be used for bilevel optimization with prox operators that are everywhere differentiable (since otherwise it will be challenging/impossible to verify that all points visited by the algorithm admit a neighborhood that avoids the nonsmooth points; almost everywhere differentiability does not fix this). To be very clear, this means nonsmooth problems like the lasso are **not** actually covered by the blueprint presented.\n\nThat being said, many papers include a mix of examples which do and do not fall into the scope of theoretical analyses, the authors are very right in this regard. The soundness has been improved by adding the assumption and the clarity has been improved by the revisions + adding the current limitations paragraph, which more clearly outlines the boundary of what's presented (except for the lasso statement, which is false). I will raise my overall score and the soundness and presentation scores. I remain the contribution score because the paper is still very limited in scope and the scope is effectively the same after the assumption (smooth prox operators).",
" The author's response addressed most of my concerns. I tend to accept this paper.",
" >Yet, if I am a practitioner trying to solve a bilevel optimization problem, how can I know if my problem fits into your famework if I cannot verify this additional assumption holds at all necessary points?\n\nChecking that the implicit function theorem assumptions hold in theory can indeed be challenging, but is often not necessary in practice, as there is often a metric one cares about. For bilevel optimization, a natural way for a practitioner to check if our framework works is to check that the outer objective value is decreasing. \n\nAll the root objectives and fixed points mentioned in the paper (gradient descent, projected gradient, proximal gradient, mirror descent, KKT, …) are implemented in the library and have been tested successfully either through experiments in the paper or examples in the library. \n\nIt would be possible to detect when the matrix A is singular and to issue a warning to the user if this happens. We chose not to as, again, we have not observed any issues in practice.\n\nWe emphasize once more that it is unclear whether the theory of Bolte et al will lead to any practical algorithmic improvement compared to what we are already doing. Many successful methodologies in ML have theoretical guarantees in restricted theoretical settings but are applied more broadly in practice.\n\n> Small note: In the current limitations sections it's written that the approach applies when x* is differentiable at theta but it should be written that the approach applies when x* is differentiable in a neighborhood of theta since, under the assumptions that are now in the paper, the solution x* will always be differentiable in a neighborhood of theta by the smooth implicit function theorem.\n\nWe agree with you. We removed “x* is differentiable at theta” and now simply write “we note that the approach developed in this section theoretically only applies to settings where the implicit function theorem is valid, namely, where optimality conditions satisfy the differentiability and invertibility conditions stated in Section 2.1”.\n",
" >We simply mentioned that the prox operator is a.e. differentiable to remind the reader that differentiability of the prox operator is rather frequent, even for non-smooth optimization problems...\n\nI agree that the additional assumption that the prox is continuously differentiable on a neighborhood of the solution addresses the issue of applying the smooth implicit function theorem. The catch is in ensuring that this assumption holds, since being differentiable almost everywhere is not a sufficient condition for this. It's not obvious if one can guarantee this assumption will hold unless the prox is simply differentiable everywhere. \n\n>We agree with the reviewer that the projection is not differentiable everywhere, but would like to clarify that it is the case almost everywhere. This follows from the fact that a projection is the gradient of a smooth function (with Lipschitz gradients) and by Rademacher’s theorem. In fact, as shown in Appendix C (l.635), the Jacobian exists almost everywhere and is piecewise constant. In practice, this implies our approach is valid and provides the correct derivative of the objective function for almost all values of theta, which we find sufficient to optimize numerically in theta since in practice we never encounter a point of non-differentiability.\n\nI disagree that a piecewise constant Jacobian is sufficient. Because of the assumption you have added, to do bilevel optimization with your method it's necessary to ensure that every point visited by the algorithm admits a neighborhood on which the function F is C1. This is not guaranteed by almost everywhere differentiability nor a piecewise constant Jacobian. This speaks to the difficulty of trying to ensure the additional assumption in a bilevel optimization setting.\n\n>Overall, we believe that the reviewer misinterpreted the positioning of our paper. We do not claim that we tackle the theory of nonsmooth implicit differentiation, and do not think that we should be evaluated on this ground. We acknowledge that there is currently a gap between theory and practice. However, we still think that this paper nevertheless provides a worthy contribution to practitioners, with some simple theoretical results when they apply; and hope the reviewer will not block its publication.\n\nI understand that you do not claim to tackle the theory of nonsmooth implicit differentiation. However, in the blueprint and the applications, you are using implicit differentation on nonsmooth functions which is where the issue comes from. Regarding the contribution to practitioners, I agree that the software package is of a high quality. Yet, if I am a practitioner trying to solve a bilevel optimization problem, how can I know if my problem fits into your famework if I cannot verify this additional assumption holds at all necessary points?\n\nSmall note: In the current limitations sections it's written that the approach applies when x* is differentiable at theta but it should be written that the approach applies when x* is differentiable in a neighborhood of theta since, under the assumptions that are now in the paper, the solution x* will always be differentiable in a neighborhood of theta by the smooth implicit function theorem.",
" > The paper does not always specify if a function is continuous, differentiable, twice differentiable\n\nWe agree that it’s better to make the assumptions clear rather than implying that the assumptions of the implicit function theorem that we invoke apply everywhere. We clarified the assumptions everywhere in the manuscript (modifications are highlighted in blue color and assumptions already present before submission are highlighted in olive color). \n\n> For ex, around l98 there is no specification on the regularity of the function F \n\nWe believe the reviewer missed the sentence l.100-102, where we clarify the regularity conditions that the function F must satisfy for the smooth implicit function theorem to hold, namely, a ““continuously differentiable F with invertible Jacobian”.\n\n> Many of the functions involved in Table 1 are nonsmooth [...]\n\nFirst we would like to clarify that we did not write “because the prox if differentiable a.e., the smooth implicit function theorem can be applied a.e.”, which would indeed have been wrong. We simply mentioned that the prox operator is a.e. differentiable to remind the reader that differentiability of the prox operator is rather frequent, even for non-smooth optimization problems. But we agree that it can induce confusion and decided to disambiguate the conditions where the implicit function theorem holds, by adding that our framework applies if, in addition to being a.e. differentiable, the prox is continuously differentiable in a neighborhood of the solution and the invertibility condition holds (l.186-188).\n\n> While nonsmooth implicit function theorems exist, e.g., [Clarke 1990], [Bolte et al 2022]\n\nAs explained above, our focus is on situations where the (smooth) implicit function theorem holds. We clarified the assumptions for this in Definition 1 (l. 202) and in Theorem 1 (l.213). In addition, we added a sentence in the new “Current limitations” paragraph to mention that extending the approach by using a nonsmooth implicit function theorem is an interesting future work, citing the references [Clarke 1990] and [Bolte et al 2021].\n\n> The euclidean projection onto the simplex isn't smooth (line 257) \n\nWe agree with the reviewer that the projection is not differentiable everywhere, but would like to clarify that it is the case almost everywhere. This follows from the fact that a projection is the gradient of a smooth function (with Lipschitz gradients) and by Rademacher’s theorem. In fact, as shown in Appendix C (l.635), the Jacobian exists almost everywhere and is piecewise constant. In practice, this implies our approach is valid and provides the correct derivative of the objective function for almost all values of theta, which we find sufficient to optimize numerically in theta since in practice we never encounter a point of non-differentiability. 
\n\n> The nonsmoothness also raises questions for the numerical comparisons regarding unrolling\n\nAs mentioned in the paper, projection and proximal operators are differentiable a.e., therefore so is unrolling of the projected / proximal gradient algorithm.\n\n> In line 262, what is meant by unrolling if the algorithm is nonsmooth?\n\nWe backpropagate through the computational graph generated by the algorithm, including proximal and projection operators (which are differentiable a.e.).\n\n> The results are ultimately relegated to the smooth setting\n\nThere seems to be a fundamental disagreement: the reviewer thinks the paper limitations are blocking publication while we (as well as other reviewers and many readers) think the paper is valuable despite them: it presents a reduction to differentiating roots / fixed points, easy-to-use software in JAX, a large variety of experimental results and new Jacobian precision guarantees.\n\nWe also emphasize that implicit differentiation of the solution of non-smooth optimization problems is a very recent field of research. The recent reference of Bolte et al develops some new theory but it’s not clear if it has a practical impact. Likewise, it is only recently that backpropagation through almost-everywhere differentiable functions has been theoretically investigated (again, by Bolte et al). Yet, ReLUs have been routinely used in deep learning pipelines for years. \n\nOverall, we believe that the reviewer misinterpreted the positioning of our paper. We do not claim that we tackle the theory of nonsmooth implicit differentiation, and do not think that we should be evaluated on this ground. We acknowledge that there is currently a gap between theory and practice. However, we think that this paper nevertheless provides a worthy contribution to practitioners, with some simple theoretical results when they apply; and hope the reviewer will not block its publication. \n\nWe have clarified assumptions everywhere in the paper and have acknowledged limitations in an explicit paragraph at the end of Section 2. We hope that this will convince the reviewer to increase their score.",
" We thank the reviewer for the positive review and constructive comments. We believe that the runtime comparison request is already addressed in the paper. We have addressed your other comments below and in the revised manuscript.\n\n> The main contribution of this paper is about the software, but the theoretical contribution is overstated. The proof of the theorem is quite standard and I do not get some new insight from it.\n\nWe provide a general result that is easier to apply in our context than [Higham 2002, Theorem 7.2]. It is used to provide a theoretical explanation for the phenomenon shown in Figure 3, when comparing the precision of implicit differentiation and unrolling, which is insightful and novel to our knowledge. Simplicity of the proof and insight are in our opinion not incompatible. We feel that the description of this experimental phenomenon would not be complete without this theoretical insight.\n\n> Direct runtime comparisons with existing methods are missing. The proposed approach is based on implicit differentiation which usually requires additional computational costs. Thus, the direct runtime comparison is necessary to demonstrate the efficiency of the proposed approach.\n\nFigure 4 in the paper already shows a runtime comparison of implicit differentiation vs. unrolling on CPU for 3 algorithms (mirror descent, proximal gradient and block coordinate descent) and Figure 13 in the Appendix shows the same comparison on GPU.\n\n> Recently, implicit deep learning has attracted many attentions, which is very relevant to the topic of this paper. An implementation example of implicit deep neural networks should be included. Moreover, many Jacobian-free methods e.g., [1-3] have been proposed to reduce the computational cost. \n\nOur software contains an example of deep equilibrium network (DEQ) with Anderson acceleration. We added the suggested 3 references to the revised manuscript.",
" We thank the reviewer for the extremely positive feedback and constructive comments.\n\n> As a minor issue, I was surprised to not see any example applications in the context of deep networks. Reference [55] implements dataset distillation on deep networks, yet the authors seem to be focused on a logistic regression case (see Questions). The code provided in the supplementary material also seems to include deep learning examples, but the authors make no note of this in the appendix. I don't see the inclusion of deep learning examples in the article as a necessity at all, but was a little surprised about the lack thereof.\n\n>Returning to the dataset distillation example, reference [55] only uses a few gradient steps for the inner loop to make their algorithm scalable. Do the authors think their approach towards dataset distillation for logistic regression would scale towards deeper networks?\n\nIn this particular example, we strove for simplicity: we wanted to show that it was possible to implement a dataset distillation example in less than 100 lines of code (the current example counts 67 lines counting comments). \n\nAs shown in the other deep learning examples, the implicit differentiation mechanism can scale to objectives with a deep network, and so it would be possible to extend this example to use a deep network instead of a linear one, with a modest increase in runtime and code complexity.\n\n> l. 224: What do the authors mean by \"implicit differentiation gains a factor of t compared to automatic differentiation\"? Moreover, does automatic differentiation in this context refer to automatic differentiation through the t iterations rather than implicit automatic differentiation?\n\nOur remark about the “gains a factor of t” refers to the observation that we prove in Theorem 1 that the error in Jacobian estimate using the implicit differentiation is upper bounded (up to a multiplicative constant) by the error in the solution estimate, while [1, Proposition 3.2] shows that when the Jacobian is instead estimated by unrolling, then the error in the Jacobian estimate is upper bounded (up to a multiplicative constant) by the error in the solution estimate multiplied by t, the number of iterations performed. Hence our upper bound for the Jacobian estimation by implicit differentiation is, up to the constants, better than the known upper bounds for Jacobian estimation by unrolling by a factor of t. Regarding the second question, yes, automatic differentiation refers to automatic differentiation through the t iterations.\n\n> I did not understand the illustration in figure 6; since θ∈Rk specifies the diameter of each particle, isn’t ∂x∗(θ)∈Rk×(k×2)? Is the figure depicting the diagonal elements of that Jacobian?\n\nθ is the diameter of the blue particles and is therefore in R, not R^k (we do not assume that each individual particle has its own diameter). Therefore, x*(θ) is a function from R to R^{2k}, i.e., it outputs the 2-dimensional coordinates of the k particles at equilibrium, for a given diameter θ. The Jacobian is then a vector of size 2k, which gives the 2-dimensional coordinates of the k particles. The diameter of the orange particles is fixed to 1. \n\n> Where is digit ‘9’ in figure 5?\n\nWe just wanted a 3 x 3 figure for saving space. The full example can be generated from the code in the supplementary material /source_code/examples/implicit_diff/plot_dataset_distillation.py\n\n> One typo: l. 324: 'converge, due to the'\n\nFixed, thank you!\n",
" EDIT: After the discussion with the authors and the revisions they submitted, I modified my overall score from 3 to 6, the presentation score from 3 to 4, and the soundness score from 1 to 3.\n\nThe paper presents a blueprint for automatic implicit differentiation of solutions to optimization problems, along with its implementation in the JAX library. They claim that their blueprint is widely applicable by listing several common optimization problem templates (e.g., those solvable by mirror descent, proximal gradient descent, Newton's method, conic programming, etc) that they claim one can apply their blueprint to. The main idea of the blueprint is to find a fixed point equation representing the optimality conditions and then apply the implicit function theorem to this fixed point equation. They also prove a theorem regarding the precision of the estimated implicit Jacobian in terms of the precision of the estimated fixed point, which they give numerical support for. Finally, they report numerical experiments for four problems, primarily comparing their method to unrolling. One strength of the paper is the software implementation in JAX, which is user-friendly, designed to be broadly applicable, and seemingly efficient; later theoretical flaws limit the justified use of this software and thus make this contribution less significant.\n\nThe paper is ambiguous when defining functions, not specifying if a function is continuous, differentiable, twice differentiable, etc, at key points, like in the blueprint and the theorem about Jacobian precision. For example, around line 98 when differentiating the root is introduced as the main principle which the rest of the paper relies on, there is no specification on the regularity of the function F besides that it should be \"a user-provided mapping, capturing the optimality conditions of a problem.\" However, the argument that follows relies on the smooth implicit function theorem, which in this context requires the function F to be C1 (continuously differentiable) in a neighborhood. This continues in line 118 where the chain rule is used without specifying that F is smooth, in line 168 when the regularity of f, G, and H are not specified, in line 199 for the definition of Jacobian estimate, in line 210 for the main/only theorem, etc.\n\nThis ambiguity becomes important for applying the results. Many of the functions involved in Table 1 on line 161, for instance, are frequently nonsmooth in machine learning contexts (e.g., the prox, the projection operator, etc), and thus the results developed in the paper are not applicable to them. There is mention in line 183 that, because the prox is differentiable a.e., that the smooth implicit function theorem can be applied a.e.. This is false - stronger assumptions than differentiability at a point are needed to apply the smooth implicit function theorem, i.e., F must be C1 on an open neighborhood of the solution (along with an invertibility condition). While nonsmooth implicit function theorems exist, e.g., [Clarke 1990], [Bolte et al 2022], they are nowhere mentioned and their requirements to be applied are different than those of the smooth implicit function since the generalized gradients involved are set-valued. 
This issue comes up again in line 202 where it's assumed that the solution x* is differentiable, which is not true in general (e.g., the Lasso solution can have kinks as a function of the l1 weight).\n\nSome numerical experiments are also affected by this, for example the euclidean projection onto the simplex isn't smooth (line 257) and cannot be treated by the blueprint. While it's true that sometimes a smooth fixed point equation can be associated to a nonsmooth problem, such as in section 4.1 where the mirror descent formulation was used with the smooth KL projection on the simplex, there is no discussion of this phenomenon in the paper, nor a way to systematically construct such smooth fixed point equations in general. The nonsmoothness also raises questions for the numerical comparisons regarding unrolling, which is mentioned in the questions section.\n\nThere is also a concern about the originality of the ideas in the paper. The main idea of the blueprint, to associate an equation modeling optimality to the problem instead of using the set-valued optimality conditions, is trivial in the smooth case since the optimality conditions will no longer be set-valued. Since little beyond line 183 is said about implicitly differentiating nonsmooth functions, the contribution of the blueprint here reduces to the fact that we can associate a fixed point equation to optimality conditions, which is well-known, e.g., for maximal monotone inclusions in [Bauschke, Combettes 2011].\n\nBecause the results are ultimately relegated to the smooth setting, I find the paper to be lacking in originality and significance. The main result that remains is the Jacobian precision theorem for smooth functions which, while compelling, is not sufficient on its own nor in conjunction with the numerical experiments for smooth problems to deserve publication here. I think an integration of the nonsmooth case is necessary to make the contribution significant enough to be worthy of publication, especially due to the prevalence of nonsmooth optimization problems in machine learning.\n\n\"Optimization and nonsmooth analysis\" - FH Clarke 1990\n\n\"Nonsmooth Implicit Differentiation for Machine Learning and Optimization\" - Jérôme Bolte, Tam Le, Edouard Pauwels, Antonio Silveti-Falls 2021\n\n\"Convex analysis and monotone operator theory in Hilbert spaces\" - HH Bauschke, PL Combettes, 2011 In line 262, what is meant by unrolling if the algorithm is nonsmooth? A major limitation of the work is that it does not apply to the nonsmooth setting in which the fixed point equation associated to the optimality conditions is not continuously differentiable. Many of the optimization problems coming from machine learning are nonsmooth and so this severely limits the impact of the proposed blueprint.",
" Implicit differentiation can be used to through the solution to an optimization problem without having to backpropagate through the method by which this solution was determined. While this approach has been used in many different contexts, implicit differentiation must often be tailored towards the specific problem in question. This article introduces a library that implements automatic implicit differentiation, addressing this limitation. It goes on to provide precision estimates if an optimization problem has not been exactly solved, and concludes with several examples showcasing the library. This submission fills an important gap by providing a library implementing automatic implicit differentiation. In my view, this promises to make the use of implicitly defined layers (regardless of their specific form) more accessible for practitioners, in particular making it easier to try out a particular idea. The library implementation is well documented and seems to integrate well with other Jax packages. The article itself is well written, providing a nice balance of motivation, code, theory, and examples. Theorem 1 is closely related to existing results about inverse stability (as the authors note), but it is helpful to have it stated explicitly in the context of implicit differentiation. The proof seems to be correct.\n\nAs a minor issue, I was surprised to not see any example applications in the context of deep networks. Reference [55] implements dataset distillation on deep networks, yet the authors seem to be focused on a logistic regression case (see Questions). The code provided in the supplementary material also seems to include deep learning examples, but the authors make no note of this in the appendix. I don't see the inclusion of deep learning examples in the article as a necessity at all, but was a little surprised about the lack thereof. - Returning to the dataset distillation example, reference [55] only uses a few gradient steps for the inner loop to make their algorithm scalable. Do the authors think their approach towards dataset distillation for logistic regression would scale towards deeper networks?\n- l. 224: What do the authors mean by \"implicit differentiation gains a factor of t compared to automatic differentiation\"? Moreover, does automatic differentiation in this context refer to automatic differentiation through the t iterations rather than implicit automatic differentiation?\n- l. 227: What is missing to apply these results to hypergradients? Would that simply be a matter of applying the chain rule to compute the gradients with respect to the objective and obtain corresponding precision guarantees?\n- I did not understand the illustration in figure 6; since $\\theta\\in\\mathbb{R}^k$ specifies the diameter of each particle, isn’t $\\partial x^{\\ast}(\\theta)\\in\\mathbb{R}^{k\\times(k\\times 2)}$? Is the figure depicting the diagonal elements of that Jacobian?\n- Where is digit ‘9’ in figure 5?\n\nOne typo: l. 324: 'converge, due to the' I believe the authors have adequately addressed the limitations of their work. ",
" This paper introduces a Jax package for automatic implicit differentiation. Specifically, the authors propose an efficient and modular approach for implicit differentiation of optimization problems. Their approach combines the benefits of autodiff and implicit differentiation. The authors also provide precision guarantees of the proposed approach. Moreover, the author demonstrate the effectiveness of the proposed approach in bi-level optimization and the sensitivity analysis. My detailed comments are given as below.\n\nStrength:\n\n1 The motivation of this paper is clear. I believe that this paper makes an important contribution to very relevant topics, e.g., the bi-level optimization, implicit deep neural networks.\n\n2 This paper is well written and the proposed approach is easy to follow. The authors provide some illustrative figures to demonstrate their approach. I feel this is good.\n\n3 The authors implemented four illustrative examples to demonstrate the effectiveness of their approach.\n\nWeakness:\n\n1 The main contribution of this paper is about the software, but the theoretical contribution is overstated. The proof of the theorem is quite standard and I do not get some new insight from it.\n\n2 Direct runtime comparisons with existing methods are missing. The proposed approach is based on implicit differentiation which usually requires additional computational costs. Thus, the direct runtime comparison is necessary to demonstrate the efficiency of the proposed approach.\n\n3 Recently, implicit deep learning has attracted many attentions, which is very relevant to the topic of this paper. An implementation example of implicit deep neural networks should be included. Moreover, many Jacobian-free methods e.g., [1-3] have been proposed to reduce the computational cost. The comparisons (runtime and accuracy) with these methods are preferred.\n\n[1] Fung, Samy Wu, et al. \"Fixed point networks: Implicit depth models with Jacobian-free backprop.\" (2021).\n\n[2] Geng, Zhengyang, et al. \"On training implicit models.\" Advances in Neural Information Processing Systems 34 (2021): 24247-24260.\n\n[3] Ramzi, Zaccharie, et al. \"SHINE: SHaring the INverse Estimate from the forward pass for bi-level optimization and implicit models.\" arXiv preprint arXiv:2106.00553 (2021). Please see weakness 2 and 3. If the authors can address these two concerns, I will improve my scores. Yes."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
9,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
4
] | [
"pcOCXCWHCN",
"QXlvhjTeuoE",
"gr2cBxZPV5",
"gTQmnHprbDi",
"Slsogjob9mS",
"WVFuaMLbkdV",
"VUl0iP3eZSH",
"JOFhKKzifSJ",
"F7mteGJtY50",
"nips_2022_Q-HOv_zn6G",
"nips_2022_Q-HOv_zn6G",
"nips_2022_Q-HOv_zn6G"
] |
nips_2022_Z4kZxAjg8Y | Autoregressive Search Engines: Generating Substrings as Document Identifiers | Knowledge-intensive language tasks require NLP systems to both provide the correct answer and retrieve supporting evidence for it in a given corpus. Autoregressive language models are emerging as the de-facto standard for generating answers, with newer and more powerful systems emerging at an astonishing pace. In this paper we argue that all this (and future) progress can be directly applied to the retrieval problem with minimal intervention to the models' architecture. Previous work has explored ways to partition the search space into hierarchical structures and retrieve documents by autoregressively generating their unique identifier. In this work we propose an alternative that doesn't force any structure in the search space: using all ngrams in a passage as its possible identifiers. This setup allows us to use an autoregressive model to generate and score distinctive ngrams, that are then mapped to full passages through an efficient data structure. Empirically, we show this not only outperforms prior autoregressive approaches but also leads to an average improvement of at least 10 points over more established retrieval solutions for passage-level retrieval on the KILT benchmark, establishing new state-of-the-art downstream performance on some datasets, while using a considerably lighter memory footprint than competing systems. Code available in the supplementary materials. Pre-trained models will be made available. | Accept | This paper proposes a method (SEAL) for document retrieval where a language model (LM) conditioned on a question generates n-grams as document identifiers. This is done by training BART on question and n-gram pairs, where the n-grams are sampled from the gold passages, and at test time constraining generation to output valid n-grams that correspond to document identifiers. Experiments on Natural Questions (NQ) Open dataset and the KILT tasks obtain strong results.
Overall, all reviewers agree that this is a strong paper that proposes a simple but effective approach. I agree with their assessments and recommend acceptance. However, a weakness that has been pointed out is that the paper does not perform evaluation on other common QA benchmarks (MSMARCO, TriviaQA, SQuAD, WebQuestions, and Entity Questions) where the performance of baseline models are well established. I strongly encourage the authors to train SEAL on at least some of those datasets and compare with stronger baselines. | test | [
"oT-LIxjPQ5U",
"Wuh8VjJIiv",
"FGF1NrzazG-X",
"OaZ2xPVTPRm",
"hrj27-HBV2J",
"2cc3kcs5xUx",
"iXxGqu3a6x",
"X9X0xDxuSH",
"p6Zj8JJ2YRp",
"qhApTs72KJP"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the score increase and for all the suggestions on how to strengthen the paper! We will revise the paper accordingly.",
" Thanks for providing the response!\nBased on the response to my review and the author's responses to other reviews, I am happy to increase my score to 6.\n\nSome followup comments from my side that would be useful to incorporate in the next version of the paper.\n\n- It will still be good to provide results on the TriviaQA-Open, Squad-Open etc. datasets as was done in the DPR paper and compare them with SEAL. It will help the reader to appreciate the strengths and analyze the weaknesses of your model and will lead to progress as a whole.\n\n- It's good to show SOTA or near SOTA results on KILT. My only worry was that a large stream of work that came after DPR did not evaluate strongly on KILT and as a results the baselines models on KILT are not strong enough.\n\n- Low result @5 and @20: It will be useful to suggest some future work directions in the discussion section in the paper that can aim to improve this.\n",
" Thank you for considering our approach technically sound / interesting and our paper well-organized / easy to follow. We really appreciate you believing this work can lead to follow-ups in this direction.\n\n**Ngram length**. We report here the performance on NQ with different ngram lengths. Performances go up as we increase the ngram length. We will add this table to the revised paper with a relevant discussion of the figures.\n| Model | Length | A@20 | A@100 |\n| --- | --- | --- | --- |\n| SEAL (LM+FM, intersective) | 3 | 64.7 | 74.8 |\n| SEAL (LM+FM, intersective) | 5 | 73.6 | 83.7 |\n| SEAL (LM+FM, intersective) | 10 | 76.2 | 86.3 |\n\n**How the training ngrams are chosen**. The number of training ngrams has been tuned in preliminary experiments. 10 was the optimal value on the NQ dev set, but we have to say that the effect of tweaking this parameter is not critically strong: even with the temperature parameter, biasing towards widows with high overlap with the query makes for a very peaky distribution (see our response to R#3) with usually little diversity among samples. Controlling the ngram distribution (instead of uniform random sampling) is instead very important for performances, because it pushes the model to focus towards relevant and/or predictable parts of the documents. Our method to select training ngrams is very simple, and leaves a lot of room for potential improvements. We will add an ablation study on this in the revised paper.\n\n**Mismatch reported numbers.** The authors of DSI [1] have not been able to share with us checkpoints, preprocessed datasets and document collection. We have then tried to reproduce their setup according to their specifications, but we had no way to check how well our setting matches with theirs. Therefore, the direct comparison with their reported numbers is impossible. Their BM25 numbers are computed with the `gensim` library, which seem to produce significantly lower results compared to the more commonly used Lucene bindings in the `pyserini` library, so we chose to report results obtained with both `gensim` and `pyserini`.\n\n**Notation**. Thank you for pointing out our notation was not clear enough. To answer your questions, in line 142 $K$ is just the set of ngrams generated for some query, regardless of its size; $P(n|q)$ is the probability assigned to the ngram (given the query) by the encoder-decoder itself. We will improve the clarity of that section following your suggestions.\n\n**Intersective results**. The KILT results in Table 4 use R-precision, which is almost identical to the accuracy@k with k=1. Lower values of k show more sensitivity to noise because the model assigns the same score to multiple matches: this happens not only in the KILT Table the reviewer has mentioned, but also in NQ (see Table 3). Intersective scoring partially ameliorates the issue by merging evidence for multiple matches.\n\n**References**\n* [1] Tay et al., Transformer Memory as a Differentiable Search Index, arXiv:2202.06991, 2022.",
" Thank you for considering our paper well-written, organized, with sound experiments and a good related work section. We really appreciated you acknowledging the novelty of the proposed approach and our effort to release all code and models to reproduce our results. \n\n**Experimental setting**. The focus of our experiments is showing that autoregressive retrieval is a promising paradigm for robust retrieval in a variety of settings, rather than chasing the latest state of the art. In addition to Natural Questions, which is standard for retrieval systems to test on, we have chosen to use KILT (that includes Natural Questions and TriviaQA) as it is a well-established, diverse benchmark that features tasks that go beyond open-domain question answering: for example, it includes things like relation extraction and fact verification. \n\n**Stronger baselines**. We appreciate the reviewer’s concern with our included comparison systems. We will include stronger systems in our NQ table so as to give a better sense of what the current state of the art is, and show what the longer term goal for autoregressive retrieval should be. However, we still feel that our relevant experimental comparison system is DPR. Our goal was to establish SEAL as a simple, autoregressive baseline to be compared on equal footing against simple, dense baselines. The systems that the reviewer has mentioned offer solid improvements over the simple baseline, but do not fundamentally change the standard recipe of dense training, and it could definitely be possible to adapt and use them for autoregressive retrieval as well: ANCE improves over the dense baseline by refining negative mining; ICT-DPR and MSS-DPR boost performances by using a pretraining objective that is more aligned to retrieval.\n\n**Low result @5 and @20**. Results at low recall probably could be explained by the fact that the matching scheme does not take into account the full context, resulting in a slightly noisier ranking. In other words, non-relevant documents that happen to contain a generated ngram might be assigned the same score, with nothing to break the tie. For example, the string “the largest cat” matches both “The liger is often believed to be *the largest cat* in the world.” and “*the largest cat*amarans and monohulls also carry cars…”. Note that this is not specific to SEAL: approaches based on lexical exact matches such as BM25 or GAR are affected as well. At higher recall, e.g., @100, which is more tolerant to this kind of noise, results reach or surpass the dense retrieval baseline.\n\n**Model size**. Thanks to the reviewer for pointing this out! We already provide evidence that the number of parameters is not the main reason for the good results on KILT in Table 4, where we outperform both DPR base and large. For completeness, we also report here a comparison with DPR large on NQ: the results @100 are slightly better, but in the same ballpark as SEAL. \n| Model | Params | A@5 | A@20 | A@100 |\n| --- | --- | --- | --- | --- |\n| DPR (*bert-base*) | ~220M | 68.3 | 80.1 | 86.1 |\n| DPR (*bert-large*) [1] | ~350M | 69.1 | 80.2 | 86.7 |\n| SEAL | ~400M | 61.3 | 76.2 | 86.3 |\nWe will be happy to add these numbers to the paper, as well as a discussion on the issue that the reviewer raised.\n\n**Memory usage of embeddings**. We have reported 64 GB as the size of the embedding table size for 32-bit DPR because that is what has been used to compute results that we have reported in the table. 
Even scaling this number down by 0.5 by using 16-bit floating points, the substance of our claim does not change, because the size of the FM-index, at 8.8 GB, is still significantly smaller.\n\n**References**\n* [1] Oguz et al., Domain-matched Pre-training Tasks for Dense Retrieval, Findings of NAACL, 2022.\n* [2] De Cao et al., Highly Parallel Autoregressive Entity Linking with Discriminative Correction, Proc. of EMNLP, 2021.\n",
" Thank you for defining our paper readable and well-written, for considering the drawbacks of autoregressive models well explained, for defining our approach appealing and for acknowledging that the empirical results demonstrate that our solution can improve the quality of retrieval results.\n\n**Re-ranking**: the paper describes a first-stage retrieval system to get candidates from a large collection of documents. Current LM-based reranking solutions can be applied only to a small set of documents given the computational cost of cross encoding query and document text, so they are not well suited as first-stage retrieval solutions. A re-ranking step is orthogonal to our solution and can be applied to the top-k results from SEAL to boost retrieval results.\n\n**Latency**. The main bottle for SEAL is, as you have suggested, decoding speed rather than the search itself, since the FM-index’s querying time complexity does not depend on corpus size. Decoding speed, however, can be greatly improved. Increasing decoding speed is a hot area of research and there are several works that propose solutions to speedup the process (e.g., [1], https://huggingface.co/blog/tf-xla-generate). Moreover, generation is becoming the de facto standard approach to NLP, not just as the method for materializing final outputs, but also for modeling the computational process needed before computing the answer (think chain-of-thought, workspaces etc.). As such, we expect generation latency will improve significantly and increasingly over time as a result of this growing interest. Any improvement will be directly applicable to SEAL. We will add a discussion on this issue in the revised paper!\n\n**Stronger baselines**. We make SoTA claims w.r.t. the public KILT leaderboard where, at the time of writing the paper, SEAL was outperforming all competing systems on some datasets. However, our aim is not chasing SoTA but presenting a novel approach to retrieval that achieves comparable results to other families of solutions (e.g., dense), which may, with additional research and development, ultimately outperform other approaches.\n\n**Constrained decoding**. We have an ablation in Table 6: constrained decoding improves performance compared to the unconstrained baseline. However, the unconstrained baseline still produces good results, as non-occurring ngrams get filtered out anyways (as they produce no matches).\n\n**Vocabulary mismatch/synonyms**. Although documents are indexed by exact n-grams, the NLG capabilities of language models can avoid vocabulary mismatch by generating multiple n-grams containing synonyms or related concepts from multiple documents (Table 7 reports an example of this behavior). \n\n**References**\n* [1] De Cao et al., Highly Parallel Autoregressive Entity Linking with Discriminative Correction, Proc. of EMNLP, 2021.",
" Thank you for considering our work “a significant progress for learned indexing structures”.\n\n**Biasing towards ngrams with high overlap**. We thank the reviewer for pointing this lack of information out. We sample training ngrams from the following distribution:\n\n$$\\frac{\ne^{L(q,d_{i:i+k}) / \\tau }\n}\n{\n\\sum_{j=1}^{|d|-k+1} e^{L(q,d_{j:j+k})/ \\tau }\n}$$\nwhere $L$ is the normalized Levenshtein distance between the query and a document span and $\\tau = 1.5$ is a temperature parameter. We will add this information to the paper.\n\n**Unsupervised**. Similarly to [1], we have added unsupervised examples to expose the model to the full document collection (see the ablation in Appendix B). Beyond that, we have not experimented with ad hoc, large scale pretraining, as the main goal was developing a working, viable model for autoregressive retrieval. We consider unsupervised autoregressive approaches an interesting area for further research. \n\n**Possible to optimize a ranking objective?** Thanks for the suggestion! There is no reason why we could not, in principle, use a training objective that is more aligned with the scoring function. For example, we could produce a document score by summing scores of sampled ngrams, and then train the model with a contrastive objective similar to those used by dense methods. We would like to explore this as future work.\n\n**References**\n* [1] Tay et al., Transformer Memory as a Differentiable Search Index, arXiv:2202.06991, 2022.",
" This paper proposes a novel scheme to apply autoregressive language models decoding to retrieval tasks, in which documents are represented using all constituent n-grams as possible identifiers. The key idea is to use an FM-Index to prevent the generative model from producing document identifiers with text outside of any of the indexed documents. The appeal of the approach is that large language models can be adapted (fine-tuned) for this task without major architectural changes. Empirically, the approach performs competitively with a selection of recent baselines, in some cases outperforming them.\n\n This paper does a good job explaining current drawbacks of autoregressive models in retrieval tasks (related work) and the empirical results demonstrate that the proposed scheme does improve the quality of retrieval results.\n\nMy main concern is that there may be better ways to apply large language models for retrieval, such as the re-ranking approach that is mentioned in the introduction; this wouldn't require complex generation involving indices and constrained decoding. Indeed, re-ranking shares the purported benefits of the proposed approach, namely straightforward application of large language models without architectural changes, with the key advantage that it only requires *scoring* a (query, target) pair. Autoregressive models, by definition, suffer from slow decoding speed, and therefore I'm skeptical the latency would be low enough for this approach to be practical in many settings.\n\nRegarding the experiments, not enough motivation is given for why the baselines were selected, and it's not clear whether these truly represent SoTA results. For example, do you compare to a re-ranking baseline? There are many approaches from the IR community such as ColBERT that seem relevant but are not discussed.\n\nRegarding the selected baselines, it doesn't look like the baselines leverage any sort of constrained decoding. It would be interesting to constrain decoding using simpler alternatives to fully understand the impact of the FM index and proposed weighting scheme.\n\nThe paper is readable and well-written. Additionally, the authors detail their experiments well and share their code which bodes well for reproducibility.\n\nL37: extend->extent\nL230: \"our the size of our index is\" * What are the implications of the assumption that documents are indexed by exact n-grams? For example, it would be good to provide intuition for why this does (or does not) preclude matching documents via synonyms or related words / expressions. * I would have liked to see more discussion of inference-time latency.\n",
" - The paper presents an approach (called as SEAL) for document retrieval where a language model conditioned on a question generates n-gram tokens to identify the relevant documents from the evidence documents (or passages). To enable this functionality, the method trains BART using question and n-gram pairs, where the n-grams are sampled from the gold passages. To constrain generation to output valid n-grams such that they correspond to some documents, the approach indexes the evidence documents with an efficient datastructure called FM index. The paper presents several ways to score the generated n-grams, one where the n-gram corpus level frequency is considered and another one where they try to score multiple n-grams using intersection of tokens. They experiment with Natural Questions (NQ) Open dataset and the KILT tasks to showcase the competitiveness of their proposed approach. **Strengths**\n - The biggest strength of the paper is the novelty of the proposed approach. While the dominant classes of retrieval methods consist of training dual encoder models with pre-computed evidence indexes, this paper presents another method with which retrieval can be performed. \n\n- The paper is well-written, organized, sound experiments, and related work section is good. The authors have submitted the code-base which will make the work reproducible. \n\n\n**Weaknesses and Suggestions for Improvements**\n\n- Apart from NQ, the paper does not perform evaluation on popular QA datasets such as MSMARCO, TriviaQA, SQuAD, WebQuestions, and Entity Questions where the performance of baseline models are well established. These are the datasets where a large fraction of the retrieval papers report results and benchmark their models. SEAL should also train a common model on all these datasets in the DPR-Multi style to assess the generalization of their approach.\n\n- Performance comparison with DPR and other strong dual encoder models. From the results in Table 3, SEAL falls behind DPR on Accuracy@ 5 and Accuracy@20, where its performance is considerably low. I suggest it’s important to highlight this limitation and probe further the reasons. There should be a more rigorous performance comparison with improved dual-encoder training approaches which obtain much better results. For example, the model ANCI (https://openreview.net/forum?id=zeFrfgyZln) and ICT-DPR and MSS-DPR in (https://arxiv.org/abs/2101.00408) obtains better performance numbers than DPR and these results should be reported in Table 3.\n\n- To understand the dependence of SEAL model on number of training examples, it will be useful to compare the sample efficiency of SEAL algorithm with that of DPR on the NQ training examples.\n\n- Considering the number of trainable parameters, where DPR trains 220M parameters while SEAL trains ~400 M parameters, the direct comparison with DPR is not fair. This should be clearly highlighted in the paper. Are the benefits of SEAL model on KILT tasks mainly due to the trainable parameter size? It would be good to finetune smaller and larger generator models such as different configurations of T5 (or T5 lm adapted models) and then study the correlation of performance vs model size.\n\n- The writing in the method section seems a bit dull, especially the paragraphs of “factoring in FM-index frequencies” and “intersective scoring for multiple n-grams”. 
Using visual illustrations and diagrams to convey the information would help the reader understand the importance of these ideas more.\n\n- Probably a minor point, I feel that the training / inference time of SEAL / DPR should be calculated using the same compute hardware and then compared. Also, the 64 GB of DPR index size in Table 2 is when using 32-bit representation for floats. It has been shown that 16-bit or more optimized representations of passage embeddings also work just as well (https://arxiv.org/abs/2106.05346). Similarly, a GPU is not always required to perform fast search. Toolkits such as ScaNN (https://arxiv.org/abs/1908.10396) also work well on CPU.\n Please see the points under the Weaknesses section.\n\n** EDIT **\nIncreasing the rating to 6. Yes, the authors adequately addressed the limitations and potential negative societal impact of their work.",
" This work proposes SEAL, which trains a language model to perform retrieval tasks leveraging constrained decoding over a FM-index of ngrams in a corpus. More specifically BART is finetuned to generate sampled 10 ngrams from each ground truth document biased in favor of ngrams with high character overlap with the query.\n\nSeveral scoring functions are explored including:\n(LM scoring) the score P(n|q) of the most probable fixed-length ngram.\n(LM+FM scoring) the pointwise mutual information between query and ngram computed from P(n|q) and P(n) \n(LM+FM intersective scoring) aggregates the contribution of multiple ngrams\n\nFor page level task NQ320k, SEAL improve over the quality of a much larger retrieval solution (GENRE). For passage level task (NQ), SEAL significantly improve over DSI, which suffers capacity losses when memorizing document ids. SEAL produces comparable results to DPR and GAR (especially for A@100 which is the most important for QA purposes). More detailed analysis shows that SEAL is a lot better on novel questions or answers. \nFor KILT passage retrieval task, SEAL improved the state of the art result significantly.\nFor KILT downstream tasks, SEAL improves state-of-the-art for 4 out of 7 tasks.\n\nThis work represents a significant progress for learned indexing structures -- combining LM with multi-point indexing and scoring.\n see the summary how is the ngram sampling done to bias towards overlap with queries? there seem to be an lack of details.\n\nwhy only train from the ground-truth doc? why not leverage unsupervized training for retrieval?\n\nit seems that the model is never trained to optimize ranking quality directly. Is there any possibility for learning to rank?\n\n see the summary",
" This paper proposed to use autoregressive language models as search engines. Previous work in this direction has explored ways to do such retrieval by generating some unique identifiers of the documents. The authors proposed an even simpler approach -- using all ngrams in a document as possible identifiers and directly generating them using an autoregressive language model. This process was done through constrained decoding with the help of FM-index. They conducted experiments on the Natural Questions and the KILT benchmark, and the experimental results demonstrated that they could achieve comparable performance with the SOTA models while using a considerably lighter memory footprint.\n - Originality: Although the idea of generating unique document identifiers using autoregressive models is not quite new, I do like the idea of generating the ngrams in a passage as possible identifiers. Also, I appreciate the efforts the authors put to make this simple idea work, as can be seen from the results, one needs to combine equations (2), (3), and (4) to make it really work.\n\n- Quality: The proposed approach is technically sound and interesting. As seen from the experimental results, they outperformed the prior autoregressive baselines on NQ and KILT. They also achieved comparable performance with DPR on NQ and outperformed DPR by a large margin on KILT. Note that the authors also claimed they achieved SOTA performance on 4 out of 7 datasets on KILT. Although this does not hold by now, I don't believe it diminishes the quality of this work. \nOther comments I have: (1) From my perspective, the ngram length $k$ and the number of k-grams selected during training are very important hyperparameters (set to 10 in the experiments) but lacked detailed discussion over in the paper. I would like to see an experiment regarding the performance change over a different length of ngrams. It could just be a small-scale experiment but I think having those numbers would help consolidate the claims made in the paper. (2) Correct me if I am wrong but I do not find the details of how to select a fixed number of k-grams from a total of $|d| - k$ k-grams. How important is the selecting strategy? It would be better if the authors can establish a baseline using a random selection of the k-grams.\n\n- Clarity: In general this paper is well-organized and easy to follow. However, I think having a more formal definition of some notations in the paper could further improve the clarity. For example: \nLine 142, does ngrams(K) denote $K$ ngrams or a set of k-grams? \nLine 147, how is $P(n|q)$ computed?\n\n- Significance: I think this paper studied a very interesting problem and proposed a promising approach. Although the experiments are not comprehensive for the readers to understand every aspect of the system, I think it still brings many benefits to the research community. I believe it would lead to some follow-ups in this direction.\n\n - By comparing Table 2 in this paper with Table 3 (Tay et al., 2022), I saw that the performance varies a lot on NQ320K even for BM25. Also, they basically achieved better performance on hits@1 while you achieved better performance on hits@10. Can you elaborate on this issue a bit more? This makes me hard to judge where this work sits in the literature.\n- In Table 5, I noticed that the performance of **SEAL (LM+FM, intersective)** outperforms **SEAL (LM+FM)** by a large margin (almost doubled), whereas it is not the case on NQ. 
Do you know why this happens?\n\n\n The authors sufficiently addressed the limitations in the paper."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
8,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
4,
4
] | [
"Wuh8VjJIiv",
"OaZ2xPVTPRm",
"qhApTs72KJP",
"X9X0xDxuSH",
"iXxGqu3a6x",
"p6Zj8JJ2YRp",
"nips_2022_Z4kZxAjg8Y",
"nips_2022_Z4kZxAjg8Y",
"nips_2022_Z4kZxAjg8Y",
"nips_2022_Z4kZxAjg8Y"
] |
nips_2022_7-bMGPCQCm7 | Heatmap Distribution Matching for Human Pose Estimation | For tackling the task of 2D human pose estimation, the great majority of the recent methods regard this task as a heatmap estimation problem, and optimize the heatmap prediction using the Gaussian-smoothed heatmap as the optimization objective and using the pixel-wise loss (e.g. MSE) as the loss function. In this paper, we show that optimizing the heatmap prediction in such a way, the model performance of body joint localization, which is the intrinsic objective of this task, may not be consistently improved during the optimization process of the heatmap prediction. To address this problem, from a novel perspective, we propose to formulate the optimization of the heatmap prediction as a distribution matching problem between the predicted heatmap and the dot annotation of the body joint directly. By doing so, our proposed method does not need to construct the Gaussian-smoothed heatmap and can achieve a more consistent model performance improvement during the optimization of the heatmap prediction. We show the effectiveness of our proposed method through extensive experiments on the COCO dataset and the MPII dataset. | Accept | This paper proposes to use the Earth Mover's Distance as the loss function between a predicted heatmap and the ground-truth heatmap. It initially received mixed reviews. After rebuttal and discussion, all reviewers converged to acceptance of the paper. Reviewers believe this paper is novel and that it achieves significant practical performance improvements across several models. AC follows the consensus and recommends acceptance of the paper. | test | [
"3SZk8PGMYlH",
"FcHuQxkWmez",
"aYyHJXk_9eQ",
"1sxt_fL1rTe",
"F1q4BaiFYPu",
"wayHXr2hHfl",
"XbXZYU3OSjd",
"IREZ5n0MYv6",
"GoNJU3j8VKZ",
"STyjzKKg6yq",
"K4x5AO3vbe",
"_ZbnwJ1Hgno",
"USbh0kO0TaX",
"swkU4QS2u1F",
"VDl_iC17e32",
"kKYD44TMxJJ",
"bCcXNkALcem"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Your replies generally answered my concerns and thus I change my rating. The suggestion of clarifying the core idea and supplementing the missing ablation study in the revised version, as mentioned in *Weakness*, still holds.",
" We thank the reviewer for the additional thoughtful discussions. In the following, we seek to address each of the concerns.\n\n>**Q8:** *\"In Q7: Experiments [...], I meant Missing comparisons with the same Gaussian heatmap.\"*\n\n**A8:** Below, we show the comparisons with the same Gaussian heatmap as the target. Note that all the experiments are conducted with the same backbone model (HRNet-W48) and on the same set (COCO validation set).\n\n| Method | $AP$ | $AP^{50}$ | $AP^{75}$ | $AP^{M}$ | $AP^{L}$ | $AR$ |\n|---|---|---|---|---|---|---|\n| **Gaussian heatmap (MSE loss)** | 77.1 | 91.8 | 83.8 | 73.5 | 83.5 | 81.8 |\n| **Gaussian heatmap (Sinkhorn Distance loss)** | 77.7 | 92.2 | 84.3 | 74.0 | 83.8 | 82.3 |\n\nAs shown, with the same Gaussian heatmap as the target, the proposed Sinkhorn Distance loss leads to a performance improvement compared to MSE loss.\n\n>**Q9:** *\"Why not use the Gaussian heatmap as the demander/target? A Gaussian heatmap can also, even better, reduce the quantization error, which is the crucial reason for proposing a sub-pixels demander.\"\"Besides, comparisons between sub-pixels demander and Gaussian heatmap are missing. Is a sub-pixels demander better than a Gaussian heatmap? If not, the difference between the proposed \"novel pipeline\" and the previous \"Gaussian heatmap and MSE\" pipeline is only the existing Sinkhorn Distance loss.\"* \n\n**A9:** Below we further explain why we do not use the Gaussian heatmap as the demander/target. (1) As shown in Line 41-45 of our paper and also in [17], by using the Gaussian heatmap as the demander/target, the standard deviations of the Gaussian distributions often need to be carefully chosen, which is non-trivial. (2) As shown in Fig. 3 and Line 341-347 of our paper, while the human pose estimation task aims to localize body joints accurately, by using the Gaussian heatmap as the demander/target, the predicted heatmap is not very compact. This can lead to difficulties in accurately localizing the body joints.\n\nMeanwhile, our proposed pipeline can alleviate the misalignment problem between the training loss and the final body joint localization performance (as elaborated in **A5**). Also, as shown in Fig. 3 of our paper, by using our proposed pipeline with sub-pixels demander, a more compact body joint localization can be achieved. Thus, the body joints can be localized more accurately. By using our proposed pipeline, we also bypass the step of choosing proper standard deviations.\n\nWe also compare between sub-pixels demander and Gaussian heatmap with the same backbone model (HRNet-W48) and on the same set (COCO validation set). With the same Sinkhorn Distance loss, the variant using the sub-pixels demander further improves the performance over the variant using the Gaussian heatmap (78.8 vs 77.7 for AP), demonstrating the effectiveness of the sub-pixels demander in our proposed pipeline. \n\n>**Q10:** *\"In A6, I'm familiar with the training of HRNet-W48 on COCO. If trained on a single RTX 3090 GPU, it can not be done in three days. I suggest re-checking the experiment.\"*\n\n**A10:** In many recent human pose estimation works [33, 26, 37], during training, the batch size is set to either 128 or 256, and multiple GPU cards are used. Following this setting, in our work, during training, we set the batch size to 256 and run the experiments using a GPU cluster with RTX 3090 GPU, which has 8 GPU cards in it. We will make this clearer in the revised version.",
" 1. In **Q7: Experiments** *\"Missing comparisons between the MSE loss and the proposed loss with the same type of target heatmap.\"*, I meant Missing comparisons with the same Gaussian heatmap. Why not use the Gaussian heatmap as the demander/target? A gaussian heatmap can also, even better, reduce the quantization error, which is the crucial reason for proposing a sub-pixels demander. \n\n2. Besides, comparisons between sub-pixels demander and Gaussian heatmap are missing. Is a sub-pixels demander better than a gaussian heatmap? If not, the difference between the proposed \"novel pipeline\" and the previous \"Gaussian heatmap and MSE\" pipeline is only the existing Sinkhorn Distance loss.\n\n3. In **A6**, I'm familiar with the training of HRNet-W48 on COCO. If trained on a single RTX 3090 GPU, it can not be done in three days. I suggest re-checking the experiment. ",
" We thank all reviewers for recognition of our contributions (Reviewer wVS3:\"an easy way to improve the quality of joint localization''; Reviewer 1LPW:\"notable performance improvement''; Reviewer aExd:\"novel and technically sound\", \"heatmap is a basic and important tool in human pose estimation'', \"model-agnostic that can be applied in most pose estimator and improve the performance; Reviewer ggaN: \"elegant and effective\", \"marvelous performance\"). ",
" >**Q2:** *\"It will be interesting to observe how does this method compare to a heatmap regression based method where a multi-step training is applied where with each step of training the standard deviation is decreased. [...] If the objective is to decrease the area of localization it is important to observe a naive approach first. Several papers have shown that to work for facial landmark localization.\"*\n\n**A2:** Thanks for your suggestion. Below we compare our method with the multi-step training method where with each step of training the standard deviation is decreased. Specifically, a facial landmark localization work named \"AdaLoss: Adaptive Loss Function for Landmark Localization\" introduced two different methods of decreasing the standard deviation over steps. The first method (**Linear Decrease**) is to decrease the standard deviation linearly from a quarter of the heatmap resolution to 0 over epochs. The second method (**Decrease Based on Loss Variance**) is to decrease the standard deviation when the loss value has not changed significantly over the last 3 epochs. Below we compare our method with these methods and the baseline method on the same backbone model (HRNet-W48) on COCO validation set.\n\n| Method | $AP$ | $AP^{50}$ | $AP^{75}$ | $AP^{M}$ | $AP^{L}$ | $AR$ |\n|---|---|---|---|---|---|---|\n| **Baseline** | 77.1 | 91.8 | 83.8 | 73.5 | 83.5 | 81.8 |\n| **Linear Decrease** | 77.2 | 92.0 | 83.9 | 73.5 | 83.6 | 82.0 | \n| **Decrease Based on Loss Variance** | 77.3 | 92.1 | 84.0 | 73.6 | 83.6 | 82.0 |\n| **Ours** | **78.8** | **92.5** | **85.1** | **75.0** | **85.3** | **83.1** |\n\nAs shown, the multi-step training methods (**Linear Decrease** and **Decrease Based on Loss Variance**) achieve slightly better performance than the baseline method. However, our performance is still significantly higher than the performance of these methods, demonstrating the effectiveness of our method that can consistently aggregate the pixel values in the predicted heatmap towards the dot annotation position and thus lead to superior performance. \n\n>**Q3:** *\"It will be good to see the failure cases of this method. [...] Please provide failure cases in the experiment section. The paper does not talk about any limitations of the proposed approach.\"*\n\n**A3:** Thanks for the suggestion. We have shown the failure cases in Section E in the revised version of our Supplementary. As shown, in some extremely challenging cases (e.g., body joints under severe occlusion), both the baseline method and our method may not localize the body joints very accurately. This is also a limitation, and an important research problem in the task of pose estimation that we will take as a future direction.\n\n>**Q4:** *\"Were the networks in the experiment initialized from already available pre-trained weights? If that in the case it will be interesting to observe how the networks train when trained from scratch. Do the networks converge with EMD when trained from scratch. [...] Earth movers distance can be difficult to train from scratch. This limits the paper to only be applied to already existing pre trained networks.\"*\n\n**A4:** Yes, the networks in the experiment are initialized from already available pre-trained weights. We also try to train networks from scratch with our method applied, and we report the results on COCO validation set below. 
Note that both methods below are with the same backbone model (HRNet-W48).\n\n| Method | $AP$ | $AP^{50}$ | $AP^{75}$ | $AP^{M}$ | $AP^{L}$ | $AR$ |\n|---|---|---|---|---|---|---|\n| **Train from scratch** | 78.8 | 92.4 | 85.3 | 75.0 | 85.2 | 83.2 |\n| **Train from pre-trained weights** | 78.8 | 92.5 | 85.1 | 75.0 | 85.3 | 83.1 | \n\nAs shown, via training from scratch with our method applied, the networks can also converge and achieve similar performance. Thus, our method can be used both on models trained from scratch and models trained from pre-trained weights.\n",
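For concreteness, the "Linear Decrease" schedule summarized in **A2** (the Gaussian standard deviation annealed linearly from a quarter of the heatmap resolution to 0 over epochs) could look like the sketch below. This is an illustrative reading of that baseline, not the authors' or AdaLoss's actual code; the function name and signature are assumptions.

```python
def linear_sigma(epoch, total_epochs, heatmap_size):
    """Anneal the Gaussian target's std linearly from heatmap_size / 4
    down to 0 over training (the 'Linear Decrease' baseline)."""
    start = heatmap_size / 4.0  # a quarter of the heatmap resolution
    return max(start * (1.0 - epoch / float(total_epochs)), 0.0)
```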
" We thank the reviewer for the thoughtful comments. In the following, we seek to address each of the concerns.\n\n\n>**Q1:** *\"Several other papers have shown the limitation of heatmap regression and proposed solution to tackle those limitation. For example, integral pose regression shows that it achieves similar numbers on MPII dataset, which the authors have not included in the table. However, the paper does show companion with pose regression for COCO dataset. Similarly [7] also shows similar performance on MPII dataset.\"*\n\n**A1:**\nCompared to previous works tackling the limitations of heatmap regression, our method outperforms these methods on MPII validation set.\n\n**(1) Comparison with integral pose regression.** \n\nMPII dataset has two subsets that can be used to assess performance, i.e., MPII validation set and MPII testing set. Integral pose regression published in ECCV 2018 reported its performance (mean PCKh\\@0.5 score) on both MPII validation set (from 86.0 to 87.3) and MPII testing set (from 90.0 to 91.0). Since the ground-truth of MPII testing set is not publicly available, the evaluation on MPII testing set can only be done by sending an email to the dataset owner. Thus, most recent works ([39] from CVPR 2020, [15] from ICCV 2021, [7] from ICCV 2021, [a] from CVPR 2021, [b] from CVPR 2021) only evaluate their model performance on MPII validation set which we follow in our experiments. **On MPII validation set, the performance of our method (from 90.3 to 90.9) is significantly higher than the integral pose regression (from 86.0 to 87.3), showing the superior performance of our method.**\n\nMoreover, during our experiments, we also try to evaluate our method on MPII testing set by sending emails to the dataset owner. However, we did not get response. We will add integral pose regression to Tab. 3 of our paper and write the evaluation settings clearer in the revised version. \n\n**(2) Comparison with [7].**\n\nOn MPII validation set, our performance is 0.4 higher than [7] on ResNet-152 and 0.3 higher than [7] on HRNet-W32. We believe that this level of performance improvement is already significant on MPII dataset. This is because in recent years, performance improvement on MPII dataset has been close to saturation, and other recent methods have also only yielded performance improvement at similar scales. For example, [7] (ICCV 2021) is 0.3 higher than its baseline on ResNet-152 and 0.2 higher than its baseline on HRNet-W32; [39] (CVPR 2020) is 0.3 higher than its baseline on HRNet-W32; [15] (ICCV 2021) is 0.1 higher than the previous method.\n\nMoreover, on the more challenging COCO dataset, our performance is significantly higher than [7] (76.7 vs 74.4 on COCO validation set with ResNet-152 as the backbone, 78.2 vs 75.8 on COCO validation set with HRNet-W32 as the backbone, and 77.2 vs 76.1 on COCO test-dev set with HRnet-W48 as the backbone).\n\n[a] Yu, Changqian, et al. Lite-HRNet: A lightweight high-resolution network. CVPR, 2021.\n\n[b] Li, Ke, et al. Pose recognition with cascade transformers.\" CVPR, 2021.",
" >**Q6: Experiments** *\"Comparisons on the training time between the baseline and the proposed method. As the Sinkhorn Distance takes 1000 iterations and each iteration has calculations on $(H_{hm} * W_{hm}) \\times (H_{hm} * W_{hm})$ dim matrixes, it's presumed to be time-consuming.\"*\n\n**A6:** We show the training time of the baseline (HRNet-W48 [26]) and the proposed method (HRNet-W48 + Ours) below with the same backbone model on COCO dataset on RTX 3090 GPU. For fair comparison, in the experiment below, we train both the baseline and the proposed method from scratch. \n\n| Method | Training Time | Inference Time | Performance(AP) |\n|---|---|---|---|\n| **Baseline (HRNet-W48 [26])** | 2 days | 0.02 second per sample | 77.1 |\n| **HRNet-W48 + Ours** | 3 days | 0.02 second per sample | 78.8 |\n\nAs shown, though our method achieves obviously better performance, it only brings relatively small increase of training time. Note that although the calculation of Sinkhorn Distance takes iterations, its computation cost is still not much, when compared to the computation cost of the whole backbone.\n\nAlso note that the calculations of Sinkhorn Distance is only conducted during training. Thus the inference time of the baseline and our method are almost the same.\n\n\n>**Q7: Experiments** *\"Missing comparisons between the MSE loss and the proposed loss with the same type of target heatmap.\"*\n\n**A7:** Below we compare our proposed pipeline with a variant that treats the sub-pixel demander as the target heatmap and optimizes the heatmap prediction via minimizing the MSE loss between the predicted heatmap and the sub-pixel demander. Note that both experiments are conducted with the same backbone model (HRNet-W48) and on the same set (COCO validation set). As shown below, our proposed pipeline outperforms the variant by a large margin under the same type of target heatmap, demonstrating the effectiveness of our proposed method which can consistently aggregate the pixel values in the predicted heatmap towards the dot annotation, and thus lead to a better model performance.\n\n| Method | $AP$ | $AP^{50}$ | $AP^{75}$ | $AP^{M}$ | $AP^{L}$ | $AR$ |\n|---|---|---|---|---|---|---|\n| **Sub-pixel demander (MSE loss)** | 50.7 | 86.8 | 49.0 | 46.4 | 55.2 | 61.1 |\n| **Ours** | 78.8 | 92.5 | 85.1 | 75.0 | 85.3 | 83.1 |",
" >**Q2: Novelty & Contribution** *\"Proposing to replace MSE with Sinkhorn Distance does not have enough contribution.\"*\n\n**A2:** \nAs mentioned in the above **A1**, since *heatmap is a basic and important tool in human pose estimation*, in our work, we aim to explore how to better optimize such an important heatmap prediction process.\nHence, the contributions of our work lie in: (1) We present an analysis that the commonly used heatmap optimization pipeline *is not optimal* and can result in a misalignment problem between the training loss and the final performance.\n(2) We propose a novel solution to address the misalignment problem.\n\nIn specific, to handle this problem, we propose a novel pipeline to formulate the optimization of the heatmap as a mass transportation problem directly between the predicted heatmap and the dot annotation. As a result, we can directly minimize the difference between the predicted heatmap and the dot annotation, and the misalignment problem between the loss function and the final performance can thus be tackled.\nThis implies that replacing MSE with Sinkhorn Distance is just a part of our proposed pipeline, and is only a part of our contributions.\n\n>**Q3: Presentation** *\"There are inconsistent presentations. In my perspective, the limitation to address lies in the pixel-wise loss (e.g. MSE), which is claimed in the Introduction(line45-52). However, in Abstract(line4-7), it's claimed that using the Gaussian-smoothed heatmap as the optimization objective is the limitation of previous methods.\"*\n\n**A3:** As mentioned in the above **A1**, most heatmap-based pose estimation methods optimize the heatmap prediction through a commonly used pipeline via using the **pixel-wise loss** (e.g. MSE) as the loss function and using the **Gaussian-smoothed heatmap** as the optimization objective. \nThus, the limitation to be addressed is the misalignment problem in such a pipeline.\nWe will make this clearer in the revised version.\n\n\n>**Q4: Presentation** *\"The paper's target should not be 'bypassing the step of constructing the GT heatmap', but to alleviate or solve the limitation of loss. [...] The demander $D^k$ is just another type of GT heatmap and still needs the construction step. The naive demander formulation is the same as the dot-annotated heatmap.\"*\n\n**A4:** Thanks for your suggestion. We will rephrase 'bypassing the step of constructing the GT heatmap' as 'handling the misalignment problem in the commonly used pipeline' in the revised version.\n\n>**Q5: Presentation** *\"Analysis on why the proposed method can alleviate or solve the limitation of loss.\"*\n\n**A5:**\nBelow, we analyze why the proposed method can handle the limitation of loss (i.e., the misalignment problem between the training loss and the final performance). Specifically, this is because we formulate the cost function between each pair of supplier and demander as their L2 distance (as shown in Line 207-209 of our paper). Therefore, when the pixel values are moved towards the GT dot annotation, the corresponding cost will decrease. Hence, minimizing the EMD based on such a cost function can explicitly aggregate the pixel values in the predicted heatmap towards the dot annotation. This also means that under such a cost function, we are directly minimizing the distance between the location of the predicted body joint and the location of the dot annotation. 
As a result, the training loss and the final pose estimation (body joint localization) performance are better aligned in our method.",
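To make the L2 cost in **A5** concrete, here is a minimal NumPy sketch that treats every heatmap pixel as a supplier and a single sub-pixel dot annotation as the demander; the helper name and shapes are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def l2_cost_matrix(h, w, joint_xy):
    """Cost between every heatmap pixel (supplier) and one sub-pixel
    dot annotation (demander): their L2 distance in pixel coordinates.
    Returns an (h*w, 1) matrix; joint_xy = (x, y) may be fractional."""
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    return np.linalg.norm(coords - np.asarray(joint_xy, dtype=float),
                          axis=1, keepdims=True)
```

Because each entry grows with a pixel's distance to the annotation, any transport plan that moves mass toward the annotation lowers the objective, which is the alignment argument made in **A5**.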
" We thank the reviewer for the thoughtful comments. In the following, we seek to address each of the concerns.\n\n>**Q1: Novelty & Contribution** *\"Formulating the heatmap optimization as a distribution matching problem is not a novel perspective. Previous methods represent the GT coordinates in either Categorical or Gaussian distribution and optimize the predictions towards the GT distribution by minimizing MSE.\"*\n\n**A1:** In human pose estimation, the heatmap-based methods are a popular category of methods, and *the heatmap is a basic and important tool* (as mentioned by Reviewer aExd). \n\nTo optimize the heatmap prediction, the pioneering heatmap-based human pose estimation method (i.e., \"Joint Training of a Convolutional Network and a Graphical Model for Human Pose Estimation\", NIPS 2014) proposed a heatmap optimization pipeline to optimize the heatmap prediction via minimizing the MSE loss between the predicted heatmap and the Gaussian-smoothed heatmap. Currently, this pipeline has become the most **commonly used pipeline** in human pose estimation. \n\nAs pointed out by Reviewer wVS3, in our paper, we *present an analysis of why minimizing the Gaussian heatmap with l2 loss* (i.e., the **commonly used pipeline**) *is not optimal* (i.e., by using this pipeline, a misalignment exists between the training loss and the final performance). After the analysis, to handle the problem, rather than measuring the MSE loss between the predicted heatmap and the Gaussian-smoothed heatmap (i.e., instead of using either the MSE loss or the Gaussian-smoothed heatmap), we propose a new pipeline that formulates the optimization of the heatmap as a mass transportation problem directly between the predicted heatmap and the dot annotation. By doing so, we can directly minimize the difference between the predicted heatmap and the GT dot annotation, and thus the misalignment problem between the loss function and the final performance can be tackled.\n\nNote that besides the aforementioned **commonly used pipeline**, in our paper, we also discuss another possible heatmap optimization pipeline - the **naive pipeline**. To optimize the heatmap prediction, the **naive pipeline** first constructs the dot-annotated heatmap from the GT coordinates and then minimizes the MSE loss between the predicted heatmap and the constructed dot-annotated heatmap. However, since the dot-annotated heatmap has the same zero value for all pixels except the pixel representing the dot annotation of the body joint, this naive pipeline can lead to a hard training process and a very significant performance drop (more than **30%** of performance drop on COCO validation set compared to the **commonly used pipeline**), as indicated in [28] and also demonstrated in Tab. 1 of our Supplementary. 
Hence, to the best of our knowledge, this naive pipeline is not used in any state-of-the-art human pose estimation works.\n\nThe very significant performance drop of this **naive pipeline** also demonstrates the large difference between this pipeline and our proposed method, as our proposed method can *largely improve the performance of pose estimation methods* (as mentioned by Reviewer ggaN) on both COCO and MPII datasets.\n\n**In summary, heatmap is a basic tool for pose estimation, while we analyse that there is an important misalignment problem in the popularly-used heatmap techniques and propose a new method for addressing the problem.** \n\n**To the best of our knowledge, we are the first to formulate the optimization of heatmap prediction as a mass transportation problem, and thus can minimize the difference between the predicted heatmap and the dot annotation directly, which thus tackles the important misalignment problem in the commonly-used heatmap optimization pipeline, which is crucial since the heatmap is a basic and important tool in human pose estimation (as mentioned by Reviewer aExd). Therefore, we think that the contribution of our paper can be significant and meaningful to the community.**\n\nNote that in our paper, the term **distribution matching** actually refers to **mass transportation between the supplier and the demander**. We will make this clearer in the revised version.",
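To make the contrast in **A1** concrete, the two target constructions being compared can be sketched as below: the naive dot-annotated heatmap versus the commonly used Gaussian-smoothed heatmap. The integer-pixel center and the unnormalized peak of 1 are illustrative assumptions.

```python
import numpy as np

def dot_target(h, w, cx, cy):
    """Naive dot-annotated target: zero everywhere except the joint pixel."""
    t = np.zeros((h, w))
    t[cy, cx] = 1.0
    return t

def gaussian_target(h, w, cx, cy, sigma=2.0):
    """Commonly used Gaussian-smoothed target centered on the joint pixel."""
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
```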
" >**Q4:** *\"The authors can compare in more detail with UDP and Darkpose, which fix some issues in native Gaussian Heatmap.\"*\n\n**A4:** Thanks for your suggestion. As shown in Tables 1-3 of our paper, we have compared the performance of our method and these two methods (UDP and Darkpose) on both COCO and MPII datasets. Our method achieves better performance than these two methods, demonstrating the effectiveness of our method. \n\nHere we also explain the difference between our method and these two methods.\nSpecifically, while fixing some issues in Gaussian Heatmap, both UDP and Darkpose still use Gaussian Heatmap as the target Heatmap in the Heatmap optimization process: (1) UDP proposes to construct two offset maps in addition to the Gaussian heatmap to reduce the statistical error in standard encoding-decoding; (2) Darkpose proposes to place the center of the Gaussian kernel used to construct the Gaussian heatmap directly at the position of the GT coordinates, instead of the center of the nearest pixel, to reduce the quantisation error in the standard coordinate encoding process. In contrast, in our method, instead of using any Gaussian heatmap, we formulate the optimization of the heatmap as a mass transportation problem directly between the predicted heatmap and the dot annotation. We will add more detailed discussions to paper.",
" We thank the reviewer for the thoughtful comments. In the following, we seek to address each of the concerns.\n\n>**Q1:** *\"The paper lacks the complexity analysis of the proposed method, such as the training and inference time. Compared with baseline, does the running time increase? This helps to assess the practicality of the method.\"*\n\n**A1:** We show the training time of the baseline (HRNet-W48 [26]) and the proposed method (HRNet-W48 + Ours) below with the same backbone model on COCO dataset on RTX 3090 GPU. For fair comparison, in the experiment below, we train both the baseline and the proposed method from scratch. \n\n| Method | Training Time | Inference Time | Performance(AP) |\n|---|---|---|---|\n| **Baseline (HRNet-W48 [26])** | 2 days | 0.02 second per sample | 77.1 |\n| **HRNet-W48 + Ours** | 3 days | 0.02 second per sample | 78.8 |\n\n\nAs shown, though our method achieves obviously better performance, it only brings relatively small increase of the training time. Moreover, as the calculations of EMD is only conducted during training, the inference time with and without the proposed method are almost the same. \n\n>**Q2:** *\"Why baseline dot-annotated heatmap (MSE loss) does not work, but proposed method does?\"*\n\n**A2:** The baseline dot-annotated heatmap (MSE loss) does not work since it faces a hard training process and thus has a weak model performance, as shown by [28]. \n\nSpecifically, the GT dot-annotated heatmap has the same zero value for all pixels except the pixel representing the dot annotation of the body joint.\nOwing to such extreme sparsity, it is difficult to optimize the heatmap prediction by properly pushing the activated region on the predicted heatmap to move towards the GT position. More specifically, in heatmap estimation, we need to activate the pixel corresponding to the GT position, however, when the activated region in the predicted heatmap is not adjacent to the GT dot annotation, no matter which direction the activated region moves on the heatmap in an optimization step, the corresponding MSE loss could still keep the same. In other words, the baseline dot-annotated heatmap (MSE loss) can have difficulties in optimizing the heatmap prediction by moving (aggregating) the activation towards the dot annotation position. Hence, the baseline dot-annotated heatmap (MSE loss) faces a hard training process (as shown in [28]) and thus a weak model performance (e.g., more than **30%** of performance drop on COCO validation set, as shown in Tab. 1 of our Supplementary).\n\nIn contrast, in our proposed method, we formulate the cost function between each pair of supplier and demander as their L2 distance (as shown in Line 207-209 of our paper). Because of this, when the pixel values (activated region) are moving (aggregating) towards the dot annotation position, the corresponding cost (loss) will decrease. Hence, minimizing the EMD based on such a cost function can explicitly move (aggregate) the pixel values in the predicted heatmap towards the dot annotation position. Therefore, when minimizing the proposed loss function, our method can help to achieve a more consistent model performance improvement.\n\n>**Q3:** *What role does Heatmap normalization play here?*\n\n**A3:** A prerequisite for being able to calculate the Earth Mover's Distance between the suppliers $S$ and the demanders $D$ is that the total units of mass stored by $S$ needs to be the same as the total units of mass required by $D$. 
Thus, the role Heatmap normalization plays here is to ensure that this prerequisite is met. Specifically, we use Heatmap normalization to ensure that the total units of mass stored by the suppliers $S$ is 1. Then as the total units of mass required by the demanders $D$ is also 1, the total units of mass stored by $S$ and the total units of mass required by $D$ can then be guaranteed to be the same.",
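A minimal PyTorch sketch of the supplier construction described in **A3**: non-negativity via ReLU followed by normalization to unit mass (cf. Eq. 3 of the paper, as cited in the reply to Reviewer ggaN). The epsilon guard for the all-zero edge case is an added assumption.

```python
import torch

def heatmap_to_supplier(pred):
    """Map a raw predicted heatmap (H, W) to a supplier distribution:
    clamp negatives with ReLU, then normalize so the total mass is 1."""
    s = torch.relu(pred).flatten()
    return s / (s.sum() + 1e-8)  # eps guards the all-zero edge case
```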
" >**Q3:** *\"It seems that EMD can also be applied to the heatmaps with multiple keypoints, which means that the proposed functions can also be applied for multi-human pose estimation. Have the author tried this?\"*\n\n**A3:** We also apply our method to the heatmaps with multiple keypoints. Specifically, we apply our method on two commonly used backbone models where the outputs are heatmaps with multiple keypoints (i.e., HrHRnet-W32 [c] and HrHRnet-W48 [c]). As shown below, on both backbones, our method achieves consistent performance improvement and achieves SOTA performance on both multi-human COCO validation set and multi-human COCO test-dev set.\n\n| Method | Set | Input Size | $AP$ | $AP^{50}$ | $AP^{75}$ | $AP^{M}$ | $AP^{L}$ |\n|---|---|---|---|---|---|---|---|\n| HrHRnet-W32 [c] | multi-human COCO validation | 512 | 69.9 | 87.1 | 76.0 | 65.3 | 77.0 |\n| HrHRnet-W32 + Ours | multi-human COCO validation | 512 | 71.6 | 88.7 | 77.7 | 66.7 | 78.5 |\n\n| Method | Set | Input Size | $AP$ | $AP^{50}$ | $AP^{75}$ | $AP^{M}$ | $AP^{L}$ |\n|---|---|---|---|---|---|---|---|\n| HrHRnet-W32 [c] | multi-human COCO test-dev | 512 | 69.0 | 89.0 | 75.8 | 64.4 | 75.2 |\n| HrHRnet-W32 + Ours | multi-human COCO test-dev | 512 | 70.7 | 90.0 | 77.7 | 66.3 | 76.8 |\n\n| Method | Set | Input Size | $AP$ | $AP^{50}$ | $AP^{75}$ | $AP^{M}$ | $AP^{L}$ |\n|---|---|---|---|---|---|---|---|\n| HrHRnet-W48 [c] | multi-human COCO validation | 640 | 72.1 | 88.4 | 78.2 | 67.8 | 78.3 |\n| HrHRnet-W48 + Ours | multi-human COCO validation | 640 | 73.5 | 89.9 | 79.4 | 69.7 | 79.1 |\n\n| Method | Set | Input Size | $AP$ | $AP^{50}$ | $AP^{75}$ | $AP^{M}$ | $AP^{L}$ |\n|---|---|---|---|---|---|---|---|\n| HrHRnet-W48 [c] | multi-human COCO test-dev | 640 | 70.5 | 89.3 | 77.2 | 66.6 | 75.8 |\n| HrHRnet-W48 + Ours | multi-human COCO test-dev | 640 | 72.2 | 90.9 | 79.0 | 68.3 | 77.4 |\n\n\n[c] Cheng, Bowen, et al. HigherHRNet: Scale-aware representation learning for bottom-up human pose estimation. CVPR, 2020.",
" We thank the reviewer for the thoughtful comments. In the following, we seek to address each of the concerns.\n\n>**Q1:** *\"According to Line144, the authors seem to use Sinkhorn algorithm to solve the EMD. Is this process differentiable? Could you please supplement more details about how the gradients are passed backward?\"*\n\n**A1:** This process is differentiable. The analysis of (1) how the gradients are calculated from the Sinkhorn algorithm and (2) how the gradients are passed backward from our proposed loss $L_{Matching}$ into the backbone model was shown in Section C of our Supplementary. Below we also show that analysis:\n\n**(1) How the gradients are calculated from the Sinkhorn algorithm.**\n\nTo calculate the gradients from the Sinkhorn algorithm, for the $k$-th body joint, we first rewrite the corresponding Earth Mover's Distance $E^{reg}_{C^k}(S^k, D^k)$ in its dual form (as shown in Eq. 2 of our Supplementary). \n\nThen from its dual form, we can calculate the gradient of $E^{reg}_{C^k}(S^k, D^k)$ w.r.t. the supplier $S^k$ as $a^k$. Note that $a^k$ is already provided during computing the Earth Mover's Distance using Sinkhorn algorithm so calculating $a^k$ does not introduce additional computational cost.\n\n**(2) How the gradients are passed backward from $L_{Matching}$ into the backbone model.**\n\nTo pass the gradients backward from $L_{Matching}$ into the backbone model, for the $k$-th body joint, we first calculate the gradient of $L_{Matching}$ w.r.t. $S^k$ as $\\frac{\\partial L_{Matching}}{\\partial S^k} = \\frac{\\partial L_k}{\\partial S^k} = \\frac{\\partial E^{reg}_{C^k}(S^k, D^k)}{\\partial S^k} = a^k$. Then we calculate the gradient of $S^k$ w.r.t. its corresponding predicted heatmap, which is clearly differentiable as the supplier is formulated from the predicted heatmap through $relu$ and normalization operations (as shown in Eq. 3 of our paper), and both $relu$ and normalization operations are differentiable. This means that the process is differentiable.\n\nMore details about how the gradients are passed backward are shown in our Supplementary. Kindly refer to Section C of our Supplementary for more details.\n\n\n>**Q2:** *\"According to Eq 5, the EMD is respectively calculated for each keypoint. How long does it take to solve the EMD for one keypoint? And it seems that the calculation of EMD also cannot be paralleled among a mini-batch. Will the proposed method largely extend the training time?\"*\n\n**A2:** (1) In the total training time of a mini-batch of data, solving the EMD takes roughly **33%** of the time. Also note that the calculations of EMD is only conducted during training, and thus the inference time with and without our proposed method are almost the same. \n\n(2) As the calculations of EMD of each keypoint are independent to each other, these calculations can be paralleled among a mini-batch, and thus are efficient. \n\n(3) Below we show the training time of the original HRNet-W48 [26] without the calculation of EMD and the proposed method (HRNet-W48 + Ours) with the calculation of EMD on COCO dataset on RTX 3090 GPU. For fair comparison, in the experiment below, we train both the original HRNet-W48 [26] and the proposed method from scratch. 
\n\n| Method | Training Time | Inference Time | Performance(AP) |\n|---|---|---|---|\n| **HRNet-W48 [26]** | 2 days | 0.02 second per sample | 77.1 |\n| **HRNet-W48 + Ours** | 3 days | 0.02 second per sample | 78.8 |\n\nAs can be seen in the table above, though the proposed method achieves obviously better performance, it only brings relatively small increase of the training time. Besides, as our proposed method does not need to calculate EMD during inference, the inference time with and without our proposed method are almost the same. ",
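For readers unfamiliar with the algorithm discussed in **A1**/**A2**, below is a minimal NumPy sketch of entropy-regularized OT via Sinkhorn iterations. It is a generic textbook version, not the authors' implementation; the regularization strength, iteration count, and variable names are assumptions, and practical code would use log-domain updates for numerical stability.

```python
import numpy as np

def sinkhorn_cost(s, d, C, eps=0.05, n_iters=1000):
    """Entropy-regularized OT between supplier s (n,) and demander d (m,),
    both summing to 1, with cost matrix C of shape (n, m). Returns <T, C>."""
    K = np.exp(-C / eps)              # Gibbs kernel
    u = np.ones_like(s)
    for _ in range(n_iters):          # alternate marginal projections
        v = d / (K.T @ u)
        u = s / (K @ v)
    T = u[:, None] * K * v[None, :]   # transport plan
    return float((T * C).sum())
```

At convergence, $\varepsilon \log u$ corresponds (up to an additive constant) to the dual potential that the reply denotes $a^k$, i.e., the gradient of the regularized cost with respect to the supplier marginal.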
" This paper presents a method to estimate 2d locations of joints for human pose estimation task. Authors present a technique loss function to reduce the trade off between dot notation and heatwap notation to estimate the joint locations. For this purpose authors propose using earth mover's distance as the loss function. The paper proposes the construction of supplier and demander values as well as the cost function associated. The final minimization is obtained by using the sink horn's iteration method. Authors also present an analysis of why minimize the Gaussian heatmap with l2 loss is not optimal. Experiments on mpii and coco datasets are provided Strengths \n1. The paper is well rewritten and easy to understand with almost no typos \n2. Although not very complicated paper the paper present an easy way to improve the quality of joint localization\n3. The paper presents an analysis demonstrating the limitation of the methods.\n\nWeakness\n1. Several other paper have shown the limitation of heatmap regression and proposed solution to tackle those limitation. For ex, integral pose regression shows that it achieves similar numbers on mpii dataset, which the authors have not included in the table, however the paper does show companion with pose regression for coco dataset. Similarly (7) also shows similar performance on mpii dataset.\n\n2. It will be interesting to observe how does this method compare to a heatmap regression based method where a multistep straining is applied where with each step of training the standard deviation is decreased\n\n3. It will be good to see the failure cases of this method.\n\n4. were the networks in the experiment initialized from already available pretrained weights? If that in the case it will be interesting to observe how the networks train when trained from scratch. Do the networks converge withEMD when trained from scratch. \n\n Point (2) from the weakness section. If the objective is to decrease the area of localization it is important to observe a naive approach first.\nSeveral, paper have shown that to work for the cash of facial landmark localization.\n\nPoint (3) please provide failure cases in the experiment section. The paper does not talk about any limitations of the proposed approach\n.\n\nPoint (4) Earth movers distance can be difficult to train from scratch. This limits the paper to only be applied to already existing pre trained networks. The authors have not talked about any limitation of the paper. Conceptual or societal\n\nSuggestions are already included is the previous sections",
" Pose estimation performance may not be consistently improved when optimizing the heatmap prediction by minimizing the mean-squared error(MSE). The paper proposes to utilize the Sinkhorn Distance as the optimizing target. ### Strengths\n\n1. The paper points out and analyzes the misalignment between the training loss and the final performance.\n1. Sinkhorn Distance is proposed to replace the MSE loss to achieve a more consistent performance improvement during training.\n1. The proposed method is reported to have notable performance improvements on COCO and MPII.\n\n### Weaknesses\n\n#### Novelty & Contribution\n\n1. Formulating the heatmap optimization as a distribution matching problem is not a novel perspective. Previous methods represent the GT coordinates in either Categorial or Gaussian distribution and optimize the predictions towards the GT distribution by minimizing MSE.\n1. Proposing to replace MSE with Sinkhorn Distance does not have enough contribution.\n\n#### Presentation\n\n1. There are inconsistent presentations. In my perspective, the limitation to address lies in the pixel-wise loss (e.g. MSE), which is claimed in the Introduction(line45-52).\n + However, in Abstract(line4-7), it's claimed that using the Gaussian-smoothed heatmap as the optimization objective is the limitation of previous methods\n + The paper's target should not be 'bypassing the step of constructing the GT heatmap', but to alleviate or solve the limitation of loss. \n + Missing analysis on why the proposed method can alleviate or solve the limitation of loss. \n2. It's misleading to claim bypassing the step of constructing the GT heatmap. The demander $D^k$ is just another type of GT heatmap and still needs the construction step. The naïve demander formulation is the same as the dot-annotated heatmap.\n\n#### Experiments\n\n1. Missing important reports and comparisons on the training time between the baseline and the proposed method. As the Sinkhorn Distance takes 1000 iterations and each iteration has calculations on $(H_{hm} * W_{hm}) \\times (H_{hm} * W_{hm})$ dim matrixes, it's presumed to be time-consuming. \n2. Missing comparisons between the MSE loss and the proposed loss with the same type of target heatmap Please refer to Weaknesses None",
" 1. The paper analyzes that the commonly used minimizing pixel-wise loss cannot guarantee that the performance of pose estimation increase.\n2. The paper proposes using Earth Mover’s Distance to match the distributions between GT dot heatmap and predicted heatmap. The experiment shows that this loss can improve the performance by 0.9-1.8% AP.\n **Strengths:**\n\n1. The heatmap is a basic and important tool in human pose estimation. The paper analyzes the problem of commonly used pixel-wise heatmap loss, and formulate the optimization of the heatmap prediction as a distribution matching problem by Earth Mover’s Distance. It is novel and technically sound.\n\n2. It seems that the proposed loss is model-agnostic that can be applied in most pose estimator and improve the performance.\n\n**Weaknesses:**\n\nThe paper lacks the complexity analysis of the proposed method, such as the training and inference time.\n 1. Why baseline dot-annotated heatmap (MSE loss) doesn't work, but proposed method does. Is it because the MSE loss (gradient) is too sparse? What role does Heatmap normalization play here? The authors may be able to provide more thorough analysis on this problem.\n\n2. The authors can compare in more detail with UDP and Darkpose, which fix some issues in native Gaussian Heatmap.\n\n3. Compared with baseline, does the running time increase? This helps to assess the practicality of the method.\n This paper does not provide limitations and potential negative societal impact. Please refer to the Questions section on limitations.",
" This paper proposes using EMD to measure the difference between the GT and the predicted heatmaps. Compared with L2 distance, EMD avoids the choice of standard deviation and is more consistent with the final objective of pose estimation. The proposed loss function can largely improve the performance of pose-estimation methods. Strengths:\nThe paper is well written and easy to follow. The proposed method is simple yet can largely improve the performance of pose-estimation methods.\n\nQuestions:\n1. According to Line144, the authors seem to use Sinkhorn algorithm to solve the EMD. Is this process differentiable? Could you please supplement more details about how the gradients are passed backward? \n\n2. According to Eq 5, the EMD is respectively calculated for each keypoint. How long does it take to solve the EMD for one keypoint? And it seems that the calculation of EMD also cannot be paralleled among a mini-batch. Will the proposed method largely extend the training time?\n\n3. It seems that EMD can also be applied to the heatmaps with multiple keypoints, which means that the proposed functions can also be applied for muli-human pose estimation. Have the author tried this? See the questions above. The idea is elegant and effective. My only concern is about the training time. However, given its marvelous performance, a longer training time is also acceptable to some degree. "
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
3,
5
] | [
"FcHuQxkWmez",
"aYyHJXk_9eQ",
"XbXZYU3OSjd",
"nips_2022_7-bMGPCQCm7",
"wayHXr2hHfl",
"swkU4QS2u1F",
"IREZ5n0MYv6",
"GoNJU3j8VKZ",
"VDl_iC17e32",
"K4x5AO3vbe",
"kKYD44TMxJJ",
"USbh0kO0TaX",
"bCcXNkALcem",
"nips_2022_7-bMGPCQCm7",
"nips_2022_7-bMGPCQCm7",
"nips_2022_7-bMGPCQCm7",
"nips_2022_7-bMGPCQCm7"
] |
nips_2022_q-FRENiEP_d | SageMix: Saliency-Guided Mixup for Point Clouds | Data augmentation is key to improving the generalization ability of deep learning models. Mixup is a simple and widely-used data augmentation technique that has proven effective in alleviating the problems of overfitting and data scarcity. Also, recent studies of saliency-aware Mixup in the image domain show that preserving discriminative parts is beneficial to improving the generalization performance. However, these Mixup-based data augmentations are underexplored in 3D vision, especially in point clouds. In this paper, we propose SageMix, a saliency-guided Mixup for point clouds to preserve salient local structures. Specifically, we extract salient regions from two point clouds and smoothly combine them into one continuous shape. With a simple sequential sampling by re-weighted saliency scores, SageMix preserves the local structure of salient regions. Extensive experiments demonstrate that the proposed method consistently outperforms existing Mixup methods in various benchmark point cloud datasets. With PointNet++, our method achieves an accuracy gain of 2.6% and 4.0% over standard training in ModelNet40 and ScanObjectNN, respectively. In addition to generalization performance, SageMix improves robustness and uncertainty calibration. Moreover, when adopting our method to various tasks including part segmentation and standard image classification, our method achieves competitive performance. Code is available at https://github.com/mlvlab/SageMix. | Accept |
This paper studies point cloud data mixup with saliency guidance. The proposed SageMix focuses on the mixup over local regions to preserve salient structures, which are more informative for downstream tasks. The whole paper is well organized with clear logic to follow. The proposed method is simple but effective. Moreover, there are solid experiments in various tasks, including object classification, part segmentation and calibration, to comprehensively evaluate the proposed method. One of the major concerns is the limited improvements over the standard mixup (Reviewer VLSt) on PointNet++. And the discussion of 2D and 3D mixup can be enriched in the aspects of technical challenges and novelties (Reviewer YgrL). This paper includes five different tasks and four benchmarks in its experimental studies, which strongly address the third major concern, the limited evaluation, raised by Reviewer YgrL, who, however, has not provided any feedback after the authors' rebuttal. Considering the overall contributions in methods and the solid evaluation, this submission is slightly above the bar of acceptance. | train | [
"jIJ3IURrn1U",
"0bdOVAo127g",
"c4LiKLigG9m",
"Exdbx6DAxTE",
"5Ep2H6EFOtc",
"pFVlkVYQB3s",
"EpNXkB2hRsf",
"t8jOQt-9ges",
"7-5xyNQO2Ie",
"t0G2R-HiXqL"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer yU1N, we appreciate the reviewer for constructive feedback and comments.\n\nThe end of the Author-Reviewer Discussion is close. Through rebuttal, we have addressed all your concerns, and we believe that our responses have answered your suggestions and questions. So, would it be possible to check our responses and let us know if you have any concerns or questions unresolved?\n\nOnce again, we appreciate your efforts in reviewing our paper.\n\nSincerely, Authors",
" Dear Reviewer YgrL, we appreciate the reviewer for constructive feedback and comments.\n\nThe end of the Author-Reviewer Discussion is close. Through rebuttal, we have addressed all your concerns, and we believe that our responses have answered your suggestions and questions. So, would it be possible to check our responses and let us know if you have any concerns or questions unresolved? \n\nOnce again, we appreciate your efforts in reviewing our paper.\n\nSincerely, Authors",
" Dear Reviewer VLSt, we appreciate the reviewer for constructive feedback and comments.\n\nThe end of the Author-Reviewer Discussion is close. Through rebuttal, we have addressed all your concerns, and we believe that our responses have answered your suggestions and questions. So, would it be possible to check our responses and let us know if you have any concerns or questions unresolved? \n\nOnce again, we appreciate your efforts in reviewing our paper.\n\nSincerely, Authors",
" We appreciate the Reviewer yU1N for strong support to SageMix and detailed comments. We will address all of the concerns raised and incorporate them into the final version.\n\n---\n\n**Comment 1:** One potential improvement to the method could be rotating and translating the whole point clouds to move the two query points far away from each other.\n\n**Answer:** Great question! Indeed, we have considered exactly the same technique as your suggestion. The rotation-translation method in most cases successfully preserves salient parts but we observed that it fails in some corner cases. We provide the visualization of **failure cases of the rotation-translation approach in Section C.2 of the revised appendix**. For instance, when the query points are located at the center of objects, then with any rotation and translation the augmented samples will lose the salient local structure. In SageMix, since the weight $w^t_i$ in **Equation (6)** for Mixup is computed based on the distance from each point $p^t_i$ to the query point $q^t$ in point cloud $\\mathcal{P}^t$, rotating or translating a point cloud including the query points does not affect the weights (See the column $w^\\alpha_i, w^\\beta_i$ in Figure 5 of the appendix). Further, despite the various cases of rotation and translation, the points around a query point still correspond to the salient part in another point cloud (See the column $\\phi$). As a result, the local structure of the salient parts is distorted in the augmented sample $\\tilde{P}$. \n\nOur preliminary experiment shows that our saliency-guided sequential sampling is more suitable for SageMix. But, except for these extreme cases, Reviewer yU1N’s suggestion is also effective.\n\n**Equation (6)** : $w^t_i = K_\\sigma(p^t_i, q^t) = \\text{exp}\\left(-\\frac{\\|p^t_i-q^t\\|^2}{2\\sigma^2} \\right)$, where $t \\in \\{\\alpha, \\beta\\}$ \n\n---\n\n**Comment 2:** Why don’t the authors present all robustness tests presented in RSMix. e.g. Rotation 90°, Rotation Y, Scale 1.4.\n\n**Answer**: Thanks for the detailed feedback on experimental settings. As suggested, we provide the additional experimental results below. As shown in the table, SageMix still achieves the best robustness on every corruption presented in RSMix.\n\n| Method | X-axis 90° | Y-axis 90° | Z-axis 90° | Y-axis 180° | scale 1.4 |\n| --- | --- | --- | --- | --- | --- |\n| Base | 11.3 | 86.0 | 13.3 | 86.1 | 82.1 |\n| + PointMixup | 12.4 | 86.2 | 13.9 | 86.2 | 82.2 |\n| + RSMix | 13.2 | 86.3 | 14.2 | 86.1 | 83.0 |\n| + **SageMix** | **15.2** | **87.1** | **14.7** | **87.2** | **84.7** |\n\n---\n\n**Comment 3:** How does $\\sigma$ affect the performance of SageMix?\n\n**Answer**: Here, we share the quantitative analysis of the bandwidth $\\sigma$ with DGCNN and OBJ_ONLY. We observed that SageMix with a wide range of bandwidth $\\sigma$ (0.1 to 2.0) consistently outperforms previous Mixup methods (e.g., 86.9%, 86.6% for PointMixup, RSMix). We will provide this analysis in the final version.\n\n| $\\sigma$ | 0.1 | 0.3 | 0.5 | 1.0 | 2.0 |\n| --- | --- | --- | --- | --- | --- |\n| OA | 87.2 | 88.0 | 87.6 | 87.3 | 87.6 |\n\n---\n\n**Comment 4:** (Minor issue) Missing reference of a paper on arxiv (PointCutMix).\n\n**Answer**: PointCuxMix is non-peer-reviewed, so we did not include it in our submission. However, we agree with Reviewer yU1N, and we will include it in the final version.",
" **We appreciate the Reviewer YgrL for the acknowledgment of the novelty of our saliency-guided sequential sampling. We will address all of the concerns raised and incorporate them into the final version.**\n\n--- \n\n**Comment 1:** The difference between 2D and 3D mixup-based methods is not insightfully analyzed in the introduction. Why 3D mixup is more challenging than 2D mixup? What is the particular difficulty of extending Mixup from 2D to 3D?\n\n**Answer:** We apologize for the brief explanation of the difference between 2D and 3D Mixup. As we mentioned in line 29-30 of the main paper, one of the main differences in point clouds is that the data has an **unordered and non-grid structure**. 2D images on a regular grid space naturally come with one-to-one correspondence between pixels. So, 2D mixup can be defined by a simple interpolation of pixel values. However, in the point cloud domain, there is **no one-to-one correspondence between two point clouds**. In addition, unlike 2D images with RGB values at each pixel, **3D point clouds have no feature** at each point except for the coordinates. Hence, even if the correspondence between two point clouds is estimated, naive interpolation of the coordinates (e.g., PointMixup) will destroy important local structures of the original point clouds. For these reasons, it is challenging to devise effective 3D Mixup for point clouds by simply adopting 2D Mixup methods.\n\n---\n\n**Comment 2:** **The shape-preserving continuous mixup component just follows the mainstream** method and thus the novelty is limited.\n\n**Answer:** We are glad that Reviewer YgrL agrees on the importance of shape/continuity preserving in 3D Mixup. But we want to point out that **developing a method in a right direction does NOT limit the novelty**. More importantly, no previous method in 3D Mixup achieved both shape-preservation and continuity of augmented samples. For instance, PointMixup generates continuous samples but it causes a huge distortion of local structures. On the other hand, RSMix, which is an adoption of CutMix in 3D, preserves the local structure of an extracted region but the resulting samples are discontinuous. To the best of our knowledge, **SageMix is the first work, in the point cloud domain, that combines two point clouds into one continuous shape while preserving the local structure.** Further, as Reviewer YgrL mentioned, based on **novel saliency-guided sequential sampling**, SageMix preserves the shape of the salient region. Our ablation study shows that it further improves generalization ability. Note that no previous 3D Mixup utilized saliency information. Considering these aspects, our contributions are significant.\n\n---\n\n**Comment 3-1:** The experimental results are not extensive. I suggest the authors make a more solid evaluation for the proposed method.\n\n**Answer:** In the main paper, **we provided experimental results on 4 benchmark datasets (ModelNet40, ScanObjectNN, ShapeNetPart, and CIFAR-100) in 5 different tasks**: 2D/3D classification, part segmentation, uncertainty calibration, and robustness evaluation. Since 3D Mixup has been relatively less explored, only recent works PointMixup (2020) and RSMix (2021) are included as our baseline methods. Following PointMixup and RSMix, we provided experimental results only on ModelNet40. To provide more solid evaluation, we additionally conducted experiments in classification with two splits (OBJ_ONLY, and PB_T50_RS) of ScanObjectNN. 
In addition, we reported uncertainty calibration errors that are not studied by previous 3D Mixup methods and we evaluate the effectiveness of SageMix in part segmentation using ShapeNetPart. Finally, SageMix was applied to 2D images (CIFAR-100) with minor modifications. If any specific experiments are needed, please let us know. We’re willing to provide more experimental results.\n\n**Comment 3-2:** The proposed method is only quantitatively compared with PointMixup and RSMix. And there is also a lack of qualitative results.\n\n**Answer:** In the main paper, we provide various qualitative results in the main paper/supplement. Figure 1 visually compares our method with baselines: PointMixup and RSMix. In addition, we provided sensitivity analysis of hyperparameters (prior factor $\\pi$ and bandwidth $\\sigma$) in our method in Figure 3. Augmented samples by our method are presented in Figure 1-3 of the appendix. Also, considering Reviewer YgrL’s suggestion, we include more qualitative results to compare augmented samples by ours and baselines (**Figure 4 of the revised appendix**).",
" **Comment 4:** The computation of saliency is similar to the popular CAM approach. What is the **impact** of using different methods for saliency computation?\n\n**Answer:** Great question! There are several ways to calculate a saliency map in the 2D image domain [1-5]. However, the methods for 2D images cannot be directly applicable to the point cloud domain due to the lack of background, especially in single-object point cloud data. Most methods in the 2D image domain focus on detecting salient \"foreground objects\" whereas in our setting we need to detect salient parts of a single object. In addition, [3-5] utilize the additional network to detect salient regions in the supervised setting with the ground truth of foreground objects. Since no ground truth label for saliency detection is available and we wanted to minimize computational overhead for saliency detection, we simply used the norm of gradient as PuzzleMix [6] and CoMixup [7]. Also, in a preliminary experiment, we observed that SageMix achieves slightly higher performance with this simple gradient-based saliency map than PointCloud Saliency Maps [8].\n\n[1] Li et al. “Robust Saliency Detection via Regularized Random Walks Ranking” Proceedings of the IEEE conference on computer vision and pattern recognition, 2015\n\n[2] Zhu et al. “Saliency Optimization from Robust Background Detection”, CVPR 2014\n\n[3] Deng et al. “3Net: Recurrent Residual Refinement Network for Saliency Detection”, IJCAI 2018\n\n[4] Liu et al. “PiCANet: Learning Pixel-wise Contextual Attention for Saliency Detection”, CVPR 2018\n\n[5] Qin et al. “BASNet: Boundary-Aware Salient Object Detection”, CVPR 2019\n\n[6] Kim et al. \"Puzzle mix: Exploiting saliency and local statistics for optimal mixup\", ICML 2020\n\n[7] Kim et al. \"Co-Mixup: Saliency Guided Joint Mixup with Supermodular Diversity\", ICLR 2021\n\n[8] Zheng et al. “PointCloud Saliency Maps”, ICCV 2019",
" **We appreciate the Reviewer VLSt for supportive comments and constructive feedback on our work. We will address all of the concerns raised and incorporate them into the final version.**\n\n---\n\n**Comment 1-1**: A critical weakness is that experimental findings do not appear to be very significant. The performance difference between various techniques is not very large.\n\n**Answer:** Compared to the state-of-the-art Mixup methods, the improvement by SageMix are promising. Our SageMix with PointNet++ achieved **2.6%, 1.7%, 4.0%** improvements over a standard training in ModelNet40, OBJ_ONLY, and PB_T50_RS, respectively. The performance gap over the second-best techniques are **1.0%, 0.6%, and 2.6%**, which is significant. We also observed similar improvements with DGCNN by **1.1%(OBJ_ONLY), 1.1%(PB_T50_RS)** over previous SOTA methods. Lastly, the performance gain with PointNet seems relatively small (e.g., +0.4%(ModelNet40), +0.1%(OBJ_ONLY), +0.4%(PB_T50_RS)) compared to the second best techniques but we believe that this is mainly due to **the limited capacity of PointNet**, which is a nascent model for point clouds only with MLPs. Our experimental results evidence that our method significantly boosts performance as long as the model has sufficient capacity.\n\n\n**Comment 1-2:** It would be nice if the paper could provide some mean +- std measures.\n\n**Answer:** Great Point! As Reviewer VLSt mentioned, performance oscillation is an important issue in the point cloud benchmarks. However, for a fair comparison with the numbers reported in PointMixup and RSMix, we followed the prevalent evaluation metric in point clouds, which reports the best validation accuracy. Apart from this, we here provide the additional results with five runs on OBJ_ONLY. The mean and standard deviation are presented in the table below.\n\n| Method | PointNet | PointNet++ | DGCNN |\n| --- | --- | --- | --- |\n| Base | 78.56±0.51 | 86.14±0.39 | 85.72±0.44 |\n| +PointMixup | 78.88±0.28 | 87.50±0.26 | 86.26±0.34 |\n| +RSMix | 77.6±0.56 | 87.30±0.65 | 85.88±0.59 |\n| +**SageMix** | **79.14±0.30** | **88.42±0.26** | **87.32±0.53** |\n\nIt is worth noting that SageMix consistently achieves the best performance with **significant improvements over the second-best methods**. These improvements prove the effectiveness of SageMix. We will provide this result in the appendix.\n\n---\n\n**Comments 2:** It would be nice to see experiments similar to Table 2 for segmentation as well.\n\n**Answer:** Thanks for Reviewer VLSt’s constructive suggestions. Part segmentation is one of the major tasks in point cloud processing. However, PointMixup and RSMix did not demonstrate their methods in part segmentation and no number was reported for ShapeNetPart. So, we reported the performance of our method only in the main paper. In addition, we provided the detailed results of part segmentation with and without SageMix in Table 3 of the appendix. As suggested, we compare our method with PointMixup and RSMix for part segmentation. We used the official code by the authors with minor modifications for generating point-wise ground truth. The results are summarized in the table below.\n\n| Method | DGCNN | PointNet++ |\n| --- | --- | --- |\n| Base | 85.1 | 85.1 |\n| +PointMixup | 85.3 | 85.5 |\n| +RSMix | 85.2 | 85.4 |\n| +**SageMix** | **85.4** | **85.7** |\n\nNote that although the gain seems small, SageMix outperforms previous Mixup methods. 
Also, considering the already saturated performance of ShapeNetPart, we believe that the improvement (+0.3%, +0.6% in DGCNN, PointNet++) over the base model is not trivial. We will reflect this table in the final version as well.",
" The paper proposes SageMix, a data augmentation technique for point clouds. Similar to the Mixup family of data augmentations, SageMix mixes two point clouds. It tries to mix point clouds in a saliency-guided way to preserve the salient local structures. Experiments have been conducted to show the efficacy of the method. Strengths:\n\n- The paper is well-written and easy to follow.\n- It is nice that this data augmentation can lead to more robust networks as shown in Table 3. \n\nWeakness:\n\n- A critical weakness is that experimental findings do not appear to be very significant. The performance difference between various techniques is not very large and no error margin has been reported (Table 2). It would be nice if the paper could provide some mean +- std measures. This could be done by running the same experiments multiple times (with random initialization) and reporting the mean and variance. This is particularly important as point-based benchmark methods can have significant variations across runs.\n\n- Most experiments in the paper are limited to point cloud classification. The experiment on part-segmentation has not been described in detail. It would be nice to see experiments similar to Table 2 for segmentation as well. This would help in showing that the technique can be used beyond classification. Refer to the weakness section for questions. Overall, I am ambivalent about the paper. The method could be useful but I am unable to conclude from the experiments because of reasons mentioned in the weakness sections. Hence, I suggest a borderline rating for now. I will update the score based on the rebuttal. NA",
" This paper presents a method for the data augmentation of 3D point clouds. The proposed Saliency-Guided Mixup for point clouds (SageMix) preserves discriminative local structures and generates continuous samples with smoothly varying mixing ratios. Here saliency is computed based on the impact to the corresponding task, measured through the gradients of the loss. Experimental results show that SageMix brings consistent and significant improvements over state-of-the-art Mixup methods. Strengths:\n1. The saliency-guided sequential sampling is technically novel.\n2. There are some ablation studies to demonstrate the effect of the proposed method.\n3. Overall, the paper is well organised.\n\nWeaknesses:\n1. The difference between 2D and 3D mixup-based methods is not insightfully analysed in the introduction.\n2. The shape-preserving continuous mixup component just follows the mainstream method and thus the novelty is limited.\n3. The experimental results is not extensive. The proposed method is only quantitatively compared with PointMixup and RSMix. And there is also a lack of qualitative results. 1. Why 3D mixup is more challenging than 2D mixup? What is the particular difficulty of extending mixup from 2D to 3D?\n2. The computation of saliency is similar to the popular CAM approach. What is the impact of using different methods for saliency compuation?\n3. I suggest the authors make a more solid evaluation for the proposed method. n/a",
" This paper proposes a novel saliency guided mixup method for point clouds. It first utilizes saliency to find a query point for each of the two point clouds. Then it uses an RBF kernel (around the query point) to calculate the blending weights for each of the points. This method generalizes PointMixup [2], and shows superior performance against existing ones. Strengths\n- The idea of introducing saliency guidance to the mixup method in point clouds sounds reasonable, as it has been proven to be useful in image-based methods [11] [12] [26].\n- The design of sequentially sample two remotely located query (salient) points is delicate. It avoids the overlapping problem while preserving the important local structure of the two point clouds.\n- The experiments seem sufficient.\n\nWeaknesses\n- I didn’t see any major weaknesses in this paper. One potential improvement to the method could be rotating and translating the whole point clouds to move the two query points far away from each other. In my opinion, it’s a more elegant way. Nevertheless, I think it’s fine to leave it as a future work. - About the robustness experiments in Table3. Why don’t the authors present all tests presented in Table & in RSMix [15] ? e.g. Rotation 90, Rotation Y, Scale 1.4 are missing.\n- The authors mentioned their varying performance with different $\\sigma$ in Appendix D.2. However, they didn’t quantitatively experiment the performance except for a qualitative one in Figure 3. How does $\\sigma$ affect the performance of SageMix?\n- Another minor issue is missing reference of a paper on arxiv (non peer reviewed, but with decent citations). PointCutMix: Regularization Strategy for Point Cloud Classification. In ArXiv. https://arxiv.org/abs/2101.01461. N/A"
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"Exdbx6DAxTE",
"pFVlkVYQB3s",
"EpNXkB2hRsf",
"t0G2R-HiXqL",
"7-5xyNQO2Ie",
"7-5xyNQO2Ie",
"t8jOQt-9ges",
"nips_2022_q-FRENiEP_d",
"nips_2022_q-FRENiEP_d",
"nips_2022_q-FRENiEP_d"
] |
nips_2022_yQDC5ZcqX6l | Efficient and Effective Optimal Transport-Based Biclustering | Bipartite graphs can be used to model a wide variety of dyadic information such as user-rating, document-term, and gene-disorder pairs. Biclustering is an extension of clustering to the underlying bipartite graph induced from this kind of data. In this paper, we leverage optimal transport (OT) which has gained momentum in the machine learning community to propose a novel and scalable biclustering model that generalizes several classical biclustering approaches. We perform extensive experimentation to show the validity of our approach compared to other OT biclustering algorithms along both dimensions of the dyadic datasets. | Accept | The reviewers discussed strengths and weaknesses of the paper. One potential issue (to which the authors' answer was rather unhelpful) was resolved by a reviewer running the experiments with higher precision output. Reviewers were mostly convinced by the strong empirical improvements.
| train | [
"4_PfQVKqzQ",
"yT6X_0u9nv",
"aQakfo1kWnj",
"ye6tQI0cFC7",
"hA796uTdIIf",
"FXEMdwmuz82",
"Bk5mX_8U7VO",
"n4CVQsoDHSg",
"Wa8it9NLGTe"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for their comments. Please revise the mentioned part in the manuscript and probably add some more details about the computational complexity (answer 9) in the manuscript. The sd =0 still look suspicious and need more clarifications.",
" We thank you for your response and the interest you show in bettering our paper!\n# Ground-truth data\nWe have performed additional experiments on the following synthetic datasets:\n\n||rows|cols|biclusters|bicluster sizes|Sparse|Structure|\n|-|:-:|:-:|:-:|:-:|:-:|:-:|\n|A|500|500|10|equal|Yes|Block diagonal|\n|B|800|1000|6|unequal|No|Block diagonal|\n|C|800|800|7|equal|No|Checkerboard|\n|D|2000|1200|4|unequal|No|Checkerboard|\n\nWe have generated A in a way as to make it similar to doc-term matrices (sparsity). B, C and D are made to be similar to gene-expression data (matrices containing biclusters with somewhat constant values). You can see the structure of these models in the following figure: https://imgur.com/a/VW21nLI\n\nThe results are averaged over 10 runs. The metric used is **(1-CCE)$\\times$100**\n| | A | B | C | D | \n|-|:-:|:-:|:-:|:-:|\nITCC|80.1±6.1|93.8±6.7|91.1±4.6|97.1±1.7|\nCCOT|54.4±3.5 | 70.0±.0 | 29.7±.4| 55.7±1.8 |\nCCOT-GW|99.1±.0|83.5±.0|83.4±.0|75.3±.0|\nCOOT|99.8±.0|78.8±2.0|99.8±.0|93.7±1.2|92.2±1.1|\nCOOTλ|39.9±2.4|84.9±4.6|28.2±.0|60.7±.0|39.5±1.9|\nBCOT | 99.8±.0|80.4±2.2|99.6±.1|91.3±.7|\nBCOTλ |**100±.0**|99.1±.4|**100±.0**|**100±.0**|\nBCOT (ground truth **r** and **c**)|same|99.9±.0|same|95.5±2.3|\nBCOTλ (ground truth **r** and **c**) |same|**100±.0**|same|99.2±.9|\n\nOur models have the best results on all four datasets (tie with COOT on A). We thank you as these tests additionally allow us to show the utility of the the row cluster distribution **r** and column cluster distribution **c**. The use of these ground truth distributions resulted in an increase of 19.5 and 4.2 points for BCOT on C and D; and an increase of .3 and decrease of .8 for BCOTλ.\n\n# Choice of datasets\nIn bioinformatics, the homogeneity of a bicluster or several biclusters depends on the sought after model and has to respect some constraints. For instance, this is illustrated in figure 1 of Madeira, S.C., & Oliveira, A.L. (2004). Biclustering algorithms for biological data analysis: a survey. TCBB, 1(1), 24-45. The discussed models and algorithms are devoted to the biclustering task. This is not the same problem searched in document/term clustering where we aim to reveal groups of documents characterized by groups of terms. Thereby, we are concerned with the co-clustering of sparse high dimensional data. When dealing with such data, seeking homogeneous blocks may not always be enough to produce useful, ready-to-use results. In fact due to data sparsity, several co-clusters may, for example, be primarily composed of zeros. Such co-clusters, while homogeneous, are not relevant and must be filtered out in the post-processing phase. In other words, it is for the user to select the most useful co-clusters so as to determine which document clusters should go with which term clusters, a task which is, however, not straightforward even with a reasonable number of document and term clusters. Approaches that take into account the sparsity characteristic exhibited by text data are, therefore, needed if co-clustering approaches are to be usable in realistic scenarios. This is the aim of our proposal for document and term clustering; it has the advantage of directly producing the most meaningful co-clusters. \n\n**We have also added two gene-expression data benchmarks, the CuMiDa breast cancer (Breast_GSE57297) and Leukemia datasets (Leukemia_GSE9476). 
BCOTλ has the best performance on both of them:**\n\n|||Breast Cancer|||Leukemia||\n|-|:-:|:-:|:-:|:-:|:-:|:-:|\n||ACC|NMI|ARI|ACC|NMI|ARI|\nITCC|68.5±11.9|24.2±16.8|16.5±15.4|64.7±7.4|61.9±7.0|39.8±7.6|\nCCOT||OOM||40.6±.0|.0±.0|.0±.0|\nCCOT-GW||OOM|||OOM||\nCOOT|63.1±5.2|5.4±8.7|-1.2±2.9|36.2±2.7|14.0±3.6|5.4±3.2|\nCOOTλ|61.5±.0|5.4±.0|2.2±.0|32.5±3.3|8.7±2.7|-.5±2.1|35.9±1.7|9.8±5.5|1.4±1.8|\nBCOT|76.9±.0|37.2±.0|26.7±.0|71.2±5.4|59.6±6.9|39.9±6.3|\nBCOTλ|**84.6±.0**|**48.3±.0**|**46.0±.0**|**80.9±3.8**|**70.9±4.1**|**55.3±3.3**|\n# Baselines\nHere is a table containing the training times of BCOT, BCOTλ and ITCC. One of our two models has the fastest training time on each of the four datasets (the shortest time is highlighted in bold). All models had the same number of iterations, and the results are averages of 10 runs.\nMethod|ACM|DBLP|Pubmed|Wiki|\n|-|:-:|:-:|:-:|:-:|\nITCC|1.53±.46|.88±.23|4.42±1.07|5.66±.98|\nBCOT|.93±.36|**.74±.25**|7.97±.72|6.01±.69|\nBCOTλ|**.64±.19**|4.56±.45|**2.98±.31**|**5.6±.74**|\n\nITCC requires only the number of row and column clusters as hyperparameters. Unlike with BCOT, for COOT, CCOT, and ITCC the number of row and column clusters is not necessarily the same. However, in the case of sparse data, for example, by seeking to reveal a block diagonal structure (biclustering with the same number of row and column clusters), BCOT filters out homogeneous but noisy blocks, making the results easier to analyze and interpret. Note that ITCC deals with nonnegative matrices only.\n\n**These experiments will be added to the supplementary material. We hope that we have addressed the reviewer's concerns!**",
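For reference, a toy generator in the spirit of dataset A above (block-diagonal, sparse) can be sketched as follows. This is a hypothetical illustration, not the generator used for the reported tables; all parameter values are assumptions. Checkerboard variants (datasets C and D) would instead assign a distinct constant to every (row cluster, column cluster) pair.

```python
import numpy as np

def block_diagonal_data(n_rows=500, n_cols=500, k=10, noise=0.5, sparsity=0.8, seed=0):
    """Toy block-diagonal biclustering data with ground-truth row/column labels."""
    rng = np.random.default_rng(seed)
    row_labels = rng.integers(0, k, size=n_rows)
    col_labels = rng.integers(0, k, size=n_cols)
    # entries inside a diagonal block carry signal, everything else is noise
    X = (row_labels[:, None] == col_labels[None, :]).astype(float)
    X += noise * rng.standard_normal((n_rows, n_cols))
    X[rng.random((n_rows, n_cols)) < sparsity] = 0.0   # sparsify, mimicking doc-term matrices
    return X, row_labels, col_labels
```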
" Dear Authors,\n\nthank you for your comments and clarifications!\n\nSome concerns and/or questions regarding the empirical methodology remain:\n\n* **Ground-truth data** (partially addressed in **4.1**): What is your reasoning to not simulate synthetic dyadic data suitable for BCOT and BCOT$\\_{\\lambda}$? In this way, you could compare BCOT and BCOT$\\_{\\lambda}$ to the other methods with respect to retrieving biclusters, which could strengthen your empirical evaluation.\n* **Choice of datasets** (addressed in **4.2**): As you mention in the introduction, the analysis of gene expression data is one of the main application areas of biclustering; many biclustering methods are developed in this context, and, arguably, the field of computational biology is also interesting for the broader ML community. Could you clarify what you mean with \"Our expertise in co-clustering showed that processing gene expression data is not the same as processing document-term data which are sparse; the underlying models are not the same\"?\n* **Baselines**: Thank you for adding the additional baselines for document clustering. It is interesting to see that ITCC is fairly competitive with BCOT and BCOT$\\_{\\lambda}$, how does it compare in runtime? Also, in your answer to Reviewer _c4gm_, you mention that you optimized the hyperparameters of your method according to \"rules of thumb and internal unsupervised metrics such as the Davies-Bouldin index\", but for the baseline methods, you used \"default values prescribed by the authors of each baseline\". Have you tried performing hyperparameter tuning for the baselines? It would be worth investigating whether ITCC is able to outperform BCOT and/or BCOT$\\_{\\lambda}$ with some hyperparameter optimization.",
" Thank you for your thorough review and for supporting our proposal. We appreciate the comments and suggestions that we are willing to address in the revised version. A first revised version is already available.\n\n# Addressing the perceived weaknesses\n\n1. Since we are in an unsupervised context, the ground truth labels are not available during training, so there is no concept of cross-validation or validation set. We tune the hyper-parameters according to rules of thumb and internal unsupervised metrics such as the Davies-Bouldin index which we used in our case.\n\n2. There were not many parameters to tune in the baseline models, we used the ground truth number of clusters for all baseline. For the rest of the hyper-parameters, we used the default values prescribed by the authors of each baseline as there were no heuristics on how to tune specific hyper-parameters.\n\n# Answers to questions\n\n1. They are different objective functions, BCOT$_\\lambda$ (8) is to the fuzzy block seriation problem (9) what BCOT (6) is to the block seriation problem (5). We introduced (9) due the fact that there was no prior concept of a fuzzy block seriation in literature (to the best of our knowledge).\n\n2. BCOT can be reduced to a distance, but only when a set of very restrictive conditions are respected, i.e., n=d=k, $\\mathbf{r}=\\mathbf{c}$ and $L(B)$ being a distance matrix. With this we recover the classical definition of the Wasserstein metric with BCOT and the sinkhorn distance with BCOT$_\\lambda$ and lose the clustering component of our proposition. Concerning applications on images, sorry the sentence was incomplete in the sense that this requires preprocessing as you have suggested.\n\n3. The choice of $L$ proposed in our contribution appears efficient and effective in terms of co-clustering. As we saw in section 3, other choices of $L$ leading to other co-clustering criteria and algorithms would be interesting to investigate, for instance, in an ensemble approach.\n\n4. We have proposed a simple rule of thumb to see if a chosen $L$ function is a good candidate by looking at the number of clusters retrieved (no empty clusters).\n\n5. When $B$ is dense then $\\Vert B \\Vert_0=nd$, we obtain the new computational complexity by replacing $\\Vert B \\Vert_0$ with $nd$ in **table 1**. The term $k\\Vert B \\Vert_0$ becomes $knd$ which is just the complexity of computing $L(B)^\\top Z$ and $L(B)W$ (given that we assumed that n=d for simplicity).\n\n6. The regularization function in BCOT$_\\lambda$ is the entropy function. We have not tried other regularizations. In (9), we use $\\Omega$ to propose a fuzzy variant of the block seriation problem.\n\n7.The only hyper-parameter we set for $k$-means is the number of clusters for which we assume that the ground truth number is given.\n",
" Thank you for your thorough review and for supporting our proposal. We appreciate the comments and suggestions that we are willing to address in the revised version. A first revised version is already available.\n\n\n# Addressing the perceived weaknesses\n1. BCOT, COOT and CCOT tackle the biclustering problem in very different manners. \n\nWith the COOT and CCOT variants, a co-clustering is proposed at the convergence ; co-clustering is then a consequence and not a main goal. However, BCOT aims at co-clustering and integrates this objective from the beginning and not to deduce it at the end of any process.\nUnlike the COOT and CCOT variants, with BCOT we keep the original data as input.\nWe propose a general formulation for the block seriation which reveals a link with low-rank optimal transport formulated as a minimization linear program easier to solve. The approach has the added possibility of choosing the distribution of elements over the row and column clusters (through setting the proportional size of clusters using r and c). \nFurther, unlike COOT and CCOT, BCOT implicitly performs dimensionality reduction dealing thus effectively with sparse and noisy high-dimensional data. At each iteration row clusters are updated based on low dimensional representation spanned by the column clusters $L(B)W$ and vice-versa by the row clusters $L(B)^T Z$ showing, thereby, the mutual reinforcement between row and column clusterings.\nFaced with real data, in the literature when we deal with co-clustering we unfortunately limit ourselves to an evaluation of a one-side clustering (document clustering for instance), with BCOT we show the consistency of biclusters by dealing with term clustering.\n\nThis explains the interest of our approach. \n\n2. $r$ and $c$ should be set to the desired proportion of the clusters. For example r=(0.3, 0.7) means that the first cluster should contain around 30% of the row elements while the second 70%. The same reasoning applies for $c$ wrt the column elements. As mentioned in the paper, when no such information is available, setting them to the uniform distribution is the most reasonable choice.\n\n3. Thank you for this remark, the established connections (section 3) lead us towards two interesting lines of comparison, the first by a simple comparison with original algorithms optimizing the criteria (10, 11, 12); we intend to add the results showing that BCOT outperforms them. The second being based on new versions of BCOT, by setting the function $L(B)$ accordingly, should be interesting to investigate.\n\n# Answers to questions\n\n1. Please see 1. in # Addressing the perceived weaknesses\n\n2,3,4. The three constraints (binarity, assignment, impossible triads) are implicitly included in (5) since $Z$ and $W$ are classification matrices. Matrix $C=ZW^\\top$ respects all three constraints as well as the rank constraint.\n\n5. We defined the discrepancy as a sort of reciprocal to connectivity meaning that the largest entries in the adjacency matrix should become the smallest entries. \n\n6. The dimensions of the distributions $\\mathbf{r}$ and $\\mathbf{c}$ are the same, they are both equal to $k$, the desired number of biclusters, it is their entries that are not necessarily the same.\n\n7. We will revise this in the final version.\n\n8. They are different objective functions, BCOT$_\\lambda$ (8) is to the fuzzy block seriation problem (9) what BCOT (6) is to the block seriation problem (5). 
We introduced (9) due to the fact that there was no prior concept of a fuzzy block seriation in the literature (to the best of our knowledge).\n\n9. This is due to the fact that the authors in [19] omitted the computational complexity associated with computing two pairwise distance matrices, one over the rows and one over the columns (as seen in **Algorithms 1 and 2** in [19]).\n\n10. If we know the distribution of the cluster sizes over the rows and columns, we can set $\mathbf{r}$ and $\mathbf{c}$; otherwise, it is better to use a uniform distribution. $k$ is the same as the number of biclusters, which is required as a hyper-parameter.\n\n11. We have increased the number of runs and the results are similar. The presence of sd=0 means that over several runs the algorithm leads to the same co-clustering.\n",
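The mutual reinforcement between row and column clusterings described in these answers (rows scored in the space spanned by $L(B)W$, columns via $L(B)^\top Z$) can be sketched as an alternating update. This is a schematic reading only: it ignores the $\mathbf{r}$/$\mathbf{c}$ marginal constraints and the actual OT solver, and whether one maximizes or minimizes the scores depends on the choice of $L$ (here larger entries are assumed to mean stronger association).

```python
import numpy as np

def alternating_biclustering_sketch(LB, k, n_iter=50, seed=0):
    """Alternate hard row/column assignments using the k-dimensional
    representations L(B) @ W and L(B).T @ Z (schematic, not the BCOT solver)."""
    rng = np.random.default_rng(seed)
    W = np.eye(k)[rng.integers(0, k, size=LB.shape[1])]  # random one-hot column assignments
    Z = None
    for _ in range(n_iter):
        Z = np.eye(k)[np.argmax(LB @ W, axis=1)]         # rows scored via column clusters
        W = np.eye(k)[np.argmax(LB.T @ Z, axis=1)]       # columns scored via row clusters
    return Z, W
```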
" Thank you for your review and appreciate the comments and suggestions that we are willing to address in the revised version\n\n#Originality\n\nWe respectfully disagree with the assessment that our work is heavily based on [19,21,26].\n- [19] identify biclusters by detecting jumps in the scaling vectors \\alpha and \\beta in\nthe solution of the entropic regularized OT in CCOT and the entropic Gromov-Wasserstein in CCOT-GW\n- [26] rely on Information-Theoretic CoClustering whereby a summary matrix of size $k\\times g$ which is as close as possible to the original data matrix wrt loss in mutual information is learned to perform biclustering. The same idea is applied in COOT except that they minimize the COOT metric instead\n- [21] is nothing more than a problem statement for biclustering as a maximization of integer programming \n\nWhile in COOT and CCOT, a co-clustering is proposed at the convergence of the criterion (co-clustering is a consequence and not the main goal), BCOT aims at co-clustering and integrates this objective from the beginning. This explains the interest of our approach. Thereby, we propose a general formulation for the block seriation which reveals a link with low-rank optimal transport formulated as a minimization linear program easier to solve. The approach has the added possibility of choosing the distribution of elements over the row and column clusters (through setting the proportional size of clusters using r and c). Further, unlike COOT and CCOT, BCOT implicitly performs dimensionality reduction dealing thus effectively with sparse and noisy high-dimensional data. At each iteration row clusters are updated based on low dimensional representation spanned by the column clusters $L(B)W$ and vice-versa by the row clusters $L(B)^T Z$ showing, thereby, the mutual reinforcement between row and column clusterings\n\n#Quality\nCCE is used for simulated data because of the availability of labels of row and column clusters. In our tests, only the row/document labels are present and so we propose to use accuracy for row clusters and PMI-based score to evaluate the column clusters’ coherence.\n#Clarity\nThe transition between the presentation of biclustering and the OT parts seems sudden\nbecause there is no inherent connection between the two and the connection becomes clear once we delve into the details of the proposed BCOT problem\n\n#Answers\n\nPoints 1-9 are summarized in 1-3\n1. The row and column exemplar distributions r and c can be seen as the distribution of the row and column clusters respectively. There is no inherent relation between them and row and column weights just like for the source and target distributions in optimal transport.\n2. A detailed discussion of the approximation is out of the scope of this paper which focussed mainly on the biclustering part; Z, W are not necessarily full-rank\n3. For the COOT variants the complexities are the same as the one reported in [26], we only adapted them to biclustering (d’=k,n’=k). Furthermore, the complexity they reported is only for the computation of $L(X,X^T)$, we thus added the additional complexity of the iterative part of their BCD algorithm. For CCOT, the authors omitted the complexity of initial computation of two pairwise distance matrices over the rows and the columns\n\n4. 4.1 The simulated data according to a Gaussian LBM [19,26] are small in size with few row and column classes ; there is no information on the parameters used to generate them. 
As we are interested in dyadic data (sparse or not) in which all values are positive, the GLBM is certainly not the model to recommend for this task\n\n4.2. We chose document-term matrices because 1) it is mainly on this type of matrix (high-dimensional and sparse) that co-clustering has most convinced the ML community of its interest compared to clustering; 2) the evaluation of new approaches is more straightforward due to the presence of document labels and the possibility of using the semantic coherence measure that we propose; 3) Our expertise in co-clustering showed that processing gene expression data is not the same as processing document-term data which are sparse; the underlying models are not the same\n \n4.3. We add two baselines, ITCC and ONMTF, which are the most suited to document-term matrices and, unlike the others, competitive. BCOT outperforms ITCC and ONMTF. Note that BCOT generalizes multiple biclustering models and so should be as effective as these models when using the corresponding L function (section 3). In [19,26], RBC, GLBM and DKM make the Gaussian assumption, and are therefore not appropriate for document-term data. Further, the use of ITCC and ONMTF in [19,26], requiring that all values are non-negative, is inappropriate.\n\n4.4. Please see 4.3.\n\n4.5. Please see the # Quality section.\n\n5.1. We cite [30] for possible applications of biclustering rather than clustering on dyadic data.\n\n5.2. Dyadic data appear in survey research, marketing, business intelligence, information retrieval, and recommender systems.",
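A back-of-the-envelope comparison behind the complexity remarks in answer 3 above (dense case, $B \in \mathbb{R}^{n \times d}$, $k \ll \min(n, d)$; constants ignored, our arithmetic rather than a quote from the paper):

```latex
\underbrace{O(n^2 d) + O(d^2 n)}_{\text{pairwise distances over rows and columns (CCOT preprocessing)}}
\quad \text{vs.} \quad
\underbrace{O(ndk)}_{L(B)^\top Z \ \text{and} \ L(B)W \ \text{per BCOT iteration}}
```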
" In the given paper, the authors propose a novel biclustering algorithm for dyadic data based on optimal transport (OT) and the block seriation problem. Their main contribution can be summarized as the formulation of the optimization problem BCOT, an indefinite bilinear program, whose solution can be used to obtain biclusters, i.e., a simultaneous clustering of rows and columns. In special cases, the solution of BCOT allows for the computation of an approximation of the optimal transport map with bounded rank for the discrete Kantorovich OT formulation. In addition to BCOT, the authors also introduce the fuzzy variant BCOT$_{\\lambda}$, which is an adaptation of BCOT with entropic regularization, and connect BCOT to existing biclustering algorithms. To empirically evaluate the proposed method, the authors perform experiments with six document-term datasets and compare their results with four other OT based biclustering approaches. **Strengths**\n\n_Originality_: The concept to combine the block seriation problem and optimal transport to obtain block diagonal biclusters for dyadic data is a novel idea.\n\n_Quality_: In the presented work, I did not see any major technical issues and the authors provide proofs for the propositions in the appendix.\n\n_Clarity_: The overall structuring of the paper is good, especially section 3 (_Connections to Existing Work_), as it helps to contextualize the proposed method with respect to related work. Concerning reproducibility, the authors publicly released their code. The application of a statistical test to quantify the significance of the experimental results is a nice addition.\n\n_Significance_: The presented results show that the proposed method is superior to existing OT based biclustering methods with respect to document-term clustering.\n\n**Weaknesses**\n\n_Originality_: Although the presented method is novel, it heavily relies on previous work (i.e., [19, 21, 26]) and provides comparably little conceptual originality. The contribution's main novelty seems to be that it works better for dyadic data compared to other OT based biclustering methods. As it is conceptually not that convincing, I would consider it more of an empirical paper, and for that it lacks sufficient experimental evaluation with SOTA biclustering methods for dyadic data. \n\n_Quality_: While there are no obvious major technical errors, there are some vague statements and (suspected) smaller errors in some of the formulas. Moreover, instead of using bicluster-specific metrics, such as e.g., the co-clustering error (CCE), the authors only assessed the resulting clusters using standard clustering metrics. Overall, I am also missing a clear motivation, other than the fact that optimal transport is a trending topic in the machine learning community.\n\n_Clarity_: One of my main concerns is that the paper (in the given version) is not sufficiently self-contained. It was difficult to understand the presented content without consistently referring to [19] and [26] and further referenced literature. Also, a clear train of thought is missing throughout the paper (e.g., in Section 2.2 we switch from inducing a biclustering based on the optimal transport block seriation problem to an approximation of the Kantorovich OT formulation without much transition), which is especially evident in the introduction. I would suggest starting with a more general motivation before going into detail about bipartite graphs as well as applications. 
While contextualizing the work in section 3 is helpful, this section is, again, not fully self-contained and comprehensible, and the related work needs to be consulted frequently. Concerning language and grammar, there is room for improvement. Language is oftentimes repetitive, somewhat colloquial, and not always precise.\n\n_Significance_: While BCOT performs much better at identifying document and term clusters in comparison to existing OT-based biclustering methods, the overall significance of these results is not (yet) clear to me. I have multiple questions/concerns about the methodology of the empirical evaluation, which include, among others, the choice of datasets, baselines, and metrics (please see **Questions 4.x** for more details).\n\nPlease see below for more detailed feedback.\n\n**Minor remarks & suggestions**\n\n* References:\n - Please consider citing the peer-reviewed version instead of the arXiv version in the references (e.g. reference [24])\n - Lines 30-31: Please consider adding references that support your statement (\"Optimal Transport (OT) took the machine learning community by storm and was used in the resolution of various data mining problems and biclustering was not an exception\")\n - Lines 42-43: Please consider adding references for the previous biclustering approaches, which your method generalizes\n - Lines 61-63: Please consider adding a reference for the Kantorovich formulation of OT\n - Line 225: Please consider adding a reference for the Davies-Bouldin index\n - Line 233: Please consider adding references for the employed clustering metrics\n\n* Structure: \n - Please consider restructuring your introduction, starting with a more general motivation and introduction to biclustering and OT. Also, pointers to sections would be much appreciated (e.g., \"In Section 2, we propose our method...\").\n - Lines 25-29: Some more high-level classification of biclustering algorithms might be helpful for the reader (e.g., there exist probabilistic approaches, etc.) in addition to enumerating applications.\n\n* Language/Grammar:\n - Please consider refraining from very long sentences (e.g., lines 32-35, 140-143)\n - Lines 103-104 is not a proper sentence\n - Line 145: mentionned -> mentioned\n - Line 170: \"proposed to fuzzy variant\" -> \"proposed a fuzzy variant\"\n - Line 171: traditionanal -> traditional\n - Line 191: in -> is\n - Line 194: \"algorithm available\" -> \"algorithm is available\"\n - Line 198: \"in a way that make\" -> \"in a way that makes\"\n - Line 223: $\lambda$ is not in subscript\n - Line 282: \"certain types dyadic data\" -> \"certain types of dyadic data\"\n \n* Examples for vague language:\n - Lines 43-44: \"We propose two efficient methods for solving this problem [...]\" -> What problem?\n - Line 98: \"Now let its [...]\" -> What is _it_?\n - Line 107: \"[...] we are interested in inducing a couple of _almost-hard clustering_ [...]\" -> What does _a couple_ refer to?\n - Lines 117-118: \"[...] this should not significantly change the structure of the solution [...]\"\n - Lines 130-131: \"[...] row exemplars (or representatives or centroids)\" -> Please consider using one of these terms consistently\n - Line 134: \"Biclustering is the main purpose of the approach we proposed [...]\" -> Could you be more precise what is meant by _the approach we proposed_\n - Line 200: \"[...] the computation of [...] is quite efficient [...]\"\n - Line 209: \"[...] our model should be faster in most cases [...]\"\n - Line 286: \"[...] 
the proposed approach does a good job of finding clusters [...]\"\n\n* (Suspected) minor errors:\n - Line 70: Is $\mathbf{K}=-\texttt{diag}(\mathbf{a})\exp(\mathbf{M}/\lambda)\texttt{diag}(\mathbf{b})$ correct, cf. [4], Lemma 2? Should it be a negative exponential, i.e., $\exp(-\mathbf{M}/\lambda)$?\n - Lines 78-79: \"The block seriation problem is an integer programming problem and is consequently NP-hard\" -> Please consider adding a reference which contains a proof for NP-hardness for the block seriation problem\n - Line 99: Is $\mathbf{Z}$ a typo? \n - Line 158: $\sum_{h=1}^r p(\mathbf{b}_i, \mathbf{b}'_j \in h)$ -> Should it be \"$=$\" instead of \"$\in$\"? 1. Could you please elaborate on the row exemplar and column exemplar distributions? How do they relate to the row and column weights? In [26] these are distributions from the second data matrix, but what is their purpose in your work?\n\n2. Could you give more details on the approximation of the optimal transport map (lines 136-139)? Also, what do we know about the rank of $Z$ and $W$?\n\n3. Table 1: How did you compute the time complexities for CCOT, CCOT-GW [19], COOT and COOT$_{\lambda}$ [26]? Why do they differ from the time complexities reported in [19] and [26], respectively? What restricted class of cost functions is considered?\n\n4. Empirical evaluation\n\n 4.1. Have you considered using simulated data to have ground truth biclusters, which you can use to assess your method? This is quite common for the evaluation of novel biclustering methods, and is also done in [19] and [26]. If it is due to the page limit, you could also consider including this in the appendix or supplementary material.\n \n 4.2. What is the reasoning for the choice of datasets? Why mainly document-term matrices and not e.g., gene expression data?\n \n 4.3. It would be beneficial to include the majority of baselines also used in [19] and [26]: ITCC, Double K-Means, Orthogonal Nonnegative Matrix Tri-Factorization (ONMTF), the Gaussian Latent Block Models (GLBM) and Residual Bayesian Co-Clustering (RBC).\n \n 4.4. Why did you choose to compare your method, which is specifically designed to work well for dyadic data, with methods which are not? Have you considered including common biclustering methods for dyadic data as baselines?\n \n 4.5. Why did you look at document and term clusters separately, and not evaluate your obtained biclusters using, e.g., the CCE (see point 4.4)? If you lack ground truth data, please consider point 4.1. It might be beneficial to compare your method with CCOT, CCOT-GW [19], COOT, COOT$_{\lambda}$ [26] with respect to the CCE as well.\n\n5. Applications\n\n 5.1. Reference [30]: What applications of biclustering do the authors discuss in this book? Could you please give a pointer to specific pages?\n\n 5.2. As the main novelty in this contribution seems to be the focus on dyadic datasets, could you give an outlook on other areas of application (in addition to document-term matrices) and elaborate on the significance of your contributions? The authors addressed the limitations of their work pertaining to the type of data suitable for their proposed method.",
" This paper proposes a generic framework for biclustering using optimal transport. Two methods are developed in this framework and usually result in an almost hard biclustering and a fuzzy biclustering accordingly. The computational efficiency and accuracy are validated through six benchmark datasets. originality\nStrength\nThis paper leverages the low-rank optimal transport to solve the biclustering problem for the dyadic data.\n\nquality\nThe paper is generally well written in grammar. However, the technical discussion lacks clarity in some respects. Such as Page 1 line 27, the “summary matrix” is mentioned yet without further explanation. \n\nclarity\nWeaknesses\n(1)\tIt is unclear what’s the major advantage makes the proposed methods perform better than the others in the experiments. \n(2)\tA lot of details of the experiment seem to be missing. Such as, how are the parameters r and c selected. There are lots of existing works introduced in section 3 seems can solve the discussed problem also and what they are not compared.\nsignificance\nThis paper leverages the optimal transport to solve the biclustering problem for the dyadic data. The computational efficiency and estimation accuracy are achieved through the low-rank assumption of the solution matrix. \n 1. The introduction of the relevant is unclear and would need more details and formalization. I fail to grasp exactly what causes the weakness of the these introduced methods CCOT, CCOT-GW, COOT and COOT-GW.\n\n2. When the authors say integrating the constraint $rank(C) \\leq k$ to (4) and finally get (5), \nare the three constraints (binarity, assignment, impossible triads) included or ignored?\n\n3. The definition of anti-adjacency matrix seems a little arbitrary since it is defined based on the “discrepancy” between two nodes, which seems to be a concept without any rigorous mathematical statement in the paper. \n\n4. Page 3 line 95. It seems that the dimension of r and c and be different. How can that be achieved considering Z should be a matrix with n row r columns and W should be a matrix with d row and c columns.\n\n5. It would be better to replace the notation for the assignment matrix since it is duplicated with the solution matrix C. This causes some confusions during the reading.\n\n6. Page 5, is there a connection between (8) and (9) or they are two different objective functions?\n\n7. Page 7. The time complexity of CCOT reported in the paper “Coclustering through optimal transport” seems do not match with the order in the Table 1. What causes the extra computational burden in this paper?\n\n8. In the implementation, it seems the algorithm needs r and c as input. The details about how to choose these two parameters and decide the rank k seems to be missing.\n\n9. In the experiment, the repeat times seems to be small (only 10 runs are conducted). In some settings, the standard deviation of the results is 0. How does that come?\n The authors mention about the limitation that the method is specifically tailored to datasets consisting in dyadic data for biclustering and can not be applied on other data types such as images directly.",
" The paper proposes a new approach to bi-clustering, BICOT, and its entropic regularization BICOT_{\\lambda}. The proposed BICOT model is quite general. In primis, it can be reduced to a low-rank solution for optimal transport under specific distribution conditions. Secondly, it can be reduced to several bi-clustering models in the literature. \n \nThe authors also prove that the computational complexity of BICOT is the same as that of previous Co-Clustering work COOT [26] and inferior to the versions of CCOT [19]. To support their claims, parallel to the theoretical demonstrations, the authors conduct an accurate analysis of their algorithm by comparing their method with state of the art for different dyadic data sets with increasing size. In particular, besides better accuracy, adjusted random index and normalized mutual information, BICOT_{\\lambda} performs best in all the experiments.\n \nFrom a theoretical point of view, this work introduces the problem in a more general and straightforward form that can be traced back through appropriate choices to several models previously introduced in the literature. The main idea is to formulate the bi-clustering problem by using two building blocks: block seriation and optimal transport.\n The strength of this work is mainly based on three aspects:\n1.\tThe proposed approach generalizes many already present algorithms that can be obtained as special cases.\n2.\tThe presented algorithm is more efficient both in terms of complexity and in terms of memory usage.\n3.\tThe authors provide evidence of the effectiveness of their approach both theoretically and experimentally.\nThe paper conducts extended analysis over several data-set showing great performance compared to other existing approaches. The results are well exposed, and the paper is well written and easy to follow. \n \nA possible weakness is that determining the number of clusters requires resorting to external metrics. Furthermore, it is unclear if the authors set the hyper-parameters by performing cross-validation on a data set independently from the validation set. Also, there is no reference to the hyper-parameters used in the methods that serve as the baseline. Finally, concerning the term clustering experiment, it seems that the best performances are due to the fact that some methods have not been optimized.\n - In (9) L is not used; it is not well explained in which sense (9) is related to (8).\n- In [26] is proved that COOT is a distance. Some discussion comparing L in COOT and BICOT would be appreciated. Comparisons with [26] are done on documents and terms, and in the paper it is said that it would not be possible on images. On the other hand, COOT has shown experiments on MNIST and USPS (see Figure 1 in [26] (on Neurips)), simply normalizing pixel magnitude to [0,1].\nComputational complexity of [26] is O(min{(n + n’)dd’ + n’^2n; (d + d’)nn’ + d’^2d})\n- The performance of the proposed approach strongly relies on the choice of the function $L$. 
How much will the results change when changing $L$?\n- Is there a way to define an optimal $L$ for a given quality measure?\n- How will the algorithm's asymptotic complexity change if one uses a dense function $L$?\n- How much do the results depend on the regularization function $\Omega$ in $BCOT_{\lambda}$?\n- How were the hyper-parameters of k-means clustering chosen?\n \nTypos:\n -Line 99: L(b)_{ij} remove Z\n -Line 101: two times “between”\n- (12) L(b_{ij})\n- Line 191: in --> is\n- Line 194 is available\n -Line 280: till—>still\n \nProof of proposition 3:\nLine 445 twice L(B)W --> L(B)W and L(B)^TZ\n The limitations of the proposed approach are not fully explained. For example, it seems that the low asymptotic cost of the algorithm is largely due to the sparseness of the cost matrix. Although one has the freedom to choose this matrix, for some applications a sparse matrix may not give satisfactory results. However, the use of dense matrices could considerably slow down the algorithm, making it de facto uncompetitive. "
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"hA796uTdIIf",
"aQakfo1kWnj",
"FXEMdwmuz82",
"Wa8it9NLGTe",
"n4CVQsoDHSg",
"Bk5mX_8U7VO",
"nips_2022_yQDC5ZcqX6l",
"nips_2022_yQDC5ZcqX6l",
"nips_2022_yQDC5ZcqX6l"
] |
nips_2022_B_LdLljS842 | Spending Thinking Time Wisely: Accelerating MCTS with Virtual Expansions | One of the most important AI research questions is to trade off computation versus performance since ``perfect rationality" exists in theory but is impossible to achieve in practice. Recently, Monte-Carlo tree search (MCTS) has attracted considerable attention due to the significant performance improvement in various challenging domains. However, the expensive time cost during search severely restricts its scope for applications. This paper proposes the Virtual MCTS (V-MCTS), a variant of MCTS that spends more search time on harder states and less search time on simpler states adaptively. We give theoretical bounds of the proposed method and evaluate the performance and computations on $9 \times 9$ Go board games and Atari games. Experiments show that our method can achieve comparable performances to the original search algorithm while requiring less than $50\%$ search time on average. We believe that this approach is a viable alternative for tasks under limited time and resources. The code is available at \url{https://github.com/YeWR/V-MCTS.git}. | Accept | I found this to be an interesting paper. As the reviewers indicated, it could be improved in terms of clarity, and I strongly encourage the authors to consider those comments carefully, as ultimately this could only make their paper more impactful.
In particular, the authors could consider how to be clearer about their claims, and how to provide stronger evidence for these. For instance, a claim like "It can maintain comparable performances while reducing half of the time to search adaptively" is very general, and it is unclear that it is really true: for instance, is this true under _all_ conditions?
That said, I believe the paper is clear enough, and the method is simple enough, that it might be of interest to the community, and I think it would be good to accept it for presentation at the conference. This agrees with most reviewers, three of whom voted to accept the paper. I do agree with the one reviewer voting to reject that I'm somewhat unsure how this compares to other reasonable approaches, but I think this can be further discussed in follow-up papers as well. | train | [
"V-imCg8efHS",
"XxQkxowhVk_",
"fhhiFcmiNSV",
"cTME-veTe4",
"I1i26J6KzK8",
"YdW1cVlFfj",
"WpMGchitMkh",
"a58NooVzSEX",
"QxGg0TO-cyG"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer oRW8,\n\nWe kindly remind you that the final stage of discussion is ending soon, and so please kindly let us know if our response has addressed your concerns.\n\nHere is a summary of the revisions:\n\n- We further clarified the **main distinctions** between our work and the Time Management algorithms **from three aspects:** the conditions, the targets, and the method. They are not comparable and we will add a more detailed discussion on the related work in the final version. \n\n- We mentioned that we **made the ablation** of the different expansion methods, especially for the greedy one. \n\n- We **emphasized the targets of our work**. Namely, we aim to reduce the search budget of MCTS adaptively while keeping comparable performances to the vanilla MCTS without early termination.\n\n- We **revised** our paper and updated it on the website.\n\nThanks again for your time and reviews, we will be happy to answer if there are additional issues or questions.",
" Thank you for your comments and advice! We hope the following address your concerns:\n\nAs for the question \"How does virtual expansion compare with other action selection strategies (e.g., select by Q value, and some BAI strategies)?\":\n\nWe would like to clarify that virtual expansion has two effects. (1) as you correctly mentioned, virtual expansions can mitigate the issue of exploratory behavior masking out the best action information. (2) However, virtual expansions do not aim to fully remove the exploratory behavior; instead, we aim to keep the exploratory component in the final policy. The oracle policy (without virtual expansions) is also highly exploratory in the early training phases, because of the Dirichlet exploration noise added and the inherent uncertainty of the model. Virtual expansion aims to keep that as well. We note that this is very important to RL, because without the exploratory part, it will quickly collapse due to over-exploitation issues.\n\nMoreover, we have a BAI ablation in our original paper in Section 5.4. The greedy expansion means that after k vanilla expansion, it will spend the left N − k simulations to visit the current best action greedily. But the performance is much poorer, which indicates that focusing on the best action leads to failure in hard-to-explore board games.\n\nAs for the question \"The empirical results are only tested with very few number of rollouts, and it remains unsure whether the proposed techniques are useful in more general cases\": Thanks for the great question! To investigate whether our method still holds with larger amounts of MCTS expansions, we take a pretrained model and compare two strategies: (1) vanilla expansion with N=150/400/600/800 nodes in MCTS (2) virtual expanded policy with $N=800, r=0.2, \\epsilon=0.1 $.\n\n| | MCTS ($N=150$) | MCTS ($N=400$) | MCTS ($N=600$) | MCTS ($N=800$) | V-MCTS ($N=800, r=0.2, \\epsilon=0.1 $) |\n| -------------- | ------------- | ------------- | ------------- | ------------ | -------------------------------------- |\n| Average budget | 150 | 400 | 600 | 800 | 431.1 |\n| Winning rate | 82.0% | 84.5% | 84.9% | 85.9% | 85.0% |\n\nThe result shows that (1) V-MCTS($N=800, r=0.2, \\epsilon=0.1 $) is better than MCTS (N=600) in both the average budget and the winning rate, (2) V-MCTS can achieve comparable performance to the oracle MCTS(N=800) while keeping a much less average budget. Therefore, V-MCTS works with a larger amount of MCTS expansions.\n\nFor the question of \"might only be useful in games with less chaotic rewards\", thanks for the insight! We agree that our method's usefulness will vary by the level of how chaotic the reward is. However, if we consider the game of Go, the reward function is already very chaotic. First, the reward function in the game of Go is only provided at the end of the game. The MCTS search can only be roughly guided by the value function, since the reward is all zero in the middle of the game. Second, the training in the game of Go is adversarial. This means that for any player, the environment is his opponent, who tries his/her best to screw up the first player. We empirically show that even in this sparse reward and adversarial setting, our V-MCTS is still useful. 
This provides strong evidence that our method is robust to chaotic rewards and environmental dynamics for many practical purposes.\n\nWhat's the intuition behind Theorem 4.1?\n\nAnswer: As you have mentioned, the intuition is that V-MCTS converges to the optimal policy as the number of rollouts increases. Theorem 4.1 is a mathematically precise way of stating this intuition. Our theorem is a probabilistic statement, and one can choose the appropriate k given the desired confidence level. It is true that we need a sufficiently large k to absolutely guarantee BAI. However, in practice, we don't need it to be correct for every MCTS search. The theorem states that even if we cannot guarantee the correct BAI, it still has a large probability of being correct.\n\nFinally, thanks for your suggestions! We have revised our paper and updated it on the website. We highlight the changes and essential details that reviewers have mentioned in blue.\n",
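The virtual expansion step discussed above can be sketched compactly. The sketch follows the description in the rebuttal and reviews (the remaining N − k pulls select root arms by a UCB-style rule, never descend the tree, and feed back Q(s, a) itself, so only visit counts change); the exact selection formula and the constant c below are placeholders, not the paper's.

```python
import numpy as np

def virtual_expanded_policy(N_sa, Q_sa, P_sa, k, N, c=1.25):
    """Root-level virtual expansion: N - k extra pulls update visit counts only.
    Feeding back the current average Q leaves the averages unchanged:
    (n * Q + Q) / (n + 1) == Q."""
    n = N_sa.astype(float).copy()
    for _ in range(N - k):
        ucb = Q_sa + c * P_sa * np.sqrt(n.sum()) / (1.0 + n)  # PUCT-style root score
        n[np.argmax(ucb)] += 1.0                              # only the visit count changes
    return n / N  # the virtual expanded policy pi_hat over root actions
```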
" Thank you for your comments and corrections on the typos! We hope the following address your concerns:\n\nAs for the question \"apply V-MCTS to all pre-trained Top Go 19x19 programs\", we have to mention that MuZero spends 16 TPUs for training and 1000 TPUs for selfplay in 19x19 Go games. Such computation resources are not available and affordable to our team. Therefore, we conduct the experiments among the Atari games and the harder Go 9x9 games to prove the effectiveness of V-MCTS.\n\nAs for the question \"In experiments, N = 150. What N_0 would be to satisfy Theorem 4.2, if N=150.\", $N_0$ should satisfy the equation $c_1M_a \\sqrt{rN_0} = 1 + (c_1 + \\log 3)M_b \\frac{\\sqrt{rN_0}}{2}$ in Lemma A.1 in Appendix A.4 (line 688-689). It should be larger than $|A|$. In 9x9 Go games, $N_0$ should be larger than 82 (81 intersections as well as a `pass` action). In Figure 1 (b), V-MCTS (N=150) and (N=90) can work but V-MCTS(N=30) cannot because $N=30 < 82 \\le N_0$. And we will add more explanations for this in the final version due to the limitation of the pages.\n\nAs for the question \"It is unclear about ‘3 different runs’ in Tables 1~3. The authors need to clarify it.\", it means we do 3 separate training runs. We have revised our paper and updated it on the website.\n\nAs for the suggestion \"Please keep the consistency of the symbols, for example, $\\epsilon$ and eps\", we will add notations that the usage of $\\epsilon$ is equal to eps as $\\epsilon$ is hard to draw in figures.\n\nAs for \"Line 186: highlighted in line 5 -> highlighted in line 6\", we have revised our paper and updated it on the website.\n\nAs for the question \"Explain why the ratio (the visitation of an action in a state divided by N) is the policy of this state. The symbols in line 10 conflict with the situation in line 8...Where is that action?\": there is a typo in line 10 (Now is line 11) that causes some confusion. $\\hat{\\pi}_k(s) = \\hat{N}_k(s, a) / N $ should be $\\hat{\\pi}_k(s, a) = \\hat{N}_k(s, a) / N $. ${\\pi}_k(s, a) \\text{and} \\hat{\\pi}_k(s, a) $ are corresponding probabilities of action $a$, and we will sample an action from the returned policy distribution. As for why the visitation is divided by N, we have mentioned the definition of the policies in line 169. The policy distribution of MCTS is defined as the visitation of actions divided by the total visitations (line 9: $\\pi_k(s, a) = N_k(s, a) / k $ for k-th iteration). But for the virtual expanded policy, it does $k$ vanilla expansion as well as $N - k$ virtual expansion, which means the total visitation is N rather than k.\n\nFinally, thanks for your detailed suggestions! We have revised our paper and updated it on the website. And we highlight the changes and important details that reviewers have mentioned with blue color.",
" Thank you for your comments and advice! We hope the following address your concerns:\n\nFor question (1) \"Give a complementary analysis to see if given a fixed amount of time, it could make better decisions than a vanilla MCTS using the same amount of time.\", we have made such a comparison in Figure 1(a). We will put more emphasis on this in the paper. In Figure 1(a), the red point of V-MCTS (eps=0.1) and the blue point of MCTS(N=90) consume the same amount of time (x-axis), but V-MCTS gives better performance (y-axis).\n\nFor question (2) \"There is an argument to be made for the L2 norm instead.\": thanks for the great question! It is interesting to use the L2 norm of policy distributions. And here we make some ablations to see the difference. We take a pretrained model and compare two strategies (L1/L2 norm of distributions). And the results are as follows:\n\n\n\n| | MCTS (N=150) | V-MCTS, **L1 norm** ($r=0.2, \\epsilon=0.1$) | V-MCTS, **L2 norm** ($r=0.2, \\epsilon=0.1$) | V-MCTS, **L2 norm** ($r=0.2, \\epsilon=0.05$) |\n| -------------- | ------------ | ------------------------------------------- | ------------------------------------------- | -------------------------------------------- |\n| Average budget | 150 | **96.2** | 97.1 | 119.3 |\n| Wining rate | 82.0% | **81.5%** | 79.8% | 81.0% |\n\nWe can find that (1) L2 norm can also work for V-MCTS; (2) L1 norm is better than L2 norm. And we attribute this to the formulation of ucb scores. Because the ucb scores have already taken into account the difference in the visitations (see the N(s, a) in Eq (1)). Therefore, amplifying the deviations may result in some bias.\n\nFor suggestion (3), we agree that \"this framing is not quite accurate\". And we have revised our paper and updated it on the website.\n\nFor suggestion (4), we agree with you and have committed the changes in the revised version: \"The computation bottlenecks in vanilla MCTS come from the search loop, especially for the evaluation stage and the selection stage of each iteration.\".\n\nFor suggestion (5), we agree that we have confused UCT and UCB1 in writing sometimes. And we have revised our paper and updated it on the website.\n\nFor question (6), the term `intersection` does mean \"an AND over the bounds on all possible actions\". We use intersection here to give a more simplified mathematical description.\n\nFor suggestion (7), thank you for your detailed corrections. We have revised our paper and updated it on the website.\n\nMoreover, according to the questions in the Quality and Clarity parts, we make the following responses:\n\nAs for the question \"claiming that V-MCTS was somehow preferable here is not borne out by the data,\": similar to the above answer to question (1), the comparison of V-MCTS(eps=0.1) and MCTS(N=90) can support the conclusion.\n\nAs for the suggestion \"Aggregating information over many more such positions would have made for a stronger argument and paper.\", we agree with you and add more evaluations from more games (the agent plays as Black/White) in Appendix A.2.\n\nAs for the question \"No details are provided about the training procedure; It's not clear if the code for reproducing the results will be made available,\": we have provided more detailed parameters in Appendix A.3 including models, hyper-parameters, and training details of Go. 
Moreover, we have mentioned in the checklist that the code will be available.\n\nAs for the question \"I was also a little confused by why there was a curve for GnuGo, which is an off-the-shelf Go playing engine\": the GnuGo engine provides models of different levels (1-10). Each level is a trade-off between the run time and the strength of the agent. The y-axis is the winning rate against the model of level 10. We plot the green curve to show the performance-computation trade-off of the GnuGo engine and compare that trade-off with ours. Due to the page limit, we will add more details in the final version.\n\nAs for the suggestion \"it would have been nice to at least see a proof sketch.\", we will add more descriptions in the final version.\n\nFinally, thanks for your detailed suggestions! We have revised our paper and updated it on the website. We highlight the changes and essential details that reviewers have mentioned in blue.\n\n\n",
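The early-termination test behind the L1/L2 ablation above amounts to a one-liner. This sketch assumes the two policies compared are those computed rN simulations apart; the details of the actual V-MCTS check may differ.

```python
import numpy as np

def should_terminate(pi_now, pi_prev, eps=0.1, ord=1):
    """Stop searching once two virtual expanded policies are within eps
    in L1 (ord=1) or L2 (ord=2) norm."""
    return np.linalg.norm(np.asarray(pi_now) - np.asarray(pi_prev), ord=ord) <= eps
```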
" Thank you for your comments and advice! We hope the following address your concerns:\n\n\nFor the question \"Typically game-playing programs have time management algorithms, but the authors do not mention the approach.\": as we know, time management algorithms (TM) aim to do fine-grained time control conditioned by the total time cost of an episode. We think there are main distinctions between ours and time management algorithms:\n\nFor one thing, the condition is different. TM algorithms are conditioned by a fixed time cost of an episode. Since the episodic length in board games is not constant, the thinking time of the current step is based on the time cost in the past. But V-MCTS targets normal games or tasks without any time constraints, which terminates the search based on the current situation of states.\n\nFor the second, the target is different. TM algorithms aim to allocate time for each step in one episode (E.g., In a tournament). But V-MCTS aims at approximating the oracle policy distribution through early termination.\n\nFinally, the method is different. TM algorithms explicitly build dynamic strategies based on the real-time cost of the past steps. But V-MCTS or DS-MCTS would not take into account the time cost. Instead, we focus on policy distributions, not the past time cost or the left time budget.\nWe will add more explanations for the distinctions.\n\nAs for the suggestion \"another naive baseline is to stop UCT if the best move is much better than the others.\", we think it will cause severe exploration issues if only matching the best action without considering others or the entire distributions. This is because, in MCTS RL algorithms, the current policy not only needs to find which one is the best but also needs to maintain an exploration policy on the remaining potentially good actions. Actually, we did one ablation study in Sec. 5.4 where we greedily expand in MCTS, i.e., prioritizes the best action. But the greedy method fails due to weak exploration.\n\nAs for the question \"Does Virtual MCTS attempt the time required for MCTS only when a model is trained? Or does it also attempt to reduce the time when playing a game against its opponent?\", our method reduces the time not only when a model is trained but also when the agent is playing a game. This is because V-MCTS can be applied if an evaluation model is given. Algorithms 1-2 display the detailed procedure of search iteration, and the search iteration is used when playing a game against its opponent. In conclusion, we propose the V-MCTS algorithm to reduce the search budget of MCTS adaptively while keeping comparable performances to the vanilla MCTS without early termination.\n\nAs for the question \"Misc. Page 3: 'Equation (1)'' Is this correct?\", we confirm the correctness of the p-uct equation. This equation is the same as the paper: Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model [1].\n\nAs for other typos, we have revised our paper and updated it on the website.\n\nFinally, thanks for your detailed suggestions! We have revised our paper and updated it on the website. And we highlight the changes and essential details that reviewers have mentioned with blue color.\n\n[1] Schrittwieser, J., Antonoglou, I., Hubert, T., Simonyan, K., Sifre, L., Schmitt, S., ... & Silver, D. (2020). Mastering atari, go, chess and shogi by planning with a learned model. Nature, 588(7839), 604-609.",
" The authors introduce Virtual MCTS which approximates its vanilla version with a smaller amount of computations. They also perform theoretical analysis as well as empirical performance analysis on 9x9 Go and the Atari game. \n While the topic seems reasonable, I have a difficulty in understanding the problem they attempt to address (see the questions). Typically game-playing programs have time management algorithms but the authors does not mention the approach. \n\nFor example, \n\nhttps://dke.maastrichtuniversity.nl/m.winands/documents/time_management_for_monte_carlo_tree_search.pdf\n\nhttps://www.remi-coulom.fr/Publications/TimeManagement.pdf\n\nIn addition, their comparison is against vanilla UCT. But another naive baseline is to stop UCT if the best move is much better than the others. It could be estimated by considering the number of visits and the reward received so far.\n\nMisc. \nPage 3: \"Equation (1)\"\n\nIs this correct?\n\nPage 3: \"Thus, MCTS-based RL is roughly N times computationally more expansive than traditional RL algorithms\"\n\nexpensive\n\nPage 3: \"It is consists of two components:\" \n\nIt consists of\n\nPage 4: \"... illustrated in Algorithm 1, 2\"\n\nAlgorithms 1 and 2\n Does Virtual MCTS attempt the time required for MCTS only when a model is trained? Or does it also attempt to reduce the time when playing a game against its opponent?\n\nIt apparently does not look Algorithms 1-2 are for the latter case. \n NA\n",
" This paper proposes a modification to the Monte-Carlo Tree Search paradigm---specifically, the UCT algorithm---that is more sample efficient. The method attempts to use its \"thinking time\" more wisely: as the authors describe it, their tree-building algorithm seeks to spend more time when evaluating \"harder\" states and less time on \"simpler\" states (i.e., states that are less positionally complex, and where the best action can be determined more easily). Identifying these situations is accomplished by noting when additional iterations of search would not appreciably change the visitation distribution of the actions at the root node (i.e., the stochastic policy at the root node). Specifically, on each iteration of tree-building, after the traditional loop of node selection-expansion-evaluation-value propagation is completed, some additional computation is performed -- what the authors term \"virtual expansions\". In this phase, $N-k$ \"pulls\" of the arms of the root node are performed, according to the selection strategy (for eg., UCB1), _but without descending down the tree further_ ($N$ = total search budget measured by iterations, $k$ = current iteration index). Instead, after each pull, the current average utiltity of that action $Q(s, a)$ is used as the reward, so that the only modification to the statistics accumulated in the nodes at level 1 in the search tree are in the visitation counts. The policy induced by the combination of \"regular\" tree-building and virtual expansions is tracked over time; when it begins to show signs of convergence (i.e., the norm between the policies from two sufficiently different iterations is sufficiently small), the search is terminated. The authors provide two theorems that characterize the nature of this convergence, as well as empirical evaluation in several domains---9x9 Go and five Atari games---to demonstrate the validity and benefit of their approach. Originality:\n(+) The simplicity of the approach is a strength in my opinion. The idea is fairly straightforward, and the fact that it results in significant gains in a number of application domains is noteworthy.\n\nSignificance:\n(+) MCTS is of broad interest in the ML and AI communities and papers that deal with enhancements to the baseline algorithm or propose novel applications are often published at NeurIPS. So the paper is likely to be of interest to many researchers.\n\nQuality:\n(+) The paper contains a nice balance of theory and experiment. The main theorems suggest that the authors' proposed approach should provide some benefits, and the empirical evaluation reinforces this.\n(+) The gains in performance are particularly strong in 4/5 Atari domains, where V-MCTS (the authors' approach) outperforms vanilla MCTS while building much smaller search trees (typically, only 50% as big).\n(+) The ablation experiments involving tuning the algorithm hyperparameters $r$ (the search budget ratio) and $\\epsilon$ (which determines the tolerance criterion for convergence) were useful for evaluating their impact on performance.\n(-) The results in 9x9 Go were however less compelling to me. The evidence here did not appear to support the authors' conclusions -- vanilla UCT was shown to be a little slower to act, but also a little stronger as a player. So claiming that V-MCTS was somehow preferable here, or a superior choice (265--266: \"Therefore, such termination rule can keep strong performances with less budget.\") is not borne out by the data.\n(-) The qualitative analysis in Fig. 
3 about V-MCTS spending more time in challenging positions and less time in less complex positions was welcome, but drawing such samples from a single game seems a little anecdotal. Aggregating information over many more such positions would have made for a stronger argument and paper. \n\nClarity:\nUnfortunately, this was the weakest element of the paper, which relegates it to a borderline accept\nin my eyes. The lack of clear writing and visualizations made it harder to evaluate some of the claims; it also raises questions about the reproducibility of the work. More specifics are provided in the next section, but I outline some broad issues here.\n(-) No details are provided about the training procedure. EfficientZero is mentioned in Section 5.1, but it's not clear if that was the training procedure that was used. It's not clear if the code for reproducing the results will be made available.\n(-) I found Fig. 1(a) difficult to understand, as it lumps together different algorithms that are being parameterized in different ways. I was also a little confused by why there was a _curve_ for GnuGo, which is an off-the-shelf Go playing engine -- presumably, the authors did not retrain this agent from scratch, so why are there different points along the $x$-axis for this player?\n(-) The proofs for theorems 4.1 and 4.2 are presumably in the Appendix, so their proofs could not be verified; while I understand the authors are operating under space constraints, it would have been nice to at least see a proof sketch. (1) In all of the comparisons, V-MCTS was upper-bounded in how large a tree it could build. A complementary analysis where it was given a fixed _time_ budget would have been useful to see: to see if given a fixed amount of time, it could make better decisions than a vanilla MCTS using the same amount of time.\n\n(2) The convergence criterion appears to rely on when the L1 norm of the policy at the root node converges; but I'm wondering if this is the best choice. There is an argument to be made for the L2 norm instead: bigger deviations should be amplified more, over an accumulation of smaller deviations, as this could change what action would be picked at the root. Or to ignore the specific distribution and focus on the _ordering_ of the nodes. It would be interesting to hear from the authors whether these alternatives were considered.\n\n(3) Lines 58--59: \"Afterward, UCT algorithms have generally replaced earlier heuristic methods for Monte-Carlo tree search (MCTS), which apply UCB1 to select action at each node of the tree\" -- this framing not quite accurate. UCT is a specific instantiation of the MCTS family of algorithms, so saying it has \"replaced\" MCTS doesn't quite make sense.\n\n(4) Lines 63--65: \"There are three kinds of bottlenecks in vanilla MCTS in the aspect of speed: the selection stage of each iteration, the evaluation stage of each iteration, and the search loop.\" -- I found this confusing, since the search loop _includes_ selection and evaluation steps.\n\n(5) Line 5 in Alg. 1, Line 9 in Alg. 2 -- I believe these should read UCB1(Q, P, N), rather than UCT(Q, P, N) (UCT is the overall algorithm, UCB1 (or a variant) is the bandit algorithm that is handling the selection step). Similarly, Line 4 in Alg. 3 -- should probably read \"Selection with UCB1\".\n\n(6) In Theorem 4.1(a), what does the $\\Cup$ (set intersection) mean? 
Did you mean $\\Wedge$, i.e., an AND over the bounds on all possible actions?\n\n(7) There are also typos and grammatical issues throughout the paper, that could be eliminated with a careful and thorough read through. Here's a small sampling just from the first page:\n11--12: \"while requiring less than 50% number of search times on average.\" --> \"less than 50% search time\"\n19: \"recent AI researches\" --> \"research\"\n26--27: \"It is the first time for a computer program to beat a human professional Go player.\" --> \"first time a computer program beat a human...\"\n29--30: \"Later, MCTS-based RL algorithms are further extended to other board games and Atari games\" --> Later, MCTS-based RL algorithms were further extended...\"\n32--33: \"...they require massive computations to train and evaluate.\" ---> \"...they require massive amounts of computation...\"\n Yes -- any concerns have been raised in other sections.",
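To make the virtual-expansion phase described in the review above concrete, here is a minimal Python sketch of the $N-k$ root-level "pulls": each pull selects an arm with the usual bandit rule and backs up that arm's current average utility $Q(s,a)$, so the arm's mean value is unchanged and only the visit counts (hence the root policy) move. The `select_arm` helper and node fields are assumptions for illustration, not the authors' code:

```python
def virtual_expand(root, N, k, select_arm):
    """Perform N - k virtual expansions at the root, with no tree descent.

    Feeding back an arm's current mean value keeps Q(s, a) fixed:
    (T + T/n) / (n + 1) == T / n, so only visitation counts change.
    """
    for _ in range(N - k):
        action = select_arm(root)           # e.g. UCB1 / P-UCT over root arms
        child = root.children[action]
        q = child.total_reward / max(child.visits, 1)
        child.total_reward += q             # reward = current average utility
        child.visits += 1                   # count the virtual visit
        root.visits += 1
    total = sum(c.visits for c in root.children.values())
    return {a: c.visits / total for a, c in root.children.items()}
```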
" This paper proposed a novel method named Virtual MCTS (V-MCTS) to reduce the computation time of vanilla MCTS with the Virtual Expended Termination Rule (VET-Rule). This paper also gives theoretical bounds of the proposed method and evaluates the performance and computations on 9 × 9 Go board games and Atari games. Experiments show that this method can achieve comparable performances to the original search algorithm while requiring less than 50% search times on average. In general, this paper shows that V-MCTS can think faster on simple states and longer on hard states. For solid theoretical analysis and positive experimental results, I would recommend the acceptance of this paper. \n *Strengths*\n\n* This paper gives the theoretical bounds of the proposed method with proof. Namely, for any given epsilon with sufficient large N, if the policy distance between k and k/2 is smaller than epsilon, then the distance between k and N is smaller than 3*epsilon. This theoretical result supports an early stop of MCTS search, which I believe has some impact. The theoretical analysis is sound, after I verify the proofs of Theorem 4.1 and 4.2 in Appendix (a little bit hard to read though).\n* In the experiments with Go 9x9 and Atari, V-MCTS shows that this method can achieve comparable performances to the original search algorithm while requiring less than 50% search times on average. \n* There is a comparison with DS-MCTS (a past work in AAAI), and V-MCTS still has better performance.\n\n*Weaknesses*\n\nSome minor presentation problems listed in Questions Section. \n\n According to the proof given by the authors, V-MCTS’ Error Bound is still effective for programs that do not use virtual expansions during training. We should be able to apply V-MCTS to all pre-trained Top Go 19x19 programs, like KataGo or Leela Zero. How does V-MCTS work on these strong programs?\n\nSome minor comments about presentations. \n* In experiments, N = 150. I am just wondering what N_0 would be to satisfy Theorem 4.2, if N=150. \n* It is unclear about ‘3 different runs’ in Tables 1~3. The authors need to clarify it. \n* Please keep the consistency of the symbols, for example, $\\epsilon$ and $eps$.\n* Line 186: highlighted in line 5 -> highlighted in line 6\n* In Algorithm 3, the symbols in line 10 are unclear and confusing to the reader. Explain why the ratio (the visitation of an action in a state divided by N) is the policy of this state. Where is that action? This conflicts with the situation in line 8.\n Yes",
" This paper proposes V-MCTS, containing two main improvements over classic MCTS. First, the authors propose virtual expansion by applying rollout without actual simulation (the simulation returns are replaced by the current Q value. Next, V-MCTS uses an adaptive termination condition to decide when to stop doing rollout. The proposed algorithm is evaluated on 9x9 Go and five Atari games. Improving the efficiency of MCTS is very important since they typically require a huge amount of computation resources. This paper proposes two techniques: virtual expansion and an early-termination condition.\n\nVirtual expansion aims to mitigate the problem that exploratory behavior of the MCTS rollouts can \"mask out\" information about the action with the highest expected reward. For example, if the agent is given a budget of 100 rollouts and it has only figured out a promising path at the 95th rollout. The last 5 rollouts may not be sufficient to backpropagate this information up to the root node. In such cases, applying virtual expansion to backpropagate such information could be helpful. This is also justified by the ablation study in Sec. 5.4.\n\nHowever, while virtual expansion could be useful in the above-mentioned scenario, I think it (i) could be implemented by an action selection policy for best-arm identification (BAI) and (ii) might only be useful in games with less chaotic rewards. (i): For example, it is possible that simply selecting the action that leads to the best cumulative reward seen in the rollouts can be as good as virtual expansion. (ii): For tasks where the reward function has high variance, it seems possible that virtual expansion will be fooled by some high rewards collected by chance. To allow better justification of the virtual expansion idea, it would be great if the authors could compare it with other action selection strategies (e.g., select by Q value, and some BAI strategies).\n\nThe theoretical analysis in the paper shows that V-MCTS will converge to the optimal policy as the number of rollouts increases, but it does not provide additional intuition on the comparison between vanilla MCTS and V-MCTS. Specifically, according to Thm 4.1, we still need k to be sufficiently large in order to guarantee e.g. BAI. Also, the theorems do not take into consideration the early-termination condition. - How does virtual expansion compare with other action selection strategies (e.g., select by Q value, and some BAI strategies)?\n\n- The empirical results are only tested with very few number of rollouts (<20 rollouts on Atari, and <150 on Go), and it remains unsure whether the proposed techniques are useful in more general cases. The authors addressed the limitations and potential negative societal impact of their work."
] | [
-1,
-1,
-1,
-1,
-1,
3,
5,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
4,
3,
5,
4
] | [
"YdW1cVlFfj",
"QxGg0TO-cyG",
"a58NooVzSEX",
"WpMGchitMkh",
"YdW1cVlFfj",
"nips_2022_B_LdLljS842",
"nips_2022_B_LdLljS842",
"nips_2022_B_LdLljS842",
"nips_2022_B_LdLljS842"
] |
nips_2022_4MT-e8mn3X | Local Linear Convergence of Gradient Methods for Subspace Optimization via Strict Complementarity | We consider optimization problems in which the goal is to find a $k$-dimensional subspace of $\mathbb{R}^n$, $k \ll n$, which minimizes a convex and smooth loss. Such problems generalize the fundamental task of principal component analysis (PCA) to include robust and sparse counterparts, and logistic PCA for binary data, among others. This problem could be approached either via nonconvex gradient methods with highly-efficient iterations, but for which arguing about fast convergence to a global minimizer is difficult, or via a convex relaxation for which arguing about convergence to a global minimizer is straightforward, but the corresponding methods are often inefficient. In this work we bridge these two approaches under a strict complementarity assumption, which in particular implies that the optimal solution to the convex relaxation is unique and is also the optimal solution to the original nonconvex problem. Our main result is a proof that a natural nonconvex gradient method which is \textit{SVD-free} and requires only a single QR-factorization of an $n\times k$ matrix per iteration, converges locally with a linear rate. We also establish linear convergence results for the nonconvex projected gradient method, and the Frank-Wolfe method when applied to the convex relaxation. | Accept | The submitted work presents a local linear convergence guarantee for a projected gradient descent (PGD) algorithm on an explicit parameterization of the Stiefel manifold. Such a guarantee is easy to make if the convex objective f is assumed to be strongly convex. Instead, this work considers allowing f to be non-strongly convex. Under a strict complementarity assumption, which this paper shows is equivalent to an eigen-gap condition, the authors prove that the problem enjoys a standard quadratic growth condition that allows PGD to converge at a linear rate.
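In its standard form, the quadratic growth condition referred to here states that for some $\alpha > 0$, with $X^\star$ the (under strict complementarity, unique) optimal solution,

```latex
f(X) - f(X^\star) \;\ge\; \alpha \, \| X - X^\star \|_F^2 \qquad \text{for all feasible } X,
```

which substitutes for strong convexity in the linear-rate argument; this restatement is an editorial gloss rather than text from the submission.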
Reviewers Lsrb, tL19, and FuJo concur that the theoretical contribution is worthy of publication. The past few years have seen a large number of local linear convergence guarantees obtained by directly optimizing the factor matrix $U$ in the low-rank factorization $X=UU^T$, but all of these works have assumed some notion of strong convexity or restricted strong convexity. Indeed, I remark here that local linear convergence is actually lost in many of these cases (e.g. matrix sensing) if the objective f is not (restrictedly) strongly convex. In comparison, the present work allows f to be an arbitrary smooth convex function, while showing that local linear convergence is surprisingly still possible under a strict complementarity condition.
However, the impact of the work is obfuscated by repeated assertions about the practical aspects of the proposed algorithm, which in my opinion are difficult to defend. The authors repeatedly assert that their nonconvex algorithm requires only a single QR decomposition, and is therefore "much faster and simpler to implement". This may be the case, but the actual reduction in the number of QR decompositions is only a logarithmic factor $O(\log(1/\epsilon))$ under the eigen-gap assumption. On the other hand, global convergence is lost with the nonconvex formulation, and random initialization leads to sublinear convergence in practice. Reviewer mnGb remarks that the numerical experiments are very brief, and do not make a strong case for the practical aspects of the algorithm.
Nevertheless, the technical novelty of the analysis pushes this paper towards acceptance. In the camera-ready version, the authors are advised to:
* Revise their summary of contributions to better compare with existing techniques in the literature, as outlined by Reviewers Lsrb, tL19, and FuJo;
* Expand on their experimental section to answer the questions posed by Reviewer mnGb on global convergence and the existence of bad local minima. Answers to these questions can and should be supported or disproved by numerical experiments.
| train | [
"j4SBb1N04F",
"ks3ywFaQJgc",
"-lhAtq0gXrC",
"bkzgyKl3IFm",
"sdj2Vgd8dSJ",
"YNgGeltn-1b",
"MG_eAR5fLZq",
"7SQ-cJ1a0w3",
"1mAKTyh9cf-",
"GPnUg5qciqk",
"ieEqzPAVUH-",
"sLucwT_3DJ",
"dm8i53cfXko",
"ueK5JRbsiQV",
"GAE9tE7Dixe"
] | [
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer Lsrb,\n\nHave we answered your main concerns? If so, would you consider raising your score? Otherwise, we will be very happy to try and answer additional concerns.",
" Dear Reviewer FuJo,\n\nHave we answered your main concerns? If so, would you consider raising your score? Otherwise, we will be very happy to try and answer additional concerns.",
" Dear Reviewer mnGb,\n\nWe would be very happy to know if the above response answers your main concerns and will be also very happy do further discuss remaining concerns.",
" Many thanks for the replies, and I have raised my score.",
" It is the same k. If our goal is to find a projection matrix onto a k-dimensional subspace that minimizes a convex and smooth loss f(), then strict complementarity is defined in terms of the eigengap between the k-largest and (k+1)-largest eigenvalues. With this respect our work does not deal with how to choose k. This is very similar to classical PCA where we want to extract the leading k-dimensional subspace of the covariance matrix: it is well conditioned if for the chosen k there is indeed a substantial gap in the covariance matrix between the k and (k+1) largest eigenvalues.\n\nA one exception is our Theorem 6, mentioned also in our response, that deals with the case in which strict complementarity does not hold, or holds with a negligible gap. In this case in may be the case that the solution to the convex relaxation is no longer a rank-k matrix as desired. However, this theorem shows that by considering higher-rank matrices, with rank=r>k, it might still be possible to run PGD in an efficient manner, i.e., using only low-rank SVD to compute the projection (rank-r SVD instead of a full rank SVD) and the optimal solution may still be low rank, even if with rank larger than k.\n\nWe hope this helps. We are very happy to help clarify this issue further. ",
" Thanks for the responses.\n\nRegarding the choices of $k$, the authors said that \"Our goal is to develop numerical methods given an appropriate k,\", meanwhile for the strict complementarity, there is also a \"$k$\". I think these two $k$'s are not exactly the same, the \"appropriate k\" in general is larger than the later $k$? If this is true, and the \"appropriate k\" much larger (let's assume), is it possible to design some parameter continuation strategy to gradually reduce the value of the \"appropriate k\", based on the complementarity condition?",
" Dear Reviewers and AC,\n\nQuite embarrassingly, we were not aware until just now that there is an option to revise our submission during the rebuttal period and so we did not plan for it (or allocate time for it).\n\nNevertheless, we have uploaded a slightly revised version in which we address an issue raised by Reviewer Lsrb and Reviewer FuJo, regarding empirical evidence in support of the linear convergence rates. In the revised version we added plots (left panels in Figure 1 and Figure 2) which plot the convergence rate w.r.t. function value in log-scale, and clearly demonstrate the linear convergence of projected gradient and gradient orthogonal iteration methods.\n\nWe shall address the additional issues in the final version.\n\nIf there are additional question, we will be happy to answer.",
" Dear reviewer,\n\nThank you for your overall positive review.\n\nWe now address the weakness you raised:\n1+2: Thank you, we will correct these in final version.\n3. The initial focus of the experiments was not on linear rates but showing the convergence of our methods from simple initializaitons. Nevertheless, while preparing the rebuttal we have verified that if we plot the Y-axis in log scale we clearly see linear convergence of our methods, and we will include these graphs in the final version.\n4. Thank you very much for pointing us to this interesting work. We will add it to our discussion on related works. Please do note however that it concerns a very specific objective function (and one which seems nearly linear in the projection matrix XX^T), while here our goal is to address a much more general smooth convex setting. Moreover, they seem to establish (fast) convergence to critical points and not to global optimum. Generic linear convergence to global optimum is not likely even for their problem without further assumptions, since even for standard PCA, these are attainable only under a spectral gap assumption.\n\nFinally, please note that beyond the linear convergence rate, the main novelty in our work is the analysis of the gradient orthogonal iteration method which has both linear convergence, but also uses only a single QR operation per iteration, and as such is a novel non-linear extension of the classical QR iterations method for leading subspace computation (as in standard PCA). The analysis of this approach is the main novelty, and establishing its convergence by showing that it is a nearly lossless approximation of the exact-SVD-based projected gradient method. This comes with no little effort as might be indicated from Lemmas 4, 5, 9, among others. This is the key novelty. We believe that relating these two methods - a convex one and a nonconvex one is a very novel idea which could be interesting to further developments. \n\n",
" Dear reviewer,\n\nWe first address the weaknesses you raised:\n1. Assumption 1 requires a parameter that cannot be verified: First, note that the eigengap parameter is not an input to any of the algorithms. The parameter k is required to be set properly (so there is an eigengap) and here we have several answers:\na. First we note that, as in classical PCA, the parameter k should be generally understood as part of the input to the problem (of course in practice it is an issue by itself how to set it). Classical numerical algorithms for PCA (such as classical QR iterations) are dependent on the choice of k, and its choice can affect dramatically their performance, and here we encounter a similar situation, so this should not be seen as a specific issue with our work.\nb. Indeed miss-specifying k is problematic for the gradient orthogonal iteration method since it requires strict complementarity to exactly hold. If there is no gap we cannot guarantee is convergence.\nc. This is not a critical problem for our PGD method since as Theorem 6 (see Appendix C) shows: PGD can handle gaps between lower-eigenvalues by increasing the rank parameter r, but guaranteeing only sublinear rates, not linear. Moreover, as discussed in Remark 1 (line 180), for PGD we can verify easily whether it converges correctly or not (i.e., with a provable rate), by certifying that the low-rank projection is indeed the correct projection, which can helps us tune the parameters.\n\n2. Results are weak because warm-start is required: this is indeed true for the gradient orthogonal iteration and projected gradient, but this is very much expected, since these methods solve a nonconvex problem (they always maintain a rank-k matrix)! It should not be expected that, unless the problem is extremely well conditioned so the ball is so large, that they would converge rapidly from any initialization. This is also perfectly well aligned with previous works on generic nonconvex optimization (i.e., works that do not consider very specific models and data), see for instance [2]. Of course it may be expected that in practice these would work from very simple initializations (as we have in our experiments), but here we are interested in worst case performance.\nWe refer you also to Figure 3 in the appendix that examine the empirical convergence from a random initialization and shows it to be considerably slower (sublinear). \nMoreover, in this work we wish to take a somewhat of a ''continuous optimization'' approach to subspace recovery (as opposed to statistically-motivated approaches), where we look for a quite general condition (strict comp.) that will render quite general problems (i.e., any smooth convex objective, as opposed to very specific objectives such as quadratic with data generated by some well known process), well-posed, and because of this we have these requirements. Do note that the Frank-Wolfe method converges globally, but with the price that it does not maintain a rank-k matrix, but a convex combination of such.\n\n3. Quadratic growth is a well known property to allow for linear rates of first-order methods for convex problems and is indeed weaker than strong convexity, see for instance [14,19]. Since it is well-studied, it is beyond the scope of this paper to present it or discuss it in great detail, and we give the appropriate references.\n\nQuestions:\n- How to choose k? As in standard PCA algorithms, k should be understood as part of the input. 
Our goal is to develop numerical methods given an appropriate k, much like the classical fast methods for computing the leading subspace in standard PCA. It is not part of the problem we set out to solve to find k.\n\nTypos: thank you for catching these!\n\n\nGiven all of the above, we kindly ask you to seriously reconsider your score. We believe our novel approach of analyzing the gradient orthogonal iteration and the corresponding analysis, and its connection to strict complementarity, could be of interest to many working in these and related areas.\n\n\n",
" Dear reviewer,\n\nThank you for your overall positive review.\n\nWe now address the weakness you raised:\n1. We think you might have misunderstood [29]: the condition [29] has for which their error bound holds for matrix problems (they consider nuclear norm regularization) is very similar to the one we have and is in fact equivalent to strict complementarity for their nuclear norm-regularized model.\nAs you write yourself, our purpose is to take a ``continuous optimization'' approach which seeks a condition that will render. quite general problems well conditioned for first-order methods. This is also the reason we bring the numerical evidence to show that for two classical robust PCA models this condition seems to hold very well. In works [5,9] is was shown that if such a condition does not hold then the convex relaxation is ill-posed since it is brittle under arbitrary small perturbations. Such results could be extended to our subspace recovery problem as well, and we shall comment on it in the final version.\n\n2. Indeed when we plot the Y-axis in log-scale (which we have done during the preparation of this rebuttal) we clearly see a linear convergence rate, we shall add this to the final version.\nPlease understand, that as you yourself write ''I think that the authors made good enough technical contributions to the convergence analysis of the first-order methods for solving subspace optimization problems'', we believe our theoretical contribution is strong enough and doing lots of experiments is beyond our interest which is mainly in theoretical analysis. \nThe main purpose of the graphs is not to compare the methods, but mainly to show that indeed simple initializations start PGD already in the regime in which it produces only rank-k iterates, and that the gradient orthogonal iteration indeed converges very similarly to PGD. We will also strongly consider adding the plots of Frank-Wolfe, this should not be a problem, just not our main interest.\n\nAnswers to questions:\n1. We believe that it does. The assumption that [29] has for the nuclear norm-regularized matrix problem is exactly equivalent to strict complementarity for their problem and they use it to obtain their error-bound.\n\n2. Yes definitely. Using the quadratic growth property (Lemma 3), we can relate the convergence in function value to the convergence of the sequence to the optimal solution.\n\n3. The bulk of our analysis is not in obtaining the linear rates, such analysis is standard and simple. Our main novelty and technical effort is in proving that the gradient orthogonal iteration, which only performs a single QR operation per iteration, indeed converges correctly, by establishing that it approximates sufficiently well the steps of PGD -- for which we prove that near the minimizer it produces only low-rank matrices. That is the bulk of the analysis, not the linear rate.",
" Dear reviewer,\n\nWe first address the weaknesses you've raised:\n1. Novelty is incremental w.r.t. [5] and [8,9]: We believe there might be a miss-understanding here. Indeed the quadratic growth result which yields the linear convergence rate is not very novel and we did not try to pretend that it is. The novelty is in the fact that all these previous results rely on SVD computation. Each such computation, when implemented efficiently, requires to iteratively perform multiple QR iterations. Here on the otherhand we have method with same convergence rate using a single QR operation per iteration. The analysis of this approach is the main novelty, and establishing its convergence by showing that it is a nearly lossless approximation of the exact-SVD-based projected gradient method. This comes with no little effort as might be indicated from Lemmas 4, 5, 9, among others. This is the key novelty. We believe that relating these two methods - a convex one and a nonconvex one is a very novel idea which could be interesting to further developments.\nAdditional novelty is in the proof of Theorem 6 (in Appendix C) that gives sublinear rates for projected gradient in case exact strict complementarity does not hold, but some relaxed notions of it. This is a significant and non-trivial at all extension of [9].\n\n2. Results are weak because warm-start is required: this is indeed true for the gradient orthogonal iteration and projected gradient, but this is very much expected, since these method solve a nonconvex problem (they always maintain a rank-k matrix)! It should not be expected that, unless the problem is extremely well conditioned so the ball is so large, that they would converge rapidly from any initialization. This is also perfectly well aligned with previous works on generic nonconvex optimization (i.e., works that do not consider very specific models and data), see for instance [2]. Of course it may be expected that in practice these would work from very simple initializations (as we have in our experiments), but here we are interested in worst case performance. \nWe refer you also to Figure 3 in the appendix that examine the empirical convergence from a random initialization and shows it to be considerably slower (sublinear).\nMoreover, in this work we wish to take a somewhat of a ''continuous optimization'' approach to subspace recovery (as opposed to statistically-motivated approaches), where we look for a quite general condition (strict comp.) that will render quite general problems (i.e., any smooth convex objective, as opposed to very specific objectives such as quadratic with data generated by some well known process), well-posed, and because of this we have these requirements.\nDo note that the Frank-Wolfe method converges globally, but with the price that it does not maintain a rank-k matrix, but a convex combination of such.\n\n3. Experiments: We are mainly interested in the theory of efficient first-order optimization methods. PGD is often the method of choice for smooth convex objectives and so we focus on demonstrating on two robust pca models that indeed using very simple initalizations that indeed PGD converges correctly while maintaining only a rank-k matrix, and that the gradient orthogonal iteration method indeed approximates the steps of PGD very well. If we present the Y-axis in log-scale (as we shall do in our final version) then it also becomes very clear that indeed both methods converge linearly. 
Doing more comprehensive experiments is beyond the scope of our work since we are mostly interested in understanding the theory and per the discussion above, we think we have sufficient theoretical results. \n\nAnswers to questions:\n1. Not it cannot happen that the rank is smaller than k, we shall comment on it in the paper.\n2. There is no simple way to write this update as a closed formula.\n3. This direction is not true. It can be the case that there exists an optimal solution of rank k and eigen-gap is zero, for instance if the gradient is zero at the optimal solution.\n4. We do not think its true in general: consider the simple problem of computing the leading eigenvector of a symmetric matrix, which corresponds to a linear function f in our setting. Any eigenvector is a stationary point of the nonconvex fomulation, while the convex relaxation is always tight.\n5. We refer you to Figure 3 in the appendix which shows such an experiment. It can be seen that while we have convergence from a random initialization, it is much slower, hinting that in practice indeed sublinear convergence may be expected, but this well require very different arguments to prove and is beyond the scope of this work. \n\nGiven all of the above, we kindly ask you to seriously reconsider your score. We believe our novel approach of analyzing the gradient orthogonal iteration and the corresponding analysis, and its connection to strict complementarity, could be of interest to many.",
" The paper considers the problem of minimizing a convex function f of symmetric n-by-n matrices X over the space P(n,k) of orthogonal projectors on all k-dimensional subspaces of R^n. A special case is PCA but the formulation allows for generalized forms of PCA (robust, sparse, etc). The natural iterative method to approach this problem is projected gradient descent applied to the original problem min f(X) s.t. X\\in P(n,k), in which we need to project on P(n,k) (which requires SVD) in every iteration. The paper proposes a low-rank iterative method applied to the problem min f(Q*Q') s.t. Q'*Q=I, Q is n-by-k, which requires only one QR factorization in every iteration (this can be seen as approximating SVD by doing only one iteration of the orthogonal iterations method). To analyze this method, the paper also considers an SDP relaxation of the original problem, in which P(n,k) is replaced with its convex hull F(n,k) (the Fantope), which has an SDP description.\n\nThe contribution of the paper is a theoretical analysis of the low-rank iterative method. The key concept is the \"eigengap\" of a solution X to the SDP relaxation, which is the difference between the k-th and (k+1)-th largest eigenvalues of the gradient \\nabla f(X) (a symmetric matrix). It is proved that existence of an optimal solution to the SDP with non-zero eigengap is equivalent to strict complementarity of a pair of primal-dual solution to the SDP (Theorem 1) and implies that the SDP has a unique optimum with rank k (hence, the SDP relaxation is tight) (Theorem 2).\n\nThe main contribution is a convergence result for the low-rank iterative method (Theorem 3), which says the following: If the SDP relaxation have a non-zero eigengap and the initial point Q1 of the method is such that Q1*Q1' is within a given (small) ball around the optimal SDP solution, then the method converges linearly to the global optimum of the original problem.\n\nAlmost all the rest of the paper is devoted to a sketch of the proof of Theorem 3. At the end of the paper, there are brief numerical experiments on random data, showing that the convergence rates of the projected-gradient method and the low-rank method are almost the same (but, as written above, the latter needs only one QR factorization rather than SVD factorization in each iteration). This is an interesting formulation of the low-rank method, which requires only one QR (rather than SVD) factorization per iteration.\n\nThe theoretical analysis is interesting. However, its novelty is only incremental, given the prior work [5] and [8,9].\nMoreover, its impact is rather weak: if the initial point of the low-rank method is not within a small ball of the optimal solution (which we of course do not know without solving the SDP), the results (Thm 3) give no guarantees of convergence to the global optimum.\n\nThe text is easy to understand, but its clarity and organization should be improved. In particular, it would be more helpful if all longer proofs were moved to a supplement and the saved space was used to explain the consequences of the theoretical results in more detail. E.g., the questions I raise in the \"Questions\" part could be discussed.\n\nThe numerical experiments are very brief. I believe that more numerical experiments might clarify some issues as I suggest in the \"Questions\" part. 
Moreover, experiments on data from real applications might better show the properties of the method.\n\nIn summary, I think that the main ideas are interesting but I am not sure if the current contribution and its form are good enough for this conference. I believe that giving the paper more time and effort to develop would result in a better and more mature paper.\n\nMinor remarks:\n- Perhaps, it would be useful in the theoretical analysis to consider one more iterative method, namely projected gradient method applied to the problem min f(Q*Q') s.t. Q'*Q=I, Q is n-by-k. This method requires orthogonal projection to the space of n-by-k matrices with orthonormal columns (i.e., the nearest linear isometry problem), which requires SVD. Perhaps, method (3) can be more directly seen as an approximation to this method than to method (2). But I may be wrong, this is just a suggestion.\n- I'd use a different symbol than \\partial for the total derivative in (3), such as {\\rm d}.\n- Typo in Lagrange function below line 101, (X) should be f(X).\n- In Definition 1, I believe that disjunction (\\vee) should be conjunction (\\wedge). 1. Can it happen (assuming a nonzero eigengap assumption) that the rank of Z_{t+1} is smaller than k in (3)? If so, the QR factorization must do some random choice. Is the iteration valid then?\n\n2. Can fixed points of iterations (3) be described in some simple way, such as a closed formula?\n\n3. Does there hold also the opposite implication in Theorem 1, i.e., is it true that if a (optimal) solution to the SDP (4) has rank k then it has a nonzero eigengap?\n\n4. Does it hold that if the SDP relaxation (4) is tight (i.e., some of its optimal solutions has rank k), then problem (1) has no local minima that are not global minima (where \"local minimum\" has the obvious meaning here as a local minimum of a function on a set)? Note, this could be easily supported or disproved by numerical simulations.\n\n5. A very related question: Suppose some optimal solution to the SDP has a nonzero eigengap but the initial point Q1 to method (3) does not satisfy the other assumption of Theorem 3 (i.e., Q1*Q1' is not within the given ball around the optimal solution to the SDP). Does it mean that method (3) may not converge at all, can converge to the global optimum but with sub-linear rate, or converge to a point that is not the global optimum? Some of these options could (and should) also be supported or disproved by numerical experiments. The impact of the main theoretical result (Theorem 3) is rather weak, see above.",
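One natural reading of the low-rank iteration (3) that this review describes, a gradient step on $Q \mapsto f(QQ^\top)$ followed by a single QR factorization, is sketched below in NumPy; the concrete objective (standard PCA, i.e. a linear f) and the step size are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def qr_gradient_iteration(A, k, eta=0.1, iters=200, seed=0):
    """Gradient step on f(Q Q^T) = -<A, Q Q^T> followed by one QR per iteration.

    For this linear f the update is Q <- QR(Q + 2*eta*A*Q), so the scheme
    reduces to a damped orthogonal (subspace) iteration for the top-k
    eigenspace of the symmetric matrix A.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    Q, _ = np.linalg.qr(rng.standard_normal((n, k)))
    for _ in range(iters):
        G = -2.0 * A @ Q            # Euclidean gradient of f(QQ^T) w.r.t. Q
        Z = Q - eta * G             # gradient step
        Q, _ = np.linalg.qr(Z)      # single QR factorization per iteration
    return Q                        # orthonormal basis; Q @ Q.T is the projector
```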
" In this work, the authors studied the problem of finding a $k$-dimensional subspace problem: $\\min f(\\mathbf{Q}\\mathbf{Q}^T)\\ \\text{s.t.}\\ \\mathbf{Q}^T\\mathbf{Q} = \\mathbf{I}$, where $f$ is convex and smooth. They studied the relationship between the convex relaxation-based method and the non-convex method with the gradient orthogonal iterations under a strict complementarity assumption. Based on this, they showed that the non-convex gradient method converges locally with a linear rate. They also showed the linear convergence of the non-convex projected gradient method and the Frank-Wolfe method for solving this problem. Strengths: (${\\bf 1}$) Subspace recovery is a fundamental problem in machine learning. This paper studied the convergence rate of different methods for solving the subspace problem: $\\min f(\\mathbf{Q}\\mathbf{Q}^T)\\ \\text{s.t.}\\ \\mathbf{Q}^T\\mathbf{Q} = \\mathbf{I}$. Unlike existing approaches that assume $f$ admits a special structure or considers an underlying generative model, they proposed a deterministic condition, i.e., strict complementarity, which seems new and interesting. \n\n(${\\bf 2}$) Overall, the paper is well-written and easy to follow. I think that the authors made good enough technical contributions to the convergence analysis of the first-order methods for solving subspace optimization problems.\n\n(${\\bf 3}$) This work bridges convex and non-convex methods for subspace optimization problems via a strict complementarity condition. In particular, linear convergence can be established through this bridge. \n\nWeaknesses: (${\\bf 1}$) In Assumption 1, the authors imposed an eigen-gap condition, i.e., $\\lambda_{n-k}(\\nabla f(\\mathbf{X}^*)) - \\lambda_{n-k+1}(\\nabla f(\\mathbf{X}^*)) \\ge \\delta$ for some $\\delta > 0$, on an optimal solution $ \\mathbf{X}^*$ to the problem: $\\min f(\\mathbf{X})\\ \\text{s.t.}\\ \\mathbf{I} \\succeq \\mathbf{X} \\succeq \\mathbf{0},\\ \\mathrm{Tr}( \\mathbf{X}) = k$. From the authors' introduction, this condition is closely related to the complementarity condition of the KKT system of Problem (4) (see Theorem 2). It looks strange that studying convergence rate of a method for solving a problem by imposing condition onto its optimal solution. On the other side, the authors mentioned the error-bound condition. According to [29], the error-bound condition holds for many general problems. However, the authors in this work cannot give a concrete example so that Assumption 1 holds like [29]. \n\n(${\\bf 2}$) In general, the numerical simulations are not convincing. For example, in the left figure of Figure 1 in Section 4, the y-axis should be plotted on a log scale. The current scale cannot demonstrate the linear convergence of the tested method. Besides, the authors should also report the convergence rate of the Frank-Wolfe method to support Theorem 5. \n\n \n(${\\bf 1}$) The authors mentioned that strictly complementarity is closely related to the error-bound condition (see Ref [29, 6]). It is known that the error-bound condition also implies linear convergence of the first-order method under some mild conditions. Does Assumption 1 imply the error bound of Problem (4)? It would be great if the authors elaborate on the relationship between them. \n\n(${\\bf 2}$) According to Theorems 3 and 4, the authors only showed the linear convergence of function values. 
According to the results in [29] and $\\textbf{Attouch and Bolte (2009)}$, they can show linear convergence of the sequence generated by the studied methods. Could the authors show the linear convergence of the sequence?\n\n$\\textbf{Attouch and Bolte (2009)}$: On the convergence of the proximal algorithm for nonsmooth functions involving analytic features. Mathematical Programming, 116(1), 5-16.\n\n(${\\bf 3}$) Lemma 4 shows quadratic growth of Problem (4). According to [6], quadratic growth is equivalent to the error bound under a mild condition. Maybe, the following proofs could be simplified using the results in [6] and [29] when the error bound of Problem (4) is available. Please check it. Yes",
" For optimization problem of finding a $k$-dimensional subspace, iterative schemes derived from both non-convex and convex formulations are developed in the literature. In this paper, based on a strict complementarity condition, the authors proved local linear convergence of several schemes, including non-convex projected gradient descent and gradient orthogonal iteration. For the obtained convergence results, the starting point of the schemes need to be close enough to the solution. The main contribution or strength of the paper is theoretical analysis. That is, proving local linear convergence of non-convex projected gradient descent, gradient orthogonal iteration and Frank-Wolfe methods, which is based on connecting the non-convex model and convex one. \n\nTo me, there are several limitations of the obtained results:\n - Assumption 1 requires a parameter which cannot be verified before solving the problem. What happens if $k$ is chosen relatively large enough such that $\\lambda_{n-k}=\\lambda_{n-k+1} = 0$, should this be a problem to concern?\n - All the convergence rate theorems require the starting point of the numerical schemes to be closely enough to the solution, which is not discussed how to achieve this in practice. Hence making the local convergence result limited. It is also not elaborated what exactly is the \"warm-start\" strategy.\n - Lemma 3 the \"quadratic growth\" assumption basically is the weaker strong convexity assumption, the authors should discuss its connections/differences compared to existing approaches. A small question, how to choose $k$ in practice?\n\nThe paper is not well-written overall, and the authors should polish the paper thoroughly. \n - line 5, what does \"among others\" mean?\n - line 7 \"or ,\" to \", or\"\n - line 12, the whole sentence \"Our result ...\" i guess needs rephrase.\n - line 24 \"include among other...\"\n - For the footnotes, please unify the starting letter in capital. \n - line 34\"] however\" to \"]. However\"\n - \"aka\" to \"a.k.a.\"\n - line 59 \"efficient implementation of will...\", of what will?\n - line 81, \"(2))\" and \". $^3$\"\n - line 188 $X \\in \\mathcal{F}\\_{k}$, should be $\\mathcal{F}\\_{n,k}$?\n - page 8, Lemma 6,8,9 are not appear in the main paper. Not applicable here. ",
" This paper studies the k-dimensional subspace problem, which is a nonconvex problem over the orthogonal matrix set. An efficient projection gradient method using QR decomposition is studied. The main contribution is that the strict complementarity is shown to be equivalent to the eigen-gap condition. Moreover, under the eigen-gap condition, the projection gradient method is shown to be linearly convergent. \nStrengths:\nThis paper studies a general setting of k-dimensional subspace problem. The local linearly convergence under strict complementarity is interesting.\n\nWeaknesses:\n1. The symbol << is not standard, please use $\\ll$ (\\ll) instead\n2. To use the KKT conditions, the slater's condition should be verified though it is obviously satisfied if k<n. \n3. The numerical experiments do not show the linear convergence, since the linear convergence rates are important results in this paper. \n4. The linear convergence rate of projected gradient method for k-dimensional subspace problem is extensively studied in recent years. The strict complementarity or eigen-gap condition could be strong in some settings. For example, Liu et al. 2019 studies the quadratic problem on Stiefel manifold. But the do not need that the eigenvalues are distinct [Theorem 1, Liu et al. 2019]. Some discussions will be helpful. \n\nLiu, H., So, A. M. C., & Wu, W. (2019). Quadratic optimization with orthogonality constraint: explicit Łojasiewicz exponent and linear convergence of retraction-based line-search and stochastic variance-reduced gradient methods. Mathematical Programming, 178(1), 215-262. no Not applied."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
5
] | [
"GPnUg5qciqk",
"7SQ-cJ1a0w3",
"ieEqzPAVUH-",
"sdj2Vgd8dSJ",
"YNgGeltn-1b",
"1mAKTyh9cf-",
"nips_2022_4MT-e8mn3X",
"GAE9tE7Dixe",
"ueK5JRbsiQV",
"dm8i53cfXko",
"sLucwT_3DJ",
"nips_2022_4MT-e8mn3X",
"nips_2022_4MT-e8mn3X",
"nips_2022_4MT-e8mn3X",
"nips_2022_4MT-e8mn3X"
] |
nips_2022_mSiPuHIP7t8 | GraphDE: A Generative Framework for Debiased Learning and Out-of-Distribution Detection on Graphs | Despite the remarkable success of graph neural networks (GNNs) for graph representation learning, they are generally built on the (unreliable) i.i.d. assumption across training and testing data. However, real-world graph data are universally comprised of outliers in the training set and out-of-distribution (OOD) testing samples from unseen domains, which solicits effective models for i) debiased learning and ii) OOD detection, towards the general goal of trustworthiness. In this paper, we first mathematically formulate the two challenging problems for graph data and take an initiative in tackling them under a unified probabilistic model. Specifically, we model the graph generative process to characterize the distribution shifts of graph data together with an additionally introduced latent environment variable as an indicator. We then define a variational distribution, i.e., a recognition model, to infer the environment during training of the GNN. By instantiating the generative models as two-component mixtures, we derive a tractable learning objective and theoretically justify that the model can i) automatically identify and down-weight outliers in the training procedure, and ii) induce an effective OOD detector simultaneously. Experiments on diverse datasets with different types of OOD data prove that our model consistently outperforms strong baselines for both debiasing and OOD detection tasks. The source code has been made publicly available at https://github.com/Emiyalzn/GraphDE. | Accept | The authors propose a mixture modeling approach to train GNNs so that out-of-distribution data can be properly down-weighted during training and detected during testing. The reviews were mixed, with some reviewers criticizing the technical novelty and experimental comparison. Indeed, the authors could have explained their contribution more transparently, and emphasized a bit more the new challenges in the GNN setting, which the response has largely addressed. Perhaps it is also worthwhile to discuss classic works on mixtures of experts, as well as variational Bayesian approaches (e.g. https://ieeexplore.ieee.org/document/5563102). As to the experimental comparison, I think the authors made some good explanations in the response, and it is perhaps too ambitious for anyone to compare to every possible alternative.
In the end, we think the application of the mixture modeling approach to GNNs is sufficiently interesting, and the initial experimental results appear to be encouraging. We urge the authors to further revise their work by incorporating all changes made during the response period and better positioning the contributions in their historical context. | test | [
"XPrF61b5cA-",
"9tdYv_8jUnX",
"C6vE-PFuP4f",
"-yfl6nx_Ql",
"r4xrwpqF5e",
"GHTNPM0vzl",
"cf-w9sUvkcY",
"2L7lYLEup4h",
"_XfRnkd6uGh",
"ZVfvPR3CKfB",
"on24RX4vAb8",
"QnTZHKgx3p6",
"HU809loT765",
"KW7z_mVFZQY",
"uc8-nsBTKCI",
"RQFb2L7zjaT",
"7ba4NhJdNki",
"nYg0Y4pGPnS",
"85-LwVG-3T-",
"Dg7_l9TMqUf",
"ZSSxDGr3FGT",
"CtB6b_wgtPm",
"Orf7LqzZmH1",
"28OVnYATqxs"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" While the other reviewers have acknowledged our rebuttal and raised their rating accordingly, we are wondering whether our responses have addressed your concerns properly. Your feedback will definitely help reach a more reasonable decision on our submission. Thank you!",
" While the other reviewers have acknowledged our rebuttal and raised their rating accordingly, we are wondering whether our responses have addressed your concerns correctly. Your feedback would be really appreciated and will definitely help reach an informative decision on our submission. Thank you!",
" Thank you for the positive feedbacks. We have polished our presentation according to your advice in the revised version. We believe these updates can help the readers understand our work better.",
" Thank you for the positive feedbacks. According to your suggestions, we have addressed the clarity problems in the revised version. These updates undoubtedly improve the paper. ",
" Thank you to the authors for their detailed response.\n\nRegarding the question on scaling to larger datasets, I still believe this is a potential issue with the method, but understand the authors argument about accessibility of properly split data and also the still-standard practice of focusing on small sizes for graph classification. The clarification for Q2 is appreciated, but of course I'd prefer that this part was stated more carefully in the intro - this is the author's prerogative however. \n\nResponses to Q3/4 resolved ambiguities for me.\n\nI'm happy to move this from borderline to a weak accept, 6.",
" Thank the authors for their response. The response confirms that the initial draft lacks some clarity, but most of my concerns are addressed in the rebuttal.\n\nHonestly, my expertise in GNN OOD detection is limited. The novelty claimed by the authors looks good to me, but I am not so sure whether they overclaim or not compared to SOTA related works, and also not sure whether the experiment is thorough or fair enough. I tend to not change my initial rating, but I am open to opinions from other reviewers or ACs.",
" Dear reviewers,\n\nI would like to express our sincere gratitude for your constructive advice on this paper. \n\nSince the discussion period is approaching its ending, we would be glad to hear from you about whether our rebuttal has addressed your concerns? If you have any further questions and concerns, feel free to post your comments so that we can respond to your questions and concerns.\n\nWe will really appreciate it if you could post some comments so that we can improve this paper accordingly.",
" **Q9:** What is the role of GraphDE-v? It seems that GraphDE-v is entirely outperformed by GraphDE-a.\n\n**R9:** Thank you for pointing out our negligence. As we have discussed in Section 3.2.1 (Instantiations of the Recognition Model), GraphDE-v directly assigns a learnable scalar for each sample in the training dataset, i.e. $q_\\phi(\\mathbf e_i|A_i, X_i, y_i)=\\text{Bernoulli}(\\alpha_i)$ where $\\alpha_i\\in[0,1]$ is a learnable parameter. For GraphDE-a, however, we need to compute the posterior analytically using Eq. (10). Therefore, GraphDE-a owns a substantial larger time and spatial complexity than GraphDE-v, and the latter is also easier to be implemented. Though GraphDE-a gives a tight approximation and performs better, GraphDE-v may be a considerable choice when we have limited space/time resource.\n\n**Q10:** Figure 4 is somewhat counter-intuitive. Why the ablation study shows that removing $\\mathcal L_{cl}$ makes literally no difference on the performance, while removing $\\mathcal L_{reg}$ shows more impact?\n\n**R10:** It seems that you are confused with our Figure 4.(b) and Figure 4.(c). In Figure 4.(b), the test accuracy drops significantly after we remove $\\mathcal L_{reg}$, this result is consistent with our theory that the structure estimation module acts as a regularization to detect and down-weight the outliers in the training dataset. In Figure 4.(c), as discussed in Section 4.3, we **measure the detection AUROC but not the test accuracy** after removing $\\mathcal L_{cls}$. Note that we cannot even conduct label prediction without the classification module (i.e. $\\mathcal L_{cls}$). Therefore, these two figures **cannot be compared with each other since they indeed compute a different measure**. The detection performance degradation in Figure 4.(c) is consistent with our theory that the classification and the structure estimation module are learned in a mutually-promote manner.\n\n\n### References\n\n[1] Training Deep Neural Networks on Noisy Labels with Bootstrapping, ICLR 2015.\n\n[2] Learning to Reweight Examples for Robust Deep Learning, ICML 2018.\n\n[3] Advances in Variational Inference, TPAMI 2019.\n\n[4] Deep Anomaly Detection with Outlier Exposure, ICLR 2019.\n\n[5] Energy-based Out-of-distribution Detection, NeurIPS 2020.",
" Thank you for the positive feedbacks about our framework, theory, and experiments. Hope the following responses can help relieve your concerns:\n\n**Q1:** How do we calculate the threshold in Eq. (3)?\n\n**R1:** Eq. (3) in our paper defines the pratically used OOD detector with a threshold $\\tau$. However, we emphasize that there is no need to specify $\\tau$ beforehand to evaluate the detector's performance. As stated in Section 4.1 and shown in Table 2, we adopt three widely used metrics: **AUROC**, **AUPR**, and **FPR95** to measure the OOD detection performance. For AUROC and AUPR, we continuously change $\\tau$ to get the ROC curve and PR curve of the testing dataset, then calculate the area under them as the measure. For FPR95, we adjust $\\tau$ to get 95% true positive rate and compute the corresponding false positive rate. So in this paper, we have no need to calculate the threshold. To get an OOD detector for practical use, for example, we can choose $\\tau$ that misclassifies the minimum number of testing ID and OOD data as the threshold.\n\n**Q2:** Can GraphDE capture the variation in the feature space?\n\n**R2:** Yes, different from data resampling methods [1] [2] that completely rely on the supervised loss to identify and down-weight outliers in the training set (the same role as our classification module $p_\\theta(\\mathbf y|\\mathbf X,\\mathbf A,\\mathbf e)$), we additionally introduce an structure estimation module $p_\\theta(\\mathbf A|\\mathbf X,\\mathbf e)$, which plays an important role in the learning objective as shown in Eq. (7). In this sense, as proved by our theory, GraphDE can also automatically learn to detect and down-weight those training samples with abnormal relationship between the adjacency matrix $\\mathbf A$ and node features $\\mathbf X$. This directly enables GraphDE to capture variation in the feature space.\n\n**Q3:** Line 132, why are the two distributions parameterized by the same $\\theta$?\n\n**R3:** We use one parameter $\\theta$ to model the classification module $p_\\theta(\\mathbf y|\\mathbf X,\\mathbf A,\\mathbf e)$ and the structure estimation module $p_\\theta(\\mathbf A|\\mathbf X,\\mathbf e)$ for notation simplicity. You can interpret it as $\\theta=[\\theta_{cl},\\theta_{reg}]$, where $\\theta_{cl}$ controls the distribution $p_\\theta(\\mathbf y|\\mathbf X,\\mathbf A,\\mathbf e)$, while $\\theta_{reg}$ determines $p_\\theta(\\mathbf A|\\mathbf X,\\mathbf e)$.\n\n**Q4:** There is a typo in line 134 that $p_\\theta(\\mathbf X|\\mathbf A,\\mathbf e)$ should be $p_\\theta(\\mathbf A|\\mathbf X,\\mathbf e)$.\n\n**R4:** Thank you, this is indeed a typo and it has been fixed in the updated version.\n\n**Q5:** In the last term of Eq. (7), should $p(\\mathbf e)$ be $p(\\mathbf e_i)$?\n\n**R5:** Specifically, Eq. (7) is the sample-wise unfolded form of Eq. (6). And you can see from it that the ELBO includes the KL-divergence between the posterior distribution and prior distribution of the environmental variable $\\mathbf e$. Therefore, $p(\\mathbf e)$ represents our prior knowledge about the dataset, which is **shared over all the data points** [3]. So there is no need to use one $p(\\mathbf e_i)$ for each training sample.\n\n**Q6:** Why line 181-182 mentions that $p(\\mathbf e)$ is a scalar $\\in$ [0, 1] but not a discrete distribution over ID and OOD input?\n\n**R6:** We're sorry for the incorrect expression. In fact, as you said, $p(\\mathbf e)$ is a Bernoulli distribution over the environment variable. 
In this sense, we have $p(\\mathbf e)=\\text{Bernoulli}(\\alpha)$ where $\\alpha\\in[0, 1]$ represents the portion of ID data in the dataset. We have fixed this issue in the updated version.\n\n**Q7:** Do we assume that training and testing data have the same $p(\\mathbf e)$? If test data have a different $p(\\mathbf e)$, will GraphDE still work?\n\n**R7:** We point out that $p(\\mathbf e)$ stands for the prior distribution under the probabilistic framework, and it is shared across training and testing data since it encodes our belief about the probability a data point is ID or OOD. If testing data have a different $p(\\mathbf e)$, GraphDE will still work since we actually compute a posterior distribution over the environmental variable using Eq. (8). Thus, it can distinguish those OOD samples with abnormal adjacency matrix $\\mathbf A$ or node features $\\mathbf X$.\n\n**Q8:** Do we still output a label when OOD is detected?\n\n**R8:** No, we just report an OOD warning and reject to predict on these samples, which is the same as done in a number of related works [4] [5].\n",
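R6 and R7 together describe a Bayes-rule posterior over the environment variable under a shared Bernoulli prior. The following NumPy sketch illustrates an Eq. (8)-style computation; the function name and the assumption that per-sample log-likelihoods under the ID and OOD components are available are for illustration only.

```python
import numpy as np

def posterior_id_prob(log_p_in, log_p_out, alpha):
    """Posterior probability that a sample is in-distribution (e = 1):
        q(e=1 | A, X) = alpha * p_in / (alpha * p_in + (1 - alpha) * p_out),
    with Bernoulli prior p(e = 1) = alpha, computed in log-space for
    numerical stability."""
    log_num = np.log(alpha) + log_p_in
    log_den = np.logaddexp(log_num, np.log(1.0 - alpha) + log_p_out)
    return np.exp(log_num - log_den)

# The first sample fits the ID component far better, so its posterior
# probability of being in-distribution is close to 1.
print(posterior_id_prob(np.array([-10.0, -50.0]),
                        np.array([-40.0, -12.0]), alpha=0.9))
```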
" **Q6:** Is there a typo in line 134? Because the assumption is that $\\mathbf A$ is generated from $\\mathbf X$.\n\n**R6:** Thank you, this is indeed a typo and it has been fixed in the updated version.\n\nAlso thank you for pointing out our ignorance on the negative impact of GraphDE. As we focus on developing trustworthy GNNs, we believe that the negative impacts of our work are small compared to its contributions. However, it can still raise problems like data fairness due to its re-sampling strategy to conduct debiasing. Besides, its robustness as an OOD detector should be studied in-depth as future work, since malicious attackers may fool GraphDE to treat OOD data as ID data, leading to potential performance degradation in practice. We have added these discussions in our newly submitted version.\n\n### References\n\n[1] A Flexible Generative Framework for Graph-based Semi-supervised Learning, NeurIPS 2019.\n\n[2] Graph Stochastic Neural Networks for Semi-Supervised Learning, NeurIPS 2020.\n\n[3] Discovering Invariant Rationales for Graph Neural Networks, ICLR 2022.\n\n[4] OOD-GNN: Out-of-Distribution Generalized Graph Neural Network, Arxiv.\n\n[5] Deep Graph-level Anomaly Detection by Glocal Knowledge Distillation, WSDM 2022.\n\n[6] Generalizing Graph Neural Networks on Out-of-Distribution Graphs, Arxiv.",
" | Dataset | | MNIST-75sp 0.3,0.6 | | | Collab 45,80,100 | |\n| -------- | :--------------: | :----------------: | :----------------: | :--------------: | :--------------: | :----------------: |\n| Detector | AUROC $\\uparrow$ | AUPR $\\uparrow$ | FPR95 $\\downarrow$ | AUROC $\\uparrow$ | AUPR $\\uparrow$ | FPR95 $\\downarrow$ |\n| MSP | 62.37±2.96 | 60.71±1.83 | 88.60±2.71 | 51.37±4.24 | 53.19±3.68 | 91.00±2.41 |\n| WL+OCSVM | 75.35 | 60.72 | 32.75 | 64.61 | 60.39 | 64.80 |\n| WL+LOF | 61.62 | 57.22 | 94.20 | 67.72 | 62.64 | 81.40 |\n| PK+OCSVM | 72.26 | 59.95 | 47.80 | 64.57 | 62.19 | 70.60 |\n| PK+LOF | 61.19 | 58.51 | 92.55 | 64.25 | 58.76 | 91.20 |\n| OCGIN | 65.07±2.55 | 60.13±2.45 | 77.39±5.55 | 70.48±2.72 | **71.77±1.84** | 86.70±0.71 |\n| GraphDE | **94.53±4.63** | **93.78±5.09** | **19.24±9.33** | **72.15±2.27** | 68.46±2.54 | **64.40±0.41** |\n\n**Q4:** More baseline methods should be considered for comprehensive comparison.\n\n**R4:** Thank you for pointing out the baselines for us to compare with. First, we wish to resolve the misunderstanding of relation between [4] [6] and our paper. Specifically, these two papers focus on the topic of OOD generalization, which is orthogonal to our work, as we have discussed in Appendix B. To summarize, OOD generalization aims at training a model that can generalize to the unknown testing distribution from the limited training data. However, debiased learning wishes to identify the outliers (harmful examples) in the training dataset and mitigate their bad effects during training. That's why we do not adopt these baselines in our paper. Next, GLocal [5] is yet another interesting graph OOD detection baseline that we ignored since it is just published in this year's WSDM. Typically, it trains one GNN to predict another GNN with randomly initialized network weights to learn graph representations that capture both local and glocal information of graphs. We adapt its published code (using default hyperparameters) to run on two of our datasets, with the results in the following table. As we can see, GraphDE outperforms GLocal across the 6 metrics on the 2 datasets, which further proves the detection capability of our GraphDE.\n\n| Dataset | | MNIST-75sp | | | Collab | |\n| -------- | :--------------: | :-------------: | :----------------: | :--------------: | :-------------: | :----------------: |\n| Detector | AUROC $\\uparrow$ | AUPR $\\uparrow$ | FPR95 $\\downarrow$ | AUROC $\\uparrow$ | AUPR $\\uparrow$ | FPR95 $\\downarrow$ |\n| GLocal | 80.53±0.38 | 78.72±1.11 | 62.11±6.43 | 66.64±1.56 | 62.00±2.08 | 73.68±1.77 |\n| GraphDE | **93.14±5.42** | **92.95±5.33** | **29.86±9.54** | **70.54±0.34** | **66.73±0.13** | **66.28±1.08** |\n\n**Q5:** Is that practical to define larger graphs in a dataset (i.e. Collab) or the images with gaussian noise (i.e. MNIST-75sp) as the OOD samples? It is hard to say these samples are generated from a different distribution.\n\n**R5:** Before our work, there has already been a series of works [3] [4] focus on the OOD generalization problem for GNNs. These works have used the graph size of Collab, and gaussian noise of MNIST-75sp to construct distribution shifts, which is consistent with this paper. Therefore, it's reasonable for us to get OOD samples by these factors. More intuitively, larger graphs have a different distribution of adjacency matrix compared to smaller graphs. Besides, images with gaussian noise will end up with different node features from the original graphs. 
Notably, [5] proposes to treat the graphs in minor class as the OOD samples. This is also a potential way for us to construct distribution shifts.\n",
" Thank you for your time and valuable suggestions. We are glad that you appreciate our topic, methodology, and experiments. We also add new experiment results and explanations in the hope that they can address your concerns:\n\n**Q1:** The distribution of features $p(\\mathbf X)$ should be modelled since ID and OOD data can obviously have different feature distributions. \n\n**R1:** Thank you for proposing this important question. It is undoubted that ID and OOD data can have different feature distributions and we need to capture this difference to seperate them out for better debiasing/detection performance. However, $p(\\mathbf X)$ is not necessarily to be modelled since the conditional distribution $p(\\mathbf A|\\mathbf X)$ is a function of ajacency matrix and node features, i.e. it can capture both the distribution shifts of $\\mathbf A$ and $\\mathbf X$. Besides, as shown in Eq. (6), we are maximizing the log probability conditional on $X$ so there is no need to model $p(\\mathbf X)$. This technique has been widely adopted in the graph learning community [1] [2], perhaps both for its simplicity, and for that there rarely exists suitable model for the node feature distribution $p(\\mathbf X)$. We will leave further study for future work.\n\n**Q2:** The instantiation of OOD structure $p_0$ with a simple $p(a=1)=\\frac{1}{2}$ is not practical. More complex or learnable method should be used for it. \n\n**R2:** We agree that we can use more complex or learnable model to instantiate the outlier component. However, we use $p(\\mathbf a=1)=\\frac{1}{2}$ in this paper for its simple implementation and promising empirical power as proved in the experiment section. For your concern that real-world OOD data are universal and cannot be captured by this simple distribution, we emphasize that **the outlier component does not necessarily need to perfectly fit the OOD distribution.** As we have stated in Proposition 2, GraphDE can learn to assign higher probability for ID data if $p_\\theta(\\mathbf A|\\mathbf X)$ and $p_\\theta(\\mathbf y|\\mathbf A,\\mathbf X)$ (the ID components) can better fit ID data than the outliers. This does not add assumptions on the OOD component (but undoubtedly, a better fitted OOD component can help better learn the distribution of environment variable). Also, if the ID data also follows \"$p(\\mathbf a=1)=\\frac{1}{2}$\", we can distinguish between ID and OOD data as long as $p_\\theta(\\mathbf A|\\mathbf X)$ can better fit the ID distribution. If ID and OOD data have the same adjacency matrix distribution, then the distribution shifts should only lie in node features $\\mathbf X$ and labels $\\mathbf y$, which can also be captured by GraphDE.\n\n**Q3:** There is potential leakage in the evaluation of OOD detection.\n\n**R3:** Thank you for raising this considerable question. As what you have said, we make the training outliers and OOD testing samples orthogonal (i.e. do not intersect with each other) this time. The results are shown in the following table (these results will be updated in the final version). Specifically, \"MNIST-75sp 0.3,0.6\" denotes that we add Gaussian noise with a mean of 0.3 to the training outliers, and Gaussian noise with a mean of 0.6 to the OOD testing samples; \"Collab 45, 80, 100\" represents that we treat graphs with 45-80 nodes as ID data, 80-100 as training outliers, and those with more than 100 nodes as OOD testing samples. The other settings are kept the same as in the main text. 
The table shows that GraphDE still outperforms the baselines on 5 out of 6 metrics on these two datasets. Besides, the performance of GraphDE is comparable to or even better than in the original paper, proving the excellent detection power of GraphDE. We will add these part of results to the paper in the revised version.",
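A small sketch may help clarify R2's point that the $p(\mathbf a=1)=\frac{1}{2}$ outlier component is a flat baseline rather than a fitted OOD model; the edge count below assumes an undirected graph without self-loops, which may differ from the paper's exact convention.

```python
import numpy as np

def outlier_structure_loglik(A):
    """Log-likelihood of an adjacency matrix under the simple outlier
    component: every candidate edge is an independent Bernoulli(1/2),
    so each of the m possible edges contributes log(1/2) whether or not
    it is present, giving a constant m * log(1/2)."""
    p = A.shape[0]
    m = p * (p - 1) // 2  # candidate edges, assuming no self-loops
    return m * np.log(0.5)
```

Because this value is the same for every graph of a given size, the inferred environment variable is driven entirely by how well the fitted ID component explains a graph, which is the condition invoked in Proposition 2.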
" **Q5:** Missing related works on graph debiased learning. \n\n**R5:** Thank you for pointing out the missing related works for our paper. In fact, we have surveyed these two papers beforehand. **The important term \"bias\" does appear in these two papers, but has entirely different meaning to our paper**. Specifically, they are studying about **the training sample selection bias**. In SR-GNN [1], the authors claim that the bias in the sampling process to select nodes for training can create distributional differences between training and testing set, and they use a regularization and an instance reweighting component to address this issue. In DGNN [2], the authors focus on the label selection bias. Specifically, their datasets are created by biased select nodes from different classes, or select equal but small number of nodes from each class. And they resolve this issue by their proposed differentiated decorrelation regularizer in a causal view. We also focus on the graph-level but not node-level classification tasks in [1] [2]. In this paper, however, the \"bias\" is more correlated to the meaning in [3]. Specifically, as we discussed in Section 1, **our \"bias\" denotes the outliers (harmful samples)** that will skew the training process in the training data. Therefore, our debiased learning is to identify the outliers and mitigate their effects to promote the classification accuracy on ID data, different from OOD generalization in these two papers.\n\nAlso thank you for pointing out the minor problems and we will fix the issues upon revision/publication.\n\n\n### References\n[1] Shift-Robust GNNs: Overcoming the Limitations of Localized Graph Training Data, NeurIPS 2021.\n\n[2] Debiased Graph Neural Networks with Agnostic Label Selection Bias, TNNLS 2022.\n\n[3] Resolving Training Biases via Influence-based Data Relabeling, ICLR 2022.\n\n[4] Semi-Supervised Classification with Graph Convolutional Networks, ICLR 2017.\n\n[5] Graph Attention Networks, ICLR 2018.\n\n[6] Robust Variational Autoencoders for Outlier Detection and Repair of Mixed-Type Data, AISTATS 2020.\n\n[7] A Flexible Generative Framework for Graph-based Semi-supervised Learning, NeurIPS 2019.",
" | Model | 0 | 0.02 | 0.05 | 0.1 | 0.15 | 0.2 | 0.25 | 0.3 |\n| --------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |\n| GIN | 76.78±2.87 | 75.53±1.75 | 72.20±6.66 | 73.26±4.26 | 73.14±2.84 | 73.97±2.43 | 72.60±1.99 | 71.91±5.25 |\n| GraphDE-v | 77.00±2.91 | 76.92±1.16 | 74.78±2.70 | 75.47±2.67 | 74.37±1.33 | 74.90±2.32 | **75.67±1.95** | **75.89±1.23** |\n| GraphDE-a | **77.18±3.08** | **77.20±2.26** | **76.02±1.81** | **76.30±2.52** | **75.36±0.43** | **76.45±1.00** | 75.60±1.07 | 75.73±2.17 |\n\nc) Thank you for pointing out the new testbed for our GraphDE. In fact, we have tried GraphDE on `ogbg-molhiv` (it is splitted according to different scaffolds) beforehand. Our conception is to use molecules from different scaffolds to serve as ID and OOD data. However, the train, valid, and test datasets provided by OGB are mixed with molecules from different scaffolds and cannot be captured by one distribution. This may be conflict with our assumption of ID and OOD data, making the dataset unsuitable for our experiments (and we get unsatisfying results as expected). There may exist certain interface that we can curate molecules from different scaffolds to re-conduct the experiment (our DrugOOD dataset is just another molecule dataset providing such an interface), but we leave it for future work due to the limited time of rebuttal.\n\n**Q4:** The writing of the paper could be improved. a) Prop 4.1 and Prop 4.2 should be defined in mathematical forms. b) The claim that modules are optimized in a mutually-promoting manner is not verified in experiments. c) The problem formulation in Section 2 seems to be literally identical to the existing work EERM. \n\n**R4:** a) Thank you for pointing out this problem of rigorousness. We have re-formulate the two propositions as follows:\n\n**Proposition 1.** 1) The learning objective for GraphDE is in a re-weighted form when $q_\\phi(\\mathbf e|\\mathbf A,\\mathbf X,\\mathbf y)$ is instantiated as a Bernoulli distribution, with $q_\\phi(\\mathbf e_i=1|A_i,X_i,y_i)$ acting as a weight for the $i$-th sample; 2) Given the ideal recognition model $q_\\phi^*$ that gives $q_\\phi^*(\\mathbf e=1|(A, X, y)\\in\\mathcal D_{in})=1$ and $q_\\phi^*(\\mathbf e=1|(A, X, y)\\in\\mathcal D_{out})=0$, the generative models can learn to best fit the ID data.\n\n**Proposition 2.** 1) Assuming the generative models fit to the ID data, i.e. $p_\\theta(A|X\\in\\mathcal D_{in})\\geq p_\\theta(A|X\\in\\mathcal D_{out})$ and $p_\\theta(y|(A, X)\\in\\mathcal D_{in})\\geq p_\\theta(y|(A, X)\\in\\mathcal D_{out})$, the recognition model will learn to predict $q_\\phi(\\mathbf e=1|(A,X,y)\\in\\mathcal D_{in})\\geq q_\\phi(\\mathbf e=1|(A,X,y)\\in\\mathcal D_{out})$; 2) Given optimal generative models that best fit the ID data and perform randomly on outliers, there exists a recognition model $q_\\phi^*$ which yields the minimal objective while ideally predict the environment variable.\n\nThe corresponding propositions in the paper have also been revised in our newly updated version. Propositipn 1.1 is left unchanged as we believe the terms \"re-weighted\" and \"weight\" are more friendly to readers to understand the rationale of GraphDE. For Proposition 2.2, we choose to state it directly in language, since the mathematical definitions are clear from the context.\n\nb) Sorry for causing the ambiguity. In fact, these two claims have both been discussed in our main text in Section 4.3. 
**For mutually-promotion, you can refer to Figure 4.(b) and Figure 4.(c)**. Specifically, we study the debiasing performance with/without structure estimation module in Figure 4.(b) and find that the test accuracy drops over 6% at biased ratio 0.25. In Figure 4.(c), we study the detection performance with/without the classification module and find that the AUROC drops approximately 3% at biased ratio 0.3. These two results support our claim that the composed modules in GraphDE are optimized in a mutually-promoting manner. **For the inferred environments, you can refer to Figure 5.(b)**. The caption \"VI probability\" on the x-axis denotes $q_\\phi(\\mathbf e=1|(A,X,y)\\in\\mathcal D^{tr})$ and we have visualized the distribution of the inferred environment variable on SPMotif and DrugOOD. Typically, we can see that the OOD data are generally assigned with a lower probability score than the ID data, which suggests that GraphDE has taken effect during the training process. You can learn more details referring to our discussions in Section 4.3.\n\nc) We are really sorry for the oversight. We have revised our presentation in the newly updated paper to compensate for this issue.",
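The re-weighted form claimed in Proposition 1.1 can be illustrated with a short PyTorch sketch; this shows only the weighting mechanism, not the full objective of Eq. (7), which also contains the structure-estimation and KL terms.

```python
import torch

def reweighted_loss(per_sample_losses, id_probs):
    """Scale each sample's loss by w_i = q_phi(e_i = 1 | A_i, X_i, y_i),
    so suspected outliers (small w_i) contribute little to the gradient.
    Weights are detached so they act as fixed coefficients in this term."""
    return (id_probs.detach() * per_sample_losses).mean()
```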
" Thank you for your time and thorough reviews. We are happy that you appreciate our motivation, methodology, and in-depth studies. Here are our responses to your problems:\n\n**Q1:** The novelty of the paper is somewhat limited as VI and GNNs are both heavily studied in the literature. The method therefore seems hardly connected with graph data. \n\n**R1:** We agree that VI and GNNs are two fields that have been deeply studied in the literature. However, we emphasize that **this paper's main novelty does not lie in these two conventional methodologies**. As we have claimed in Section 1, in this paper, we focus on the debiasing and OOD detection problems for GNNs: 1) Most importantly, we model the graph generative process and therefore propose a unified probabilistic framework to define and tackle these two problems simultaneously. 2) Besides, through introducing a two-component structure for the generative models, we induce a novel learning objective and justify that GraphDE can identify and down-weight outliers during training while provide an OOD detector on test set. For your concern that the method seems hardly connected with graph data, similar to works [1] [2] that utilizes generative perspectives to model graph data, **our generative process modelling is tightly correlated with graphs**. Potentially this method can be extended to dealing with CV or NLP datasets (we regard it as a strength rather than a weakness), but this may need non-trivial efforts to re-formulate the generative and recognition models. \n\n**Q2:** The authors should provide time complexity analysis or the time cost in practice. \n\n**R2:** Thank you for the problem. Similar to existing GNN [4] [5] and VI [6] [7] works, it may be not so common to measure the time complexity of these modules, as well as our GraphDE. To resolve your concern of GraphDE's efficiency, we provide the practical time cost as follows:\n\n| Backbone (GAT) | DropEdge | GraphDE-v | GraphDE-a | GRAND |\n| -------------- | -------- | --------- | --------- | ------- |\n| 0.2114s | 0.5701s | 2.0962s | 2.6869s | 6.0335s |\n\nThe table reports the training time per epoch on DrugOOD. More specifically, the maximum training epoch is set as 400, and the models usually converge and early stop at around 150 epochs. So it usually takes at around 5min to train our GraphDE model, which can achieve a 5% test accuracy improvement over the backbone. Relatively, we believe this is a valuable time-accuracy trade-off. Besides, we find that GraphDE-v is obviously faster than GraphDE-a, this is because it utilizes simple learnable scalars during training and does not need to calculate the posterior analytically. So this can be a good choice if we have limited time resource. In comparison to other two plug-in modules, we find that GraphDE is much faster than GRAND. Besides, it is slower than DropEdge but with much better testing performance.\n\n**Q3:** The experiments are not entirely convincing. a) We need to compare with OOD baselines such as IRM, DRO, and DIR. b) We need to consider the more popular and powerful backbones such as GIN. c) We need to conduct futher experiments on the OGB datasets.\n\n**R3:** a) Thank you for proposing new baselines to further complete our work. However, these papers focus on the topic of OOD generalization, which is orthogonal to our work, as we have discussed in Appendix B. Specifically, OOD generalization aims at training a model that can generalize to the unknown testing distribution from the limited training data. 
However, debiased learning wishes to identify the outliers (harmful examples) in the training dataset and mitigate their bad effects during training. That's why we do not adopt these baselines in our paper.\n\nb) Thank you for proposing GIN as another backbone to test our GraphDE. We acknowledge that GIN is a more powerful baseline than GCN and GAT (also shows in the experiment results) and is widely adopted in the literature. So we add new debiased learning experiment results for GIN on the DrugOOD dataset as shown in the following table. The debiased setting is the same as in the original paper (e.g. we use two layers with hidden size 64). As shown in the table, GraphDE can consistenly improve the test accuracy of GIN across all the biased ratio. In particular, it can have an improvement at around 4% at biased ratio 0.3. These results can strongly support the debiasing power of GraphDE. \n",
" ### References\n\n[1] Deep Anomaly Detection with Outlier Exposure, ICLR 2019.\n\n[2] Energy-based Out-of-distribution Detection, NeurIPS 2020.\n\n[3] Laplacian Eigenmaps and Spectral Techniques for Embedding and Clustering, NeurIPS 2002.\n\n[4] A Flexible Generative Framework for Graph-based Semi-supervised Learning, NeurIPS 2019.\n\n[5] Graph Stochastic Neural Networks for Semi-Supervised Learning, NeurIPS 2020.",
" Thank you for your time and valuable advice. We are glad that you acknowledged our insight, experiments, and novelty. Here are our responses to your questions:\n\n**Q1:** Results on more updated large scaled data (OGB) would help demonstrate that the binary environment variable approach doesn't break down at scale. \n\n**R1:** Thank you for pointing out the new testbed for our GraphDE. In fact, we have tried GraphDE on `ogbg-molhiv` (it is splitted according to different scaffolds) beforehand. Our conception is to use molecules from different scaffolds to serve as ID and OOD data. However, the train, valid, and test datasets provided by OGB are mixed with molecules from different scaffolds and cannot be captured by one distribution. This may be conflict with our assumption of ID and OOD data, making the dataset unsuitable for our experiments (and we get unsatisfying results as expected). There may exist certain interface that we can curate molecules from different scaffolds to re-conduct the experiment (our DrugOOD dataset is just another molecule dataset providing such an interface), but we leave it for future work due to the limited time of rebuttal. Besides, it is worth noting that we are focusing on graph-level classification tasks. These tasks generally deal with a set of graphs with a relatively small number of nodes (10~1k), and will not encounter the scalability issue (which is common for node-level tasks) since it can train the model through mini-batch optimization. \n\n**Q2:** In line 42, do you actually agree with the statement that test time is just about feature/covariate shift? \n\n**R2:** Sorry for causing the ambiguity. Typically, we agree that joint distribution shift is also a test time issue, and GraphDE also models the joint distribution of $(\\mathbf A,\\mathbf X,\\mathbf y, \\mathbf e)$ for the testing data as in Eq. (8) and Eq. (9). In line 42, however, we mention that \"**OOD testing samples are solely determined by their features**\" under the semantics of OOD detection. That is, we know about both the features and labels for the training data, and the **training outliers can be derived from abnormal features or flipping labels**. But for the testing data, since we have no idea of their ground truth labels, we need to **determine whether they are OOD or not just based on their features when conducting OOD detection** [1] [2]. It is a difference between training and testing data. As shown in Eq. (8), we compute $p_\\theta(\\mathbf e|\\mathbf A,\\mathbf X)$ for testing data, conditional on the graph features $\\mathbf A$ and $\\mathbf X$.\n\n**Q3:** Is it a valid assumption that the connectivity, or the adjacency matrix, is implied by the node features? Under this assumption it seems to follow that the distribution shift is therefore caused by node feature shift. Could GraphDE be extended to when this assumption does not hold? \n\n**R3:** In fact, we can also change the assumption to model $p(\\mathbf X|\\mathbf A)$ using techniques such as Dirichlet energy [3], which will give a foundamentally different model from GraphDE. However, we believe the assumption that the connectivity is implied by the node features is enough expressive and general from the perspective of graph generation, which is widely adopted in the graph learning community [4] [5]. Besides, modeling $p(\\mathbf A|\\mathbf X)$ **does not necessarily mean that the distribution shift is caused by node feature shift**. 
Since the conditional distribution is a function of both $\\mathbf A$ and $\\mathbf X$, it can capture the distribution shift on the adjcency matrices as well. For cases when this assumption does not hold, GraphDE can also take effect since it only requires a dependence of $\\mathbf A$ on $\\mathbf X$, not necessarily a functional relationship. \n\n**Q4:** Figure 3 and Table 1 seem to display the same kind of data? Could they be visualized in the same way, or was this choice made because the bias ratio was fixed for the table? (why was this fixed for that table?) \n\n**R4:** Figure 3 and Table 1 show experiment results on different datasets, while giving out information of different dimensions. Note that DropEdge, Grand, GraphDE-v, and GraphDE-a are all plug-in modules for the GNNs. In Figure 3, we focus on **the test performance degradation w.r.t. different biased ratio**, and all the plug-in modules are applied on the GAT backbone for fair comparison. In Table 1, however, we apply the plug-in modules to all the four GNN backbones. We fix the biased ratio in this case to show that **GraphDE outperforms the other plug-in modules on top of all the GNN backbones**. To summarize, we use Figure 3 to show that GraphDE can consistently outperform the baselines on different biased ratios, and use Table 1 to prove GraphDE takes effects across all the GNN backbones.",
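For readers unfamiliar with how a conditional structure model p(A | X) can capture shifts in both A and X, here is one common instantiation, an inner-product decoder in the spirit of the generative graph frameworks cited above; GraphDE's actual parameterization may differ, so this is only an assumed example.

```python
import torch

def edge_logits(X, W):
    """Embed node features as Z = X W and score each node pair by an inner
    product, so p(A_ij = 1 | X) = sigmoid(z_i^T z_j).  Evaluating this
    Bernoulli likelihood at an observed A makes it a function of both A
    and X, so it can flag abnormality in either."""
    Z = X @ W        # node embeddings, shape (p, d)
    return Z @ Z.T   # pairwise logits, shape (p, p)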
" Thank you for the comments and nice suggestions. We are pleased that you are satisfied with our motivation, methodology, theory, writing, and experiments. Here is our response to your question:\n\n**Q1:** It would be better if the authors can discuss under what condition the proposed model may not achieve desirable performance. \n\n**R1:** It can be found from Table 2 that GraphDE beats all the other baselines on 7 out of 9 benchmarks. Specifically, GraphDE does not achieve the best performance on Collab in terms of AUPR (area under the precision-recall curve) and on DrugOOD in terms of FPR95 (false positive rate when the true positive rate is 95%). The reason for this can be subtle associated with the detector, dataset, and the metric. \n\n1) For Collab, since we divide ID/OOD data according to graph sizes, there may be graphs of similar sizes in the testing ID/OOD datasets. Besides, the graph sizes can also vary in the ID/OOD dataset. The result probability score distribution is shown in the middle of Figure 6, both ID/OOD distribution have two peaks and overlap to some extent. This may decrease the area under the precision-recall curve since there is not a point to separate out ID/OOD data ideally. \n2) For DrugOOD, the ID/OOD data is decided based on molecule scaffolds. This measure is somewhat unclear and molecules from different scaffolds can also have common properties. The result probability score distribution is shown in the right of Figure 6, in which ID and OOD distributions are very close to each other. In this sense, the false positive rate may be high even when we have a high true positive rate (at 95%). ",
" Dear Area Chair and Reviewers,\n\nWe appreciate reviewers' precious time and valuable advice. We are happy that most of reviewers acknowledged our motivation (gt4u, bxvk, iPSD, xAwj, hmWz), writing (gt4u, bxvk, hmWz), novelty (gt4u, bxvk, xAwj, hmWz) and experiments (gt4u, bxvk, xAwj, hmWz). The major concerns lie in our novelty (iPSD) and additional experiments asked by bxvk, iPSD, and xAwj. To clarify some potential misunderstandings of our paper, we first address some shared concerns:\n\n- **Novelty.** VI and GNNs are two fields that have been deeply studied in the literature. However, we emphasize that this paper's main novelty does not lie in these two conventional methodologies. As we have claimed in Section 1, in this paper, we focus on the debiasing and OOD detection problems for GNNs: 1) Most importantly, we model the graph generative process and therefore propose a unified probabilistic framework to define and tackle these two problems simultaneously. 2) Besides, through introducing a two-component structure for the generative models, we induce a novel learning objective and justify that GraphDE can identify and down-weight outliers during training while provide an OOD detector on the test set.\n- **Experiment baselines.** It is worth noting that previous works such as IRM, DRO, DIR, and OOD-GNN focus on the topic of OOD generalization, which is fundamentally different from the debiasing learning we study in the paper (which we have discussed in Appendix B). Specifically, OOD generalization aims at training a model that can generalize to the unknown testing distribution from the limited training data. However, debiased learning wishes to identify the outliers (harmful examples) in the training dataset and mitigate their bad effects during training. So these works are orthogonal to our paper and we do not adopt them as our baselines.\n\nWe provide extraordinary experiment results and detailed answers to all the questions raised by the reviewers in the following individual responses. Besides, we have also revised the paper w.r.t. the suggestions of the reviewers, which are highlighted in blue in the newly submitted version.",
" This paper focuses on the problem of debiasing learning and out-of-distribution detection in graph data. The authors argue that existing works typically address these two tasks independently, but the intrinsic connections between training outliers and OOD test samples are overlooked. To this end, the authors propose a novel model called GraphDE to tackle debiasing learning for training data and OOD detection for test data under a unified probabilistic model. Extensive experiments are conducted on different GNN backbones, and the results validate the superiority of the proposed method over the baselines. Theoretical analyses are also provided to justify the effectiveness of the model. Strengths:\n1. This paper focuses on an interesting and important problem. Outliers in training data and OOD samples are very common in the graph domain. Properly handling them is important to ensure the performance of a GNN model in practical usage.\n\n2. The paper manages to solve the problem of graph debiased learning and OOD detection in a novel perspective. That is, a unified framework is proposed to model the generative process of both the training data and the test data, where the two problems are tackled dependently.\n\n3. The paper is well-written and easy for readers to follow.\n\n4. The experimental results show that the proposed method achieves consistent performance improvements over the baselines, which demonstrates its superiority. Also, a detailed theoretical analysis is provided to justify the rationale of GraphDE.\n\nWeaknesses:\n1. It would be better if the authors can discuss under what condition the proposed model may not achieve desirable performance. For example, why GraphDE cannot achieve the best performance on Collab in terms of AUPR and on DrugOOD in terms of FPR95? Why does not GraphDE outperform OCGIN or other baselines on Collab in terms of AUPR and on DrugOOD in terms of FPR95? More discussions on the undesirable results may be helpful. No potential negative societal impact.",
" In this work the authors propose a method for jointly addressing outliers present in training data as well as learning an OOD detection model. Using a simple binary environment variable they learn a unified probability mixture model and demonstrate performance on both the debiased learning and OOD detection tasks. ### Strengths\n\n**Quality**: The joint perspective on training time outliers and the task of OOD detection at test time is principled and correct, in this reviewer's opinion. The choice of datasets and the analyses are appropriate for the problem setup and the augmentation methods are relevant baselines. The explanatory ablation results at the end help address appropriate questions about the design choices made.\n\n**Originality**: The main novelty comes from the end-to-end manner in which they address weighting examples at training time as well as extracting the OOD model\n\n### Weaknesses\n\n**Clarity**: The section where Propositions 1 and 2 are stated is lacking proper backing for its claims, or rather in its current form, it's unclear whether this section contributes meaningfully. Overall, the concept of predicting the binary environment variable in order to learn a more optimal model conditioned on the environment variable makes sense, but the assumptions on the perfect recognition and perfect generative models are too strong to just state in this section. I don't feel the sections add significant justification for the approach.\n\n**Significance**: Results on more updated large scale data (say derived from OGB) would help demonstrate that the binary environment variable approach doesn't break down at scale - which matters for practical adoption of new methods, especially those focused on real world problems like distribution shift.\n\nWork would generally benefit from a close editing pass from a native english speaker, _but this does not factor into my assessment_.\n 1. In line 42, do you actually agree with the statement that test time is just about feature/covariate shift? I would argue the very premise of a need for a method such as yours, plus basic knowledge of real world data characteristics together suggest that joint distribution shift is also a test time issue (I know it's arguably hard to address directly with any method)\n2. For the generative structural process you are making the assumption that the connectivity, or the adjacency matrix, is implied (caused) by the node features - cites random graph theory/homophily. Is this a valid assumption? Under this assumption it seems to follow that the distribution shift is therefore caused by node feature shift. Could your method be extended to when this assumption does not hold?\n3. Figure 3 and Table 1 seem to display the same kind of data? Could they be visualized the same way, or was this choice made because the bias ratio was fixed for the table? (why was this fixed for that table?) 1. This method introduces the requirement for a choice of prior on the cleanliness of the training set - this will have some effect on the final model performance, unknown in a real deployment setting.\n",
" This paper studies debiased learning and OOD detection for GNNs. Specifically, the authors propose GraphDE, a probabilistic generative framework to model the distribution shifts of graph data. The proposed method contains three main modules: the recognition model to infer the environment variables, the structure estimation model to detect outlier and OOD testing data, and the classification GNN model. Theoretical and empirical justifications of the proposed method are provided. Pros \n[+] The motivations are clearly present. Figure 1 also helps understand the research problem of this paper. \n[+] The proposed method shows improvements in the adopted datasets. \n[+] The sensitivity analysis and ablation studies are provided to gain deeper insights into the proposed method. \n\nCons: \n[-] The novelty of the paper is somewhat limited. The essential idea of the proposed method is building a variational inference module on top of the existing GNNs, which are both heavily studied in the literature. Besides, the variational inference to infer the environment variable seems rather general and hardly connected with graph data (i.e., it may also be applied to other data types). However, the authors claim the challenges of non-Euclidean graph data in the introduction. \n \n[-] The authors do not provide time complexity analysis or the time cost in practice, so the efficiency aspect of the proposed method is unclear. \n \n[-] The experiments are not entirely convincing. \na) Most importantly, the authors do not compare with OOD baselines, including general OOD methods (e.g., IRM, DRO, etc.), which can be directly combined with GNN backbones, and recent methods designed explicitly for graphs (e.g., DIR and OOD-GNN). \nb) The authors only consider GCN and GAT as the backbone but ignore more popular and powerful backbones such as GIN. \nc) Though I acknowledge that some real-world graph benchmarks are adopted, it would make the experiments more convincing if OGB datasets, widely adopted in the literature, are further included. \n \n[-] The writing of the paper could be improved: \na) Prop 4.1 and 4.2 are in plain language and thus may not be rigorous. It would be better if these propositions are provided in mathematical terms. \nb) Some of the claims are not well supported. For example, one of the claimed contributions is that the modules are optimized in a mutually-promoting manner, which is not verified in the experiments. The necessary discussions on the inferred environments during the training of GNNs (as stated in line 12) have also not been present in detail. \nc) The problem formulation in Section 2 seems to be literally identical to the existing work EERM, e.g., lines 69-73 and Section 2.1 in EERM. Such textual overlap without quotation should be avoided. \n \n[-] The authors claim in line 109: “For the first time to our best knowledge, we formally define and deal with the graph debiased learning problem.”, which misses important related works such as [1-2]. \n[1] Shift-Robust GNNs: Overcoming the Limitations of Localized Graph Training Data, NeurIPS 2021 \n[2] Debiased Graph Neural Networks with Agnostic Label Selection Bias, TNNLS 2022 \n\nMinor: \n(1) More related works in Appendix B seem to highly relate to the paper and should be moved to the main paper. \n(2) The font of “idx” in line 222 should be kept consistent with that in Eq. (14). \n See above N.A.",
" This paper aims to address two challenging tasks, i.e. 1) learning debased GNN model from the training data with OOD samples and 2) detecting the OOD samples from testing data, with a unified framework. To this end, a novel method named GraphDE is produced on the basis of variational inference. A probabilistic generative model is introduced to model the distribution of ID and OOD samples, and the classification module and OOD detection module are integrated into the learning framework. Experiments are conducted to verify the performance of GraphDE on both debiased learning and OOD detection tasks.\n Pros:\n- The research problem is interesting.\nBoth debiased graph learning and graph OOD detection are attractive and interesting topics in the research community. The authors discuss the similarities and differences of these two tasks and address both tasks with a unified learning framework, which is promising and challenging.\n- The results are promising.\nIn the experiment, the authors conduct quite extensive experiments, and the proposed method achieves very good results, which prove its effectiveness.\nCons:\n- The construction of graph generative model is defective.\nIn the generative model, the distribution of feature p(X) is not modelled specifically, and the generation of feature X is not involved in the generating process. However, the ID and OOD data obviously have different feature distributions. For example, molecules from different domains tend to be formed by different atoms. In this case, it is not practical to model ID/OOD graph data without considering the difference in feature distribution.\n- The instantiation of OOD structure distribution p_0 is not practical.\nIn the instantiation, the authors use a simple distribution p(a=1)=1/2 to model the OOD data. However, in practice, the real-world OOD data can be universal, which cannot be simply modelled by such an impracticable distribution. Moreover, the ID data is also possible to follow \"p(a=1)=1/2\". In this case, how to distinguish ID/OOD distributions in practice? Therefore, I believe modelling OOD data with more complex or with learnable distribution is more persuasive in this method.\n- There is potential leakage in the evaluation of OOD detection.\nSince the model is trained on data composed by both ID and OOD data, the patterns of OOD data have already been seen by the model. In this way, a concern raises that the knowledge about OOD data would leak during the learning procedure. Such leakage may lead to unfair comparison in evaluation.\n- More baseline methods should be considered for comprehensive comparison.\nAlso graph debiased learning and graph OOD detection are both new-born directions in graph learning, there are already some pioneering works that focus on these topics. However, the authors consider limited baselines for comparison, which reduces the persuasion of experiments. So, more baselines can be consider for debiased learning (e.g., [*1], [*2]) and OOD detection (e.g., [*3]). More related papers can be found in survey [*4].\n[*1] Fan, Shaohua, et al. \"Generalizing Graph Neural Networks on Out-Of-Distribution Graphs.\" arXiv preprint arXiv:2111.10657 (2021).\n[*2] Li, Haoyang, et al. \"Ood-gnn: Out-of-distribution generalized graph neural network.\" arXiv preprint arXiv:2112.03806 (2021).\n[*3] Ma, Rongrong, et al. \"Deep Graph-level Anomaly Detection by Glocal Knowledge Distillation.\" Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining. 
2022.\n[*4] Li, Haoyang, et al. \"Out-of-distribution generalization on graphs: A survey.\" arXiv preprint arXiv:2202.07987 (2022). - Is that practical to define the larger graphs in a dataset (i.e. Collab) or the images with gaussian noise (i.e. MNIST-75sp) as the OOD samples? It is hard to say these samples are generated from a different distribution. \n- In Line 134, the authors say \"p_\\theta(X|A,e)\" is composed of an ID component and an OOD component. Is there a typo? Because the assumption is that A is generated from X. \n The authors have provided the limitations and potential negative impacts of the proposed framework in Appendix G and H, respectively. However, more discussion for potential negative impacts should be given, for example, the debiased learning paradigm may cause the concern about fairness.",
" This paper proposes a framework called GraphDE, which unifies debiased learning and OOD detection for graph data. GraphDE takes a generative view to model the joint distribution of $\\{G, y, e\\}$, with an additional recognition model that infers an OODness indicator variable $e$. The loss is carefully designed according to the framework, with interpretable justification. Various and extensive experiments demonstrate the effectiveness of GraphDE, which is expected to provide reference for future related works. Strength:\n1. The framework unifies debiased learning and OOD detection, i.e. the OOD detector is obtained during the debiased learning.\n2. The loss is well-interpretable.\n3. Experimental results are fairly good.\n\n\nWeakness:\nSee my questions below. I have a few confusions. Please correct me if I am wrong:\n1. How do you calculate the threshold in Eq. (3)? \n2. Line 107-108, the authors mention that prior works are limited to feature-label shift but ignore the variation in the feature space. Can GraphDE capture the variation in the feature space?\n3. Line 132, why are the two distributions parameterized by the same $\\theta$?\n4. Line 134, I think there is a typo, i.e. $p_\\theta(X|A,e)$ should be $p_\\theta(A|X,e)$.\n5. In the last term of Eq. (7), should $p(e)$ be $p(e_i)$?\n6. I am very confused with line 181-182 which mentions that $p(e)$ is a scalar $\\in[0,1]$. If so, why $p(e)$ is not canceled out in Eq. (8)? I think it should be a discrete distribution over ID and OOD input.\n7. Do you have the assumption that training and test data have the same $p(e)$? Because $p(e)$ (I suppose it is obtained in the training set) is used in the inference on test data (in Eq.(8)). If test data have a different $p(e)$, will GraphDE still work?\n8. Eq.(9) is used to predict label on test data. Do you still output a label even when OOD is detected?\n9. What is the role of GraphDE-v? Since you can compute the posterior analytically, it seems to me GraphDE-a works perfectly, and it is also confirmed in Table 1 that GraphDE-v is entirely outperformed by GraphDE-a.\n10. Figure 4 is counter-intuitive to me. Eq.(2) tells me that debiased learning can be reduced to ignoring updates from outliers (because training on outliers will hurt the performance), and proof in Appendix A.2 tells me that optimizing the proposed $L_{cl}$ is equivalent to optimizing Eq.(2). Then I suppose $L_{cl}$ is why GraphDE can boost test accuracy as in Figure 1(a). However, the ablation study shows that removing $L_{cl}$ makes literally no difference to the performance, while removing $L_{reg}$ shows more impact. Do you have any interpretation on this result? The authors addressed the limitations and societal impact in their paper (in the end of appendix)."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
3,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
4,
3
] | [
"Orf7LqzZmH1",
"CtB6b_wgtPm",
"r4xrwpqF5e",
"GHTNPM0vzl",
"ZSSxDGr3FGT",
"2L7lYLEup4h",
"nips_2022_mSiPuHIP7t8",
"_XfRnkd6uGh",
"28OVnYATqxs",
"on24RX4vAb8",
"QnTZHKgx3p6",
"Orf7LqzZmH1",
"KW7z_mVFZQY",
"uc8-nsBTKCI",
"CtB6b_wgtPm",
"7ba4NhJdNki",
"ZSSxDGr3FGT",
"Dg7_l9TMqUf",
"nips_2022_mSiPuHIP7t8",
"nips_2022_mSiPuHIP7t8",
"nips_2022_mSiPuHIP7t8",
"nips_2022_mSiPuHIP7t8",
"nips_2022_mSiPuHIP7t8",
"nips_2022_mSiPuHIP7t8"
] |
nips_2022_AREqvTvv6gG | Frank-Wolfe-based Algorithms for Approximating Tyler's M-estimator | Tyler's M-estimator is a well-known procedure for robust and heavy-tailed covariance estimation. Tyler himself suggested an iterative fixed-point algorithm for computing his estimator; however, it requires super-linear (in the size of the data) runtime per iteration, which may be prohibitive at large scale. In this work we propose, to the best of our knowledge, the first Frank-Wolfe-based algorithms for computing Tyler's estimator. One variant uses standard Frank-Wolfe steps, the second also considers \textit{away-steps} (AFW), and the third is a \textit{geodesic} version of AFW (GAFW). AFW provably requires, up to a log factor, only linear time per iteration, while GAFW runs in linear time (up to a log factor) in a large $n$ (number of data-points) regime. All three variants are shown to provably converge to the optimal solution with sublinear rate, under standard assumptions, despite the fact that the underlying optimization problem is neither convex nor smooth. Under an additional fairly mild assumption, which holds with probability 1 when the (normalized) data-points are i.i.d. samples from a continuous distribution supported on the entire unit sphere, AFW and GAFW are proved to converge with linear rates. Importantly, all three variants are parameter-free and use adaptive step-sizes. | Accept | The scores on this paper were quite spread (and the reviews at times a little imprecise); however, looking more closely at the discussion as well as reading the paper myself, I believe this paper should be accepted. | train | [
"PZZ3PU3Xve0",
"FmriOBVrOrG",
"lsNbTcBGjyJ",
"YskgiReUctZ",
"Ytlha5sVuA",
"qN89lI-IFG3",
"pH6TSn5CW9j",
"vUbt4KarAQN",
"VjHpnB7CqgP0",
"vl31Srr8Bik",
"N8RGNY3hRq",
"gLafsu2NcLon",
"RvXgN7o7jfF",
"FKqKa8pbqP",
"OhEEOjgn0nA",
"Jhlspx2RWll",
"g-x495XQdV7",
"wU-jttT-F00"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for reconsidering your score! We really appreciate it. \n\nRegarding projected gradient: this will have complexity the same as fixed point iterations, since projecting onto the feasible set and inverting the matrix iterates will require O(p^3) time, and computing the gradient will take O(np^2) time - same as fixed points. This is exactly why Frank-Wolfe is so cool for this problem - it’s rank-one updates suits this problem perfectly and lead to an order of the dimension improvement in the runtime of each iteration. \n\nRegarding specific contribution: we are quite confident that additional problems of similar structure will turn out that can benefit from this approach.",
" Dear Reviewer 6Ljb,\n\nDid our response and additional graphs helped with your concerns regarding experiments?\n\nWe would be happy to answer additional questions/issues.",
" Okay, it is quite surprising that no work has developed faster algorithms than FPI for Tyler's M-estimator. After some literature survey, I could find some work in sparse settings, but it seems that there really doesn't exist algorithm that runs faster than $O(p^3)$. The paper is more about finding a Tyler's M-estimator in time less than $O(p^3)$, and having this scope in mind, now the paper makes a lot more sense. I increase my score from 3 to 5. \n\nIt would have been a lot more helpful for non-knowledgeable (on Tyler's M-estimator) readers if the scope was made much clear in the beginning, and if the paper provides a more thorough discussion on existing work for solving Tyler's M-estimator problem. \n\nI think it is a slight overclaim that the paper also contributes to theories of FW (and AFW or GAFW) for non-convex non-smooth problems, because the current analysis is very specific to and applies only to the Tyler's estimator problem. It is not clear how to extend the analysis beyond this specific problem. \n\nI still cannot agree with the technical writing style in this paper, but I agree that this is totally up to authors' freedom. \n\n\nBesides, I wonder whether other GD based algorithms such as Projected (sub)-gradient descent can achieve the same performance. There is only one stationary point which is a global optimum, and so it is kind of expected that any GD based algorithms that can work for non-smooth objectives can converge. ",
" Thank you Reviewer 6ubv for your additional comments. We answer them below.\n\n1. Misunderstanding of main messages: you seem to repeat the same mistake in both points. Our FW-based methods give the first algorithms algorithms for TME that can run in linear time! Of course this could be very significant! \n\n2. You write: '' (fixed-point iteration) does not seem very strong'': Fixed point iterations (FPI) are the method of choice for computing Tyler's estimator and you can see variants of it in basically all papers we gave in the literature review. We are not aware of any competing methods. In particular note FPI seems very strong: it has linear convergence, and it requires only ''simple'' operations like a single matrix inversion per iteration and multiplying a matrix with each data-point on each iteration. Only due to the very unique structure of the rank-one updates of Frank-Wolfe, that we are able to break these ''bottlenecks'' and obtain methods with linear runtime per-iteration, and as we write explicitly, we are not aware of any previous method with running times as ours.\n\n3. We do not think that a sentence like ''I doubt that no previous work has studied this. Positioning of the paper in literature does not seem clear in the current version'' is a professional review style. We have gave literature review. If you know of works that we missed but are important please state so, otherwise such a comment is not nor respectful. Note that we cite many recent works and to the best of our knowledge we are up to speed with relevant methods.\n\n3. You write: ''Also, it seems that Frank-Wolfe algorithm has been studied for non-convex objectives'': We split our answer to 3 parts:\n\n3.a. These methods your mention are for smooth problems. Here the problem is not smooth. Designing an efficient stochastic FW variant is interesting but is beyond the scope of our work and not-trivial since our analysis heavily relies on the fact that the algorithm is a descent method - it reduces the function value on each iteration hence remaining in the initial level set, which is difficult to argue for stochastic methods. This is a good question for future research and such a result will most definitely build on our arguments.\n\n3.b. One of our main interests in this paper is obtaining faster algorithms for Tyler's estimator. This is achieved not by using Frank-Wolfe as a ''black box'', but noticing that the specific structure of rank-one updates used in Frank-Wolfe for the spectrahedron could lead to significant acceleration of runtime of other operations: such as matrix inversion (via Sherman Morrison) and stylized highly efficient eigenvector computation (which for instance avoid computing the gradient explicitly which is expensive). Theses allow for iteration cost of O(np) which would otherwise cost (np^2) like in FPI - please refer to Section 2.1.\nSo, one of the key observations here is that there is a ''very good match'' between the updates of FW and the algebraic structure of the specific problem which makes this such an interesting combination and as a consequence allows for faster runtime. \n\n3.c. We develop unique techniques in the landscape of Frank-Wolfe methods including: 1. an adaptive step-size procedure that avoids the need to tune parameters which are not really known (like the minimial eigenvalue of a matrix on the initial level-set) and 2. a *geodesic FW variant* which we think is a very interesting contribution.\n\n4. 
You write: ''On the technical side...it is not clear what are the main challenge and key contributions...and some seemingly textbook derivations in Page 7, I do not see what are key technical contributions'': As we wrote. We solve a nonsmooth and nonconvex problem via FW. Of course this is a major challenge, since FW is not adapted to handling nonsmooth objectives! The derivations on Page 7 are by no means ''textbook derivations'', but they are the heart of our technical analysis, and even while appearing simple, it is this unique analysis that considers the specific algebraic structure of the problem at hand, that allows us to obtain our novel convergence results, which as we emphasize again, are non-standard for first-order methods as a whole, and FW methods in particular, since problem is non-convex and nonsmooth.\n\n5. You write ''Section 4...which step breaks down if we do not have Assumption 2...'':\n\n5a. We are limited in space and we do not think it is important to discuss what breaks in analysis if the assumption does not hold. For this please refer to the proof.\n\n5b. We cannot give a compact representation of the linear conv. constant or how bad it is. This is something that is quite common in linear rates for first-order methods and we kindly refer you the references we provided to see that it is a standard thing. It is common that such constants may have complex dependence on the data.\n\nWe kindly ask you again to reconsider your score. We are very happy to answer additional concerns.",
" Dear Reviewer 6Bnc,\n\nIn case you have missed it, we have updated our submission and add an appendix with additional numerical results as promised (these maybe integrated into the main text in the final version). Per your original review, if your main concerns have been answered, will you consider raising you score as you suggested?\n\nWe will be very happy to answer additional questions.",
" Thanks to authors for the response. The writing I suggested is just one option, and I did not mean to criticize the authors' taste. However, I still feel that the current presentation is suboptimal for several reasons.\n\nI think the paper conveys two messages: \n\n(1) FW can be a solution to Tyler's M-estimator (but why is it interesting?)\n\n(2) FW (for Tyler's M-estimator) can be faster with some modified schemes, even though it is not an order-wise improvement.\n\nFor (1), the benchmark (fixed-point iteration) does not seem very strong. Fixed-point iteration is such a simple algorithm which is appealing in practice, but if we only care to improve the order-wise computational complexity, I doubt that no previous work has studied this. Positioning of the paper in literature does not seem clear in the current version. It could be due to my lack of background on related work, but I feel that this should have been sufficiently addressed. \n\nAlso, it seems that Frank-Wolfe algorithm has been studied for non-convex objectives (quick google search shows me a few other references [1,2]). Given that Q* is the only stationary point of the objective function, which is nice, but then it is not clear why we should study FW only for Tyler's M-estimator separately. In this respect, I think that the current version fails in positioning of the work as well. \n\n[1] Stochastic Frank-Wolfe Methods for Nonconvex Optimization\n\n[2] Decentralized Frank-Wolfe Algorithm for Convex and Non-convex Problems\n\nOn the technical side, related to the positioning, it is not clear what are the main challenge and key contributions. In almost all optimization papers, conclusions are like \"it converges\". I would like to see more: why some textbook derivations don't work, and thus what new ideas are required. But in the current version, other than explaining Sherman-Morrison formula, and some seemingly textbook derivations in Page 7, I do not see what are key technical contributions. The improvement of $O(p)$ merely comes from the matrix inversion update using Sherman-Morrison formula, which does not sound surprising. Yes, this is my personal opinion and evaluation, but I also think that my perception could have been different if presented better (though I don't know what would have been optimal). \n\n\nFor (2), I also would like to see why and how the modified schemes bring the improvement. But other than the mechanism explanation, I do not see the intuition behind the ideas or what are additional challenges if applying modified schemes. I thought it might have been clearer if faster methods are presented after the ideas of standard FW part become clear (again, I don't know what is optimal). \n\n\nSection 4 is also not super clear about what brings the linear rate improvement, which step breaks down if we do not have Assumption 2, what is the order of some important constants, etc. Frankly, it seems a bit suspicious because Assumption 2 is such a general statement which holds for almost all distributions with well-conditioned convariances, and as pointed out in other reviews, constants seem quite bad. \n",
" Thank you for your response and for reconsidering your score. We would like to further comment on the issues raised in your last response.\n\nExperiments: \n1. You write `` I insist that the experiments should be presented with more details and comparisons''. Could you please explain what you exactly mean by more details and comparisons? We believe we have given all the relevant details and we are not sure what more comparisons are you referring to.\n\n2. You also write ''the computational experiments should give readers more feeling about it [the constant of linear convergence]''. We would like to emphasize again that while the linear rate is definitely nice, even our sublinear rates, which do not depend on such constant, are highly novel in terms of running times implications for computing the TME (since they apply linear time iterations, while enjoying first-order like convergence rates), and thus we do not feel they should be discarded just because there are also linear rate results. \nRegarding the constant in linear rate: again, we have used the standard examples from the excellent survey on Tyler's estimator [28]. Our results show that one of our method could indeed be notably faster than fixed-point iterations. We sincerely do not feel coming up with more artificial examples to demonstrate the linear rate will be highly beneficial\n\nPresentation:\nThe key contributions of this paper are: i. the first methods for provable approximation of Tyler's estimator that has linear runtime per-iteration (as opposed to super-linear of the fixed iterations method), and in particular overall linear runtime for well conditioned instances and when the target accuracy epsilon is not very low, and ii. novel Frank-Wolfe methods (for example, a geodesic variant of Frank-Wolfe, which we believe is a novel contribution by itself, or the use of our adaptive and simple step-sizes) for solving to global optimality and nonsmooth and nonconvex problem of notable interest, which is highly likely to lead to more advances on Frank-Wolfe variants for stylized nonconvex problems with novel running times (recall Frank-Wolfe usually solve to global optimality only smooth and convex problems).\nBoth of this we believe are clearly highlighted throughout the introduction.\nCould you please expand on how do you think the current presentation is lacking in explaining these contributions?\n",
" Thank the authors for the reply. \n\nThe additional experimental results clearly show the linear convergence of the FW methods the authors proposed. Although the computational experiments are still too simple, these new results at least make the authors’ claim more convincing. Because of this, I will raise the score a little (3->4) but I insist that the experiments should be presented with more details and comparisons, even if the authors believe it is a theoretical paper. \n\nI also insist that the presentation of this paper needs more improvements, and emphasizes more key contributions. Besides, the constant of the linear convergence is also essential. Even if revealing the theoretical bounds is difficult, the computational experiments should give readers more feeling about it. Linear convergence is common in many methods but usually the bad constant prevents it from being widely used, such as subgradient methods for LP.\n",
" Dear Reviewers and AC,\n\nQuite embarrassingly, we were not aware until just now that there is an option to revise our submission during the rebuttal period and so we did not plan for it (or allocate time for it).\n\nNevertheless, we have uploaded a revised supplementary material. In the first appendix A.1 you shall find additional numerical results regarding the original two experimental setups that we had in the original submission. These additional graphs show:\n1. the approximation error in spectral norm w.r.t. Tyler's estimator (in log scale), per the comment of Reviewer 6Ljb.\n2. the approximation error w.r.t function value in log scale.\n\nWe think these new graphs better demonstrate that the variant GAFW can indeed be considerably faster, and that it indeed seems to converge with a linear rate (since the plots are in log scale).\n\nThese new plots, as well as additional fixes to the more minor comments will be integrated into our final version.\n\nIf there are any more questions we can help with, we would love to do so.",
" Thanks for the reply.\n\n1. Indeed, it looks like the AFW that you implemented looks fundamentally different than the AFW presented in Lacoste-Julien 2015. Given that you give convergence rates for this new method (and don't borrow from his), that seems kosher, but maybe consider a name change, or specify this is a different method more clearly. \n\n2. Fair enough. This point feels minor, since it doesn't take up much room in the main text.\n\n4. I agree, and I commented in a positive way: I am more curious on the intuition behind this choice of step size, and how it fits your specific problem / gives specific rates. This intuition would help the broader Frank-Wolfe community in other instances, too. \n\n3 and 5: looking forward to seeing the updates.",
" Dear reviewer,\n\nThank you for finding our work interesting and novel, we appreciate it.\n\nWe now answer the various issues:\n1. FW/AFW: AFW is expected to converge faster since it uses more sophisticated updates. Note that in our setting, and very different from AFW for polytopes (which is usually the setting studied for AFW), we can efficiently implement the ''away steps'' implicitly without storing all previous atoms, that is one of the beauties of this variant in our spectrahedron setting. Technically, since under Assumption 1, the optimum lies in the interior it follows that computing the away-step becomes just an eigenvector problem which is efficient to solve.\n\n2. Use of Sherman-Morisson: this is a very common primitive in numerical algorithms to the best of our knowledge and we are not aware of stability issues. Admittedly though, our main interest in this work is mostly on theoretical analysis and establishing the convergence results in principle.\n\n3. Lemma 3: we will revise the proof and attempt to make it more accessible. We believe this is indeed important since it is the main technical step in the analysis, and of interest to those who wish to understand the very basic idea of our analysis.\n\n4. step-size: The 2/(2+k) step-size is common for convex objectives. Here our analysis of Frank-Wolfe is for nonconvex objective (convergence to stationary points) which usually uses different step-sizes, see for instance [18]. We shall clarify this in the final version.\n\n5. Numerical experiments: Indeed, our main focus in on novel theoretical approach and analysis, and extensive experiments are beyond our interest here. The reason we used these settings is that in [28] it was shown that for these settings Tyler's estimator is indeed superior to the sample covariance and thus interesting. We did not want to come up with artificial settings that will give us nice graphs but which are pointless since they do not capture really interesting cases. Nevertheless, we feel that GAFW indeed shows promising performance: it achieves non-trivial approximation even before FPI completes a single iteration! In particular, on the left panel, by the time FPI completes the first iteration, GAFW has already obtained the minimal value which FPI achieves only in its final iteration. Moreover, in the final version we shall add a graph in which the Y-axis is in log-scale which makes it clearer that the GAFW variant is indeed notably faster than FPI throughout the run.",
" Dear reviewer,\n\nFirst we address the weakness you raise:\n1: We give all relevant references in the paper and we basically give most of the background needed for Tyler's estimator in the paper. The Frank-Wolfe method is well known and well studied and we cannot review it in the paper but point to the standard references. Note that Section 2 clearly details all variants and how they relate to classical Frank-Wolfe variants. Note also that two reviewers found the presentation to be good.\n\n2. As we write, the dependence of the PL parameter and the data does not admit a simple form and that is why we do not explicitly state it. This is very common in convex optimization and often the PL parameter has complex dependence on the data but one can establish that it exists (bounded away from zero). Studying it numerically for instance is beyond our interest which is mostly focused on rigorous theoretical analysis, and beyond the scope of this paper. This does not make Table 1 misleading since the parameter clearly appear in the bounds and should be understood that in certain cases it can be quite bad. Note, that while the linear convergence rate may perhaps be bad, our sublinear rates are still interesting and allow for fast approximation of Tyler's estimator without dependence on this parameter.\n\n3. The complexity of computing the smallest eigenvalue of FW appears in the proof of Theorem 2 in the appendix. Regarding experiments: as we wrote above, our main interest in this work is on novel theoretical approach and analysis for approximating Tyler's estimator and expanding the theory of Frank-Wolfe methods. Comprehensive numerical tests, are beyond our interest. In the experiments we did we tested on the settings considered in [28], since for these [28] showed that the Tyler's estimator is indeed interesting to you use which makes these settings of interest. Indeed the FW and AFW do not perform too well in these cases, but it is what it is. We do see that GAFW has very good performance since it achieves good approximation error even before the baseline FPI completes a single iteration. \n\nAnswers to questions:\n1+2: see comments above. \n\n3. We will include the code in final submission.\n\n4+5. This is a theoretical paper and our main interest in in obtaining novel and state-of-the-art complexity bounds, as well, as extending our understanding of optimization methods. We touch upon numerical experiments but this is not our main interest. We used these two settings, because in [28] it was shown that in these settings Tyler's estimator is indeed significantly superior to the sample covariance, and so these settings are meaningful and natural to test. Indeed in principle the constant in the exponent of the linear rate, which depends on the data in a very complicated way, can be quite bad, and this is probably the reason that FW and AFW variants do not seem to converge linearly in the experiments. \n\n6. Unfortunately, we do not understand this comment.\n\nSmaller issues:\n1. the definition of N(L) is not clear: it is the number of *data points* that lie inside the subspace L, note the data is finite.\n 2. FPI is defined in Definition 1\n3. 
Omega is the standard complexity lower bound notation\n\nFinally, since you clearly write that ''...in general, I think this paper has a significant contribution in decreasing the per iteration complexity, compared with the fixed-point method, and the idea of proving the convergence of FW for this nonsmooth nonconvex problem is also interesting and novel'' we cannot understand how a score of REJECT could be acceptable. It is fine if you are perhaps not very familiar with the related research and literature and if you do not like the paper much, but given that you understand that this paper contains significant and novel ideas w.r.t. both computing Tyler's estimator and the theory of Frank-Wolfe, and that you did not find a technical flaw, we do not feel this is very professional. We sincerely ask you to reconsider your score, or at least lower your confidence score.",
" Dear reviewer,\n\nThank you very much for you high appreciation of our theoretical results, we truly appreciate it.\n\nIssues:\n1. We shall add a clarification to Table 1.\n2. Measure of convergence: these are the measures of performance that arise naturally from the analysis and this is why we study them. They could be translated to other measures but it makes sense to us to report convergence on these because this is what come out of the analysis. For instance, our measure of convergence for the AFW and GAFW methods in Theorem 3 give approximation in spectral norm w.r.t. the exact Tyler estimator. This seems highly natural to us, and is also very similar to the measure of convergence in the previous excellent work [6] which studied rigouros convergence guarantees for the fixed-point iterations.\n\nRegarding experiments, we do not quite share you feelings and let us explain:\n1. We use two settings not one (corrupted gaussian and heavy tailed distribution) since these were shown in [28] to be cases in which Tyler's estimator indeed makes sense and is considerably better that the sample covariance. It is important for us not only to display nice graphs but such that concern settings of true potential interest.\n2. Measure of convergence: there are several possible measures and all are equivalent in the sense that Tyler's estimator is their minimizer. We therefor do not think that the objective function, of which Tyler's estimator is the only minimizer, is a poor choice of measure. In the final version we will add an additional measure such as the distance from the exact Tyler's estimator (in spectral norm for instance). The graphs look very similar.\n3. Measuring runtime: if we wanted to measure actual runtime that would have required us to implement on our own a specialized state-of-the-art eigenvector method such as Lanczos for computing an eigenvector corresponding to the largest (in magnitude) eigenvalue, that works with our particular efficient updates (see Section 2.1), and to run quite high-dimenisional and time-costly simulations for the differences between the methods to be clear. This is something that is beyond the scope of our current research that is mostly dedicated to novel theoretical results. \nWe feel that the estimated runtime issue you raise is much milder than you suggest: we simply make the assumption that a dense matrix - dense vector product takes O(n^2) time, where the matrix is nxn, and a vector-vector product takes O(n) time. We feel this is quite reasonable. \n4. ''The claim in Line 300 that GAFW is significantly faster than FPI is contradicting to the graphs'': there seems to be some misunderstanding here. The blue dots in the graph mark the iterations of FPI. So, we can see that GAFW makes significant progress and achieves reasonable approximation errors before FPI has even made a single iteration! This is what we meant, and we will clearly clarify it in the final version. Note also that in the left graph, by the time that FPI has completed a single iteration, GAFW has approximately reached the value FPI would achieve only on its last iteration. In the final version we shall also add a graph which plots the Y-axis in log scale which makes it much easier to see that GAFW is indeed notably faster.",
" Dear reviewer,\n\nFirst we answer your questions:\n1. Indeed the beauty of it is that Assumption 2 is so mild that generally we shall have a linear rate almost generically. However, the constant in the exponent of this rate might in general be quite bad and depends on the data in quite a complicated way (e.g., the AFW in our experiments does not seem to exhibit a linear rate), and this is why the sublinear rates are interesting - they still allow for fast approximation (when the desired error epsilon is not to small) while not being dependent to this constant.\n\n2. The PL condition and linear convergence is discussed on page 8 and defined in line 262. This inequality is by now very well known in the continuous optimization literature (see the relevant references [22, 17, 9]) and the use of it to obtain linear rates, and so we do not go into much detail. For this we have the proof with all supporting Lemmas \n\nRegarding writing style: This is a theoretical paper and our main contribution is a novel use of the classical Frank-Wolfe method to solve an important NONCONVEX NONSMOOTH optimization problem for covariance estimation. With this respect, Lemma 7 for instance is not a bunch of ''non-informative algebras'' but the main technical argument that allows us to establish convergence and as a theory paper this is important in our opinion.\n\nAs to your additional point regarding use of Frank-Wolfe: FW here is the platform that allows us to break the super-linear runtime per iteration of the FPI method and to obtain linear runtime per iteration. This is highly interesting also in the border context of FW, since we solve to global optimality a nonconvex nonsmooth problem, while FW can guarantee global convergence usually only for convex and smooth problems. We think that if you can think of another method to obtain such a result, you should definitely write a paper about it.\n\nWe understand that you did not like the way the paper is written and that you are perhaps not familiar with this line of research, but note that two reviewers found the presentation good, and we, as experienced scientific authors, are entitled for our personal taste which sould not be fully aligned with the preferences of every possible reviewer, and to us the presentation seems very natural.\n\nFinally, our paper gives highly novel results both w.r.t. the theory of Frank-Wolfe and computing Tyler's estimator, and as such, we believe it would be interesting to many in the NeurIPS community. We have worked hard for months on this project, and we truly believe our theoretical results are clearly above bar. In all honesty, we do not think that the issues you raised, and in particular your individual preferences of writing style, should be an acceptable cause for recommending rejection. You did not find any technical flaw or indicated that our results are not novel or uninteresting in some way. We ask you to seriously reconsider your score, or at least your confidence level.",
" The submission considers a Frank-Wolfe based algorithm for approximating Tyler's M-Estimator. Overall, main benefits of Frank-Wolfe based algorithms for this problem is the lower computational complexity per iteration. Compared to the standard fixed-point iteration, the computation complexity is improved from $O(np^2)$ to $O(np)$. Three variants of Frank-Wolfe algorithms are proposed, and sublinear convergence guarantees are provided for them. With a slightly milder assumption, the paper shows linear convergence. Overall, presentation of the paper is not very good. I do not see major benefits of presenting three variants in parallel. The authors could first have elaborated on the basic FW algorithm and develop the most basic ideas first, e.g., how to establish sublinear convergence and what are the main technical challenges. Then, the authors could have proposed two more variants (AFW, GAFW) that enjoys slightly better convergence, and faster convergence with slightly milder conditions. In its current form, it is very hard to understand what are the major technical challenges. Maybe more importantly, I wonder why the Frank-Wolfe approach is interesting especially when the scope of the paper is very specific to one objective function. There could be other approaches that lead to similar computational benefits other than FW. - It is hard to understand why Assumption 2 brings a major improvement (from sublinear to linear), because it sounds very natural for all i.i.d. data. It would be better to come up with more directly related assumption for the improvement. \n\n- The linear convergence results could have been more stressed, instead of filling one full page (page 7) with a bunch of non-informative algebras. Please discuss the relation to the PL condition in a more detail, starting from the definition of it, in more detail. It could have made the paper more interesting. I do not see any negative societal impact. ",
" The paper proposes a Frank-Wolfe algorithm for\napproximating a well-known covariance matrix estimator.\nThe estimator is known to minimize a particular non-convex,\nlocally gradient dominated function (as shown in Theorem 5)\nover the set of positive definite matrices with a fixed trace,\ni.e., a convex set which is a neither open nor closed.\nThese are atypical settings for a Frank-Wolfe algorithm,\nnevertheless the paper establishes strong convergence results\nfor the original Frank-Wolfe algorithm and two variants with away\nsteps, e.g., linear convergence in function value\nfor the away-step variants\nfor a non-convex function over a non-polyhedral domain.\nThe convergence is for a quantity similar to the dual gap\nfor the original algorithm.\n\nThe computational results are poor: only one setting, wrong data displayed (accuracy in a help function of a method instead of accuracy for the original problem independent of method of solution, measuring cost via theoretical bounds instead of actual performance), making a claim even contradicting the graph. The theoretical results are excellent for the theory of Frank-Wolfe\nalgorithms with correct proofs, even providing detailed discussion of theoretical upper bounds for costs of each operation. A slight weakness is the presentation of results in Table 1: the last two columns display different measures of rates without this being clearly indicated.\n\nSince the problem is specific to appplication of covariance estimators,\nit would have been good to address performance for the original\ncovariance estimating problem, which it is meant to solve.\nFor this an accuracy measure for covariance matrix estimators should\nbe studied independent of the method of solution.\n\nIn the computational experiments,\nsuch a measure ought to be displayed on the vertical axis,\ninstead of the function value of questionable practical relevance.\n\nOn the horizontal axis, the authors stated aim was a measure\nindependent of implementation issues, a worthy goal, but unfortunately\nthe chosen \"estimated runtime\" is the best worst-case known bound\non operation cost, which is heavily biased against\nalgorithms with high worst-case computational cost.\nFor example, such a measure would show the ellipsoid methpd\nas far faster for linear programming than the simplex method,\ndespite the latter being faster in practice.\nDue this error it is not possible to deduce acutal performance from the graphs,\ne.g., the baseline FPI might have outperformed all the other algorithms.\n\nThe claim in Line 300 that GAFW is significantly faster than FPI\nis contradicting to the graphs, even if we ignore the errors\ndiscussed above. FPI has the same performance on the right and nnly\nslightly worse on the left, so the difference is insignificant.\n\nOverall the computational results are worthless, and likely to cause only misinterpretations. None. Not applicable.",
" This paper discusses a new way of computing the Tyler covariance estimator, which is the solution $Q$ to the fixed point relation \n$\\frac{p}{n} \\sum_{i=1}^n \\frac{x_ix_i^T}{x_i^TQ^{\\*-1}x_i} = Q^{*}$.\nHere, $p$ is the feature dimension and $n$ is the number of samples.\nThe usual method is to evaluate this iteratively, with $Q_k^{-1}$ on the left hand side and $Q_{k+1}$ on the right hand side. There are two main computational issues here: inverting $Q$, which is $O(p^3)$ and evaluating the matrix/vector multiplications $O(np^2)$ at each iteration.\n\nThis paper tries instead to find this fixed point using a Frank-Wolfe method, where at each iteration, they first compute a $1-\\beta$-approximate LMO (a low rank matrix), and then merge it to $Q$ using a carefully designed merge parameter $\\mu_k$. The idea is that FW is solving the (nonconvex) problem of minimizing $f(Q) = \\frac{p}{n}\\sum_{i=1}^n \\log(x_i^TQ^{-1}x_i) + \\log\\det(Q)$ over the constraint $Q\\succ 0, tr(Q) = p$. By using lanczos methods, the approximate LMOs can be found of order $O(p^2+np)$ per iteration, over $O(1/k^2)$ iterations to get to $\\epsilon$ error. - The approach is very clever, and though is heavily FW based, is not a trivial extension at all. Especially, the choice of $\\mu_k$ seems to have this method deviate from the usual rates ($O(1/k)$ not $O(1/k^2)$). \n\n - After taking a \"careful skim\" of the paper, I do not see any red flags in terms of proofs or weirdly magical steps. However, there are some things I think should be clarified (see below).\n\n - First, it is confusing to me that the per iteration rate for FW and AFW are so different. I believe AFW has actually two steps, one for the min LMO and the other for the max LMO. So it puzzles me that it doesn't have about the same per-iteration rate (if not worse) than FW. Besides, AFW also has the memory overhead of holding onto all past atoms--how is this avoided in this implementation?\n\n - The idea of keeping both $Q$ and $Q^{-1}$ through the Sherman Morrison Woodbury formula is very interesting, but I wonder if numerical error will accumulate because of this. Can the authors comment?\n\n - In general, if there is to be proofs in the main paper (and not the appendix) I think it is better if each step is discussed more intuitively, with the expectation that the reader will basically follow all the main points without a ton of effort. Maybe the proof to Lemma 3 can be amended to be more clear.\n\n - I would also like to see more discussion on the construction of the $\\mu_k$ sequence, since that seems to be a significant novelty (usually, we pick $\\mu_k = 2/(2+k)$ and obtain an $O(1/k)$ iterations rate to $\\epsilon$ error, and we use a very different proof technique.)\n\n - the numerical results are not strong. Though the FPI has a large per-iteration rate, I'm not sure Fig. 1 supports this new approach well. While I do think there is enough theoretical novelty in this paper that we don't need SOTA numerical results, I do think a better motivating example would make the paper much stronger.\n\nIf all of these points are addressed clearly by the authors, I may raise my score. See previous box Not applicable",
" The paper is aimed at approximating Tyler's M-estimator by proposing a series of new methods, which, compared with the previous fixed-point approach, are cheaper in the complexity per iteration. They propose three kinds of Frank-Wolfe methods, following the framework of standard FW, away-step FW, and geodesic FW, and prove their sublinear convergence rates. Furthermore, when a mild additional assumption also holds, they can prove their linear convergence. In my view, the most significant improvement over the previous fix-point method is that they avoid the expensive matrix inverse and propose new methods which can be applied to large-scale problems. Finally, the improvements are demonstrated by some small experiments. Strength:\nI am not an expert in Tyler's M-estimator, so more literature review about the other algorithms in dealing with the problem can help me a lot. But in general, I think this paper has a significant contribution in decreasing the per iteration complexity, compared with the fixed-point method, and the idea of proving the convergence of FW for this nonsmooth nonconvex problem is also interesting and novel. \n\nWeakness:\n1. In general, the structure and logic of this paper are clear but there are still some issues of writing, which make it hard for a person without relevant knowledge to understand. \n2. The dependence of the PL parameter on the data is not studied enough. I doubt whether it can be extremely bad in the worst case. If so, the comparison in Table 1 can be very misleading and the improvement over the fixed-point method will be not enough for publication.\n3. The experiment results don't show the priority of the three FW methods enough. I think a figure of showing the convergence results (linear convergence and sublinear convergence of FW methods) should also be necessary. And I am also interested in the complexity of getting the eigenvector of the smallest eigenvalue for standard FW methods. \n\n 1. What is the relationship of the PL parameter and the data? It shows in Table 1 without a thorough explanation. Will it be super small in the worst case? I suggest the author show that it is not bad in a certain proper way.\n2. I am familiar with the Frank-Wolfe method but I still suggest authors add more literature review about the Frank-Wolfe method, and explain the idea and general framework of the Frank-Wolfe method separately (not in Algorithm 1). I believe the current layout, especially the way this paper introduces Eq. (5-7), might be very confusing for readers.\n3. The release of the codes can help make the paper more convincing.\n4. Can FW methods really exhibit linear convergence in real experiments? It is not obvious in the current experiment.\n5. Does FW methods still have advantages over the fixed-point methods in real-world datasets instead of artificial data? I observe in Table 1 that the complexity is highly related to the condition number of Q, so the dataset might play an important role too.\n6. There is inequality between the nuclear norm and the Frobenius norm. I feel changing the norm in Theorem 3 might make it clearer.\n\nSmaller issues:\nLine 24: the definition of N(L) is not clear. I think the number of points in a subspace should be infinite? Does that mean the number of points in {x_i}?\nLine 52: FPI is not defined\nLine 69: Omega is not defined\nLine 327: Typo in reference I think the authors do well in this aspect. "
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
4,
3
] | [
"lsNbTcBGjyJ",
"RvXgN7o7jfF",
"YskgiReUctZ",
"qN89lI-IFG3",
"vl31Srr8Bik",
"FKqKa8pbqP",
"vUbt4KarAQN",
"gLafsu2NcLon",
"nips_2022_AREqvTvv6gG",
"N8RGNY3hRq",
"g-x495XQdV7",
"wU-jttT-F00",
"Jhlspx2RWll",
"OhEEOjgn0nA",
"nips_2022_AREqvTvv6gG",
"nips_2022_AREqvTvv6gG",
"nips_2022_AREqvTvv6gG",
"nips_2022_AREqvTvv6gG"
] |
nips_2022_9Qjn_3gWLDc | Object-Category Aware Reinforcement Learning | Object-oriented reinforcement learning (OORL) is a promising way to improve the sample efficiency and generalization ability over standard RL. Recent works that try to solve OORL tasks without additional feature engineering mainly focus on learning the object representations and then solving tasks via reasoning based on these object representations. However, none of these works tries to explicitly model the inherent similarity between different object instances of the same category. Objects of the same category should share similar functionalities; therefore, the category is the most critical property of an object. Following this insight, we propose a novel framework named Object-Category Aware Reinforcement Learning (OCARL), which utilizes the category information of objects to facilitate both perception and reasoning. OCARL consists of three parts: (1) Category-Aware Unsupervised Object Discovery (UOD), which discovers the objects as well as their corresponding categories; (2) Object-Category Aware Perception, which encodes the category information and is also robust to the incompleteness of (1) at the same time; (3) Object-Centric Modular Reasoning, which adopts multiple independent and object-category-specific networks when reasoning based on objects. Our experiments show that OCARL can improve both the sample efficiency and generalization in the OORL domain. | Accept | This paper received three positive reviews and one borderline reject. In the rebuttal, the negative reviewer did not provide a response, but the authors gave detailed responses to the issues raised, and the other reviewers did not raise further concerns. Thus, taking the comments of the reviewers into account, the AC decides to accept this paper. | test | [
"X-CtcKUxJuC",
"Vi-nEQpfrjW",
"JxfzaVuMrEFK",
"QlzrN4gAUAl",
"OIZPjHHaqEO",
"_q2kpbnY5wt",
"-soaAakCA1",
"bM0B-CGoEfR",
"ETYSwlqH4kn",
"ReODMDfYiU",
"T3J_RgpQSjse",
"hC15WMEZm8P",
"XtBfK4oQ3o2",
"LxMBHufC0-j",
"LHTZMomMW0K",
"V3wlNUiUyq4"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your advice. We have uploaded a new revision that includes this explanation.",
" Thanks the author for the rebuttal, my concerns are resolved",
" > Yes, in the current paper, we are more interested in the generalization to unseen object combinations. Generalization to novel object instances does make sense in many applications and actually is exactly the topic of our next research. However, we think such generalization needs more effort to achieve. Generally speaking, such generalization needs the agent to interact with the novel object instances first and then infer their underlying categories according to these interactions. Such a problem is likely to be in the meta-RL domain, and we might need to train a separate inference model to predict objects' categories given their corresponding interactions.\n\nOkay, that makes sense to me. \n\n> (1) and (2) are both true and compatible. The main difference between RRL and OCARL is that OCARL harnesses the object category information discovered by UOD, which is achieved by Eq.(3) and Eq.(5). When p=1, the UOD cannot discover any objects, therefore all objects are labelled into \"background\". In such a scenario, the OCAP degenerates into a plain convolution encoder because the UOD model cannot provide any object information (i.e., Eq.(3) has not effect). The object-category-specific networks (Eq.(5)) in OCMR also degenerate into a single universal network because all object features are passed through the same network . Therefore, OCARL with p=1 is exactly the same as RRL.\n\nI see what you're saying now, but this is still unclear in the paper. It would be best to add this explanation to the paper (especially including \"The object-category-specific networks (Eq.(5)) in OCMR also degenerate into a single universal network because all object features are passed through the same network\"), or rephrase this section more clearly. ",
" \nWe thank all the reviewers very much for their insightful comments and constructive suggestions to strengthen our work. After considering their suggestions, we have uploaded a revision of our paper. Here is the main change list:\n- We added a sentence about how to choose a proper $C$ for unsupervised clustering (i.e., KMeans) in practice, which is one of the main concerns of Reviewer 1n95.\n- We re-wrote the sentences in Section3.3 to make them clearer.\n- We added a paragraph to acknowledge the limitation suggested by Reviewer 2Frs.\n- We added a section in the Appendix to discuss SPACE in more detail as suggested by Reviewer ZsUk.\n- Other minor changes.",
" Thank you very much for your comments. Please feel free to communicate with us if you have further questions or suggestions.\n\n>Q1: Why do you need to take the maximum value of Z^out and Z in Eq. 6? \n\n$\\max$ is actually an aggregation operator upon $Z^{out} + Z$, which is widely used in many applications. For example, in GNN, an aggregation operator (e.g., max, sum, mean,...) is often utilized to conclude the information of the whole graph.\n\n>Q2: In the OCAP module, it is surprising that different objects can be segmented unsupervised from the environment image only by a simple convolutional neural network. Can you explain why this works?\n\nThis is because this CNN is supervised by object category labels that come from the UOD model. Actually, the $p(z^{fg}|x)$ in SPACE also contains a CNN that infers object information from the raw images. In our opinion, CNN is naturally suitable for segmenting the image because the receptive field of each channel in CNN is different and thus can model different areas of the original image.\n\n>Q3: The proposed method does not consider the experimental environment with occlusion between objects, which is unreasonable.\n\nIn most RL tasks, there do not exist occlusion between objects. Occlusion between objects is actually a classic problem in the unsupervised object discovery domain. In general, it requires a recurrent architecture to infer the occlusion, making the whole process much more complicated. Therefore, we would like to leave this for future work. ",
" >Q5: Generalizability requires that the model be trained and applied to new data or new environments. However, the UOD is trained using the same environment as the test.\n\nThanks for your advice. We have conducted an experiment in which the UOD model is trained on Hunter-Z1C0/Z0C1 but then used to capture objects on Hunter-Z4C4; here is the result: https://anonymous.4open.science/r/NIPS2022-Rebuttal-FAD3/z1c0.png\nHere, the third image shows that the UOD model is able to correctly discover all objects and their corresponding categories (marked by different colors). The above results show that the UOD model itself is able to generalize well to novel object combinations, and therefore the results in Section4.2.2(Generalization) should not change when we use data from source tasks to train UOD.\n\n>Q6: There may be a formatting error at the top of page 5, where the first two lines have no line number and the formula is not coded, resulting in an error in the formula reference in line 259.\n\nWe have corrected this issue in our latest revision.\n\n>Q7: Are there experiments that show it is possible to continue training the UOD in the RL phase?? It would make the results stronger and less reliant on a good random exploration strategy.\n\nTraining the UOD online is actually a continual learning problem. In our early experiments, we found that these novel objects are always recognized as background (instead of the foreground) by the UOD model. We think more efforts in the UOD domain are needed to support training UOD online. \n\n>Q8: When the number of preset categories is greater than the actual number of categories (e.g. Hunter preset category number is greater than 4), is there an experimental display?\n\nIn https://anonymous.4open.science/r/NIPS2022-Rebuttal-FAD3/diff_C.png, we set C=6 in Hunter and find the results are almost the same with (C=4). See Q4 for more analysis.\n\n>Q9: The clustering results of unsupervised learning are shown in Appendix C, but I am more interested in the representation of the latent variable . Is it possible to show the prediction results of $f_{cat}$ in OCAP?\n\nActually Appendix C also shows the prediction results of $f_{cat}$, see the colored boxes in Figure10.\n\n>Q10: Does the method only apply to environments, which can be divided into grids? In line 105, the image is divided into $H\\times W$ cells, and each cell corresponds to an object. if the object is irregular or of a different size, how to handle it?\n\nObjects of different sizes can be handled by SPACE. Line 105 means that we have $H\\times W$ object slots, and each slot $(i, j)$ is tasked to capture the object $nearby$ the anchor point $(\\frac{iH_{img}}{H}, \\frac{jW_{img}}{W})$ in the image. This object is not necessarily of a regular size. For example, the bounding boxes (i.e., the captured objects) in \nhttps://anonymous.4open.science/r/NIPS2022-Rebuttal-FAD3/space_diff_obj_size.PNG (a detection result from the paper of SPACE)\nis actually of different shapes. In fact, $z_{ij}^{where}$ (one output of SPACE, see Eq.(1)) contains a set of parameters for a spatial transformer network that can automatically select the object patch and then resize it into a regular size.",
" Thanks for your very detailed review. We found your concerns (especially Q4) to be very constructive, which inspire us to improve our work further.\n\n> Q1: The paper gives an implementation that combines unsupervised learning, supervised learning, and reinforcement learning. Each of the sections chooses a classic implementation, but the reasonableness is not well illustrated and there is no experiment showing the advantages compared with other implementable methods.\n\nThe methods consist of three parts: UOD, OCAP, OCMR. In this paper, we mainly focus on harnessing the object information provided by UOD; therefore, we think the key contribution of this paper is the proposal of OCAP and OCMR. For both OCAP and OCMR, we do describe the advantages of our design at the beginning of the corresponding sections, and these advantages are also supported by experiments in Section4.3. For UOD, it is true that we do not describe why we choose SPACE & KMeans in Section3.1 because we do not take these as our main contributions, and they can be replaced by other methods whenever possible.\n\n>Q2: the paper points out that category information improves exploration efficiency and generalization, but there is no essential reason for the improvement (derivation, proof, etc.), nor is the condition and scope of its use stated.\n\nThere are several reasons that can account for the improvement:(1) By leveraging the category information, OCARL can learn to decompose the raw observations into a set of objects in an unsupervised fashion, which are disentangled representations and beneficial to the agent's understanding of the environment. The OCARL agent can focus on exploring the object functionalities without figuring out objects via reward signals, which will enhance the exploration efficiency. (2) Based on the object representations, OCARL builds a policy that makes decisions by reasoning over the functionalities and relationships of/between objects, which are invariant across different environment instances. By learning the invariant part between train and test environment, OCARL can generalize better. (3) OCARL adopts multiple independent and object-category-specific networks, each of which focuses on processing the object features of the same corresponding category. The processing logic of each independent network in OCMR is much simpler and, therefore, easier to master. \n\n>Q3: The UOD method uses data derived from the random exploration, but random policies often fail to explore the environment effectively and may result in some of the objects not being discovered.\n\nThis is exactly why we design OCAP. Thanks to OCAP, OCARL is robust to the incompleteness (i.e., ignoring some objects) of the UOD model, which has been demonstrated by experiments in Section4.3.1. Actually, such phenomenon (i.e., some objects not discovered) can be observed on the Crafter, in which OCARL has achieved impressive results. \n \n>Q4: This paper uses KMeans for clustering operation, so the number of categories k is the hyperparameter. In practical scenarios, we usually do not know the specific number of categories of objects explicitly, so there may be cases where the preset number of categories is greater or less than the actual number of categories. 
This paper only implements the \"less\" situation in the Crafter environment without considering the case where the preset number of categories is greater than the actual number of categories.\n\nIn our current experiment setting, the number of categories (C) is set by the oracle. However, it is easy to automatically find a proper (C). For example, we can use a clustering method (instead of KMeans) that does not need to specify (C) in advance (instead, the clustering method tells us the optimal (C)). Besides, there exist many metrics (such as Silhouette Coefficient) to measure the quality of clustering and can be utilized to find a proper (C). For example, in Hunter, we can derive Silhouette Coefficients for K=2,...,10:\n\n|K|2|3|4|5|6|7|8|9|10|\n|-|-|-|-|-|-|-|-|-|-|\n|silhouette coefficient|0.648|0.876|0.961|0.950|0.942|0.942|0.923|0.646|0.633|\n\nand automatically find that C=4 is the most proper choice. What's more, OCARL is also robust to the case when (C) is given 'more' than the ground-truth category number, as we show in this figure: \nhttps://anonymous.4open.science/r/NIPS2022-Rebuttal-FAD3/diff_C.png\nIn this figure, we set C=6 in Hunter (who has 4 ground-truth object categories), and find that the resulting performance is almost the same as C=4. This is because over-segmented clustering results provided by KMeans also contain category information, which can be used by OCAP+OCMR.\n\n\n\n",
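A minimal sketch of this selection procedure, assuming the object embeddings (e.g., the $z^{what}$ vectors from SPACE) are stacked into an array Z (names are illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def choose_num_categories(Z, candidates=range(2, 11), seed=0):
    # Z: (num_objects, d) object embeddings. Returns the cluster count C with
    # the highest silhouette coefficient, together with all candidate scores.
    scores = {}
    for C in candidates:
        labels = KMeans(n_clusters=C, n_init=10, random_state=seed).fit_predict(Z)
        scores[C] = silhouette_score(Z, labels)
    return max(scores, key=scores.get), scores
```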
" >Q10: Can Equation 5 be interpreted as applying attention weights based on the category probabilities predicted from Z? \n \nYes. Note that such attention weight is just a one-hot vector.\n\n>Q11: Is OCARL optimized with PPO as well? This should be clarified.\n\nYes. This has been clarified in line 198.\n\n>Q12: If you can explain the difference between RRL and OCARL and clarify the issues I have raised above I may be willing to increase my score.\n\nsee Q3.\n\n>Q13: Why were most of the results shown in Hunter rather than Crafter? \n\nThis is because Hunter is a flexible domain in which we have full control of the generating mechanism of different environment instances, making it possible for us to test OCARL's properties (such as OOD generalization). On the other hand, Crafter does not provide an easy way to control the generating progress of environment instances and runs much slower than Hunter.\n\n>Q14: Other advice\n\nWe thank you for your advice on improving the paper, and we have considered it in our latest revision.",
" We thank the reviewer for the insightful comments and suggestions.We would like to provide detailed explanations to your comments:\n>Q1: Crafter is a much harder environment of more significance to the community, so it would have been nice to see learning curves and ablations in this environment as well.\n\nThanks for your advice. Although Crafter is of more significance, it does not provides an easy way to control the generating progress of environment instances (such as the spawning distribution of each object category), which is needed in our experiments (such as Section4.2.2, Section4.3.2, and Section4.3.3). Besides, it will take much more time to run a trial in Crafter.\n\n>Q2: The current paper actually does not test generalization to novel instances of the same category\n\nYes, in the current paper, we are more interested in the generalization to unseen object combinations. Generalization to novel object instances does make sense in many applications and actually is exactly the topic of our next research. However, we think such generalization needs more effort to achieve. Generally speaking, such generalization needs the agent to interact with the novel object instances first and then infer their underlying categories according to these interactions. Such a problem is likely to be in the meta-RL domain, and we might need to train a separate inference model to predict objects' categories given their corresponding interactions.\n\n\n>Q3: [(1) Line 242-243 states that OCARL with p=1 is exactly RRL] v.s. [(2) OCARL uses the OCMR module, while RRL uses a universal network \"instead of multiple object-category-specific networks like OCARL does\"].\n\n(1) and (2) are both true and compatible. The main difference between RRL and OCARL is that OCARL harnesses the object category information discovered by UOD, which is achieved by Eq.(3) and Eq.(5). When p=1, the UOD cannot discover any objects, therefore all objects are labelled into \"background\". In such a scenario, the OCAP degenerates into a plain convolution encoder because the UOD model cannot provide any object information (i.e., Eq.(3) has not effect). The object-category-specific networks (Eq.(5)) in OCMR also degenerate into a single universal network because all object features are passed through the same network $f_{bg}$. Therefore, OCARL with p=1 is exactly the same as RRL.\n\n>Q4: The modularity experiment in Section 4.3.3 seemed interesting, but it was so hard to follow lines 272-276 \n\nThis section mainly shows that $f_C, f_Z$ (two object-category-specific networks in Eq.(5)) are one-to-one corresponding to behavior patterns (P1) chase&catch the Cow and (P2) avoid & shoot at Zombie, respectively. Line 272-276 shows that once $f_Z$ is disabled, (P2) disappears while (P1) still remains. \"Disable $f_Z$\" is achieved by replace the parameters of $f_Z$ with parameters of $f_{bg}$. Whether (P1) (or (P2)) disappears or remains is checked by running the resulting policy with certain networks disabled on Hunter-Z0C4 (or Hunter-Z4C0), which is shown in Table2. We have corrected an error in line 274, which may account for your incomprehension.\n\n>Q5: There are so many acronyms in the paper that it is hard to follow what's actually going on.\n\nThanks for your advice. There are four widely used acronyms in our paper: (1) OCARL; (2) UOD; (3) OCAP; (4) OCMR, and all these acronyms have been put together in Figure1 to enable easy reference. 
We think using these acronyms can make our paper more concise and accurate.\n\n>Q6: In Section 3.1, it is not clear what $z_{ij}^{what}$ actually is... is it a continuous embedding? A categorical label?\n\n$z_{ij}^{what}$ is a latent embedding of an object, which can be used to reconstruct the object. All categorical labels in the paper should be with superscript $\\cdot^{cat}$\n\n>Q7: Line 106 appears to state that $z^{fg}$ consists of a fixed number (H x W) of object representations, but the related work section states that this is only a drawback of spacial mixture models, and not SPACE (which uses a spatial attention module for the foreground).\n\nTraditional spacial mixture model methods only allow several object slots (e.g. In MONet, slot number=7). However, SPACE has $H\\times W$ object slots in total, which are sufficient for most tasks (in our experiment, it is $8\\times 8=64$). Besides, we can easily increase $H,W$ to introduce more slots.\n\n>Q8: Lines 100-101 contain an out of place detail (we run a random policy to get data and apply SPACE) that seems premature given the environment has not been introduced at this point.\n\nThanks for your advice. We have moved this sentence to the Appendix.\n\n>Q9: The explanation in lines 150-157 is unclear.\n\nLine 150-157 describes 2 processes: (1) add x-y coordinate information (2) apply a self-attention module to model the relations between objects. Eq.(4) is just a formula of self-attention, where $\\hat{Z}$ is used as query, key, and value. We have rewritten these sentences to make them clearer.\n\n",
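As a rough illustration of the self-attention step described in the Q9 response (with $\hat{Z}$ as query, key, and value), here is a generic PyTorch sketch; the shapes and the choice of `nn.MultiheadAttention` are assumptions for illustration, not the authors' exact code.

```python
import torch
import torch.nn as nn

# The same object features serve as query, key, and value, so the module
# models pairwise relations between object slots.
attn = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
z_hat = torch.randn(1, 64, 64)        # (batch, num_object_slots, feature_dim)
z_out, _ = attn(z_hat, z_hat, z_hat)  # relation-aware object features
```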
" We thank the reviewer for the thoughtful comments.\n>Q1: Missing literature (A)(B)\n\nThanks for your advice. We have added these papers to our citation list of the latest revision. We find (A) is more interesting, which says that the neural mechanism of specialization can lead to better generalization ability (We have added a sentence in Section3.3 to discuss (A) in our latest revision). This perspective also agrees with [1], which we have cited in our paper.\n\n[1] How modular should neural module networks be for systematic generalization\n\n>Q2: Might be good to clarify why the two domains mentioned in the paper were chosen.\n\nCrater and Hunter are chosen in our paper because of the consideration of both complexity and flexibility. Crafter is a complex domain with 19 kinds of objects in total and is of much more considerable significance to the RL community, which can be utilized to test OCARL's ability to handle complicated object combinations. On the other hand, Hunter is a flexible domain in which we have full control of the generating mechanism of different environment instances, making it possible for us to test OCARL's various properties (such as OOD generalization).\n\n>Q3: Is the number of classes (C) a known, required parameter? Was this known for the games, or not?\n\nIn our current experiment setting, (C) is known and set by an oracle. However, it is easy to find a proper (C) automatically. For example, we can use a clustering method that does not need to specify (C) in advance (instead, the clustering method tells us the optimal (C)). Besides, many metrics (such as Silhouette Coefficient) exist to measure the quality of clustering and can be utilized to find a proper (C). For example, in Hunter, we can derive silhouette coefficients [0.648, 0.876, 0.961, 0.950, 0.942, 0.942, 0.923, 0.646, 0.633] for C=2,3,...,11, and automatically find C=4 is the most proper choice.\n\n>Q4: Why were Crafter and Hunter domains chosen? Were there any specific properties of these domains that made them harder/easier for OCARL?\n\nsee Q2.",
" Thanks for the thoughtful comments. We would like to clarify the concerns as follows:\n>W1: The tested benchmarks seem to be limited. I am wondering if there are some larger scale benchmarks to test the proposed method.\n\nThere do exist many benchmarks in RL. However, most of them are not object-oriented or as complicated as Crafter. Perhaps NetHack can be a candidate, however its observations are symbolic (instead of pixel images) and NetHack needs training for more than 1 billion environment steps, which is beyond the interest of this paper. Crafter is actually very complicated and can 'benchmark the spectrum of agent capabilities' (as Figure4 shows), which recently attracted researchers' interest in the RL community. Crafter features a large object category number (19) and also a large possible action number (17), making it a challenging benchmark that is not well solved by other model-free RL algorithms.\n\n>W2: The description of the proposed method is not quite clear, I would recommend the paper to at least include some preliminary on SPACE in the main paper or the supplementary instead of asking the reader to refer to the original paper, IMO SPACE is an important component of the proposed method, so it is better to describe it in the main paper.\n\nThanks for your advice. In our latest revision, we have added a section in the Appendix to describe SPACE in more detail. Actually, we have already introduced the inference model of SPACE (the most important part of SPACE which infers the object representations from raw images) in Section3.1, which should provide readers a rough understanding of SPACE. \n\n>Q1: Some technical details about the SPACE method for unsupervised object discovery is not quite clear\n\nSee W2.\n\n>Q2: The unsupervised object discovery module is trained on images generated from a random policy, it would be interesting to see if using a better policy to generate the images used for training the object discovery module could improve the performance, for example, using the PPO policy to generate images.\n\nThis is a good idea and also a promising direction for future work. However, in general, we do not have access to a well-trained policy to collect data from the environment, which means the UOD model may be incomplete (i.e., ignore some objects). This is exactly why we design OCAP, which makes OCARL robust to the UOD model's incompleteness. \n\n>Q3: Is there other tasks that are larger than Crafter and Hunter in terms of the image size and action space? I think given the impressive performance of the proposed method, it is reasonable to see if the proposed method can work on larger scale problems.\n\nSee W1.\n\n>Q4: There are other object centric representation learning method that could be discussed, such as slot attention[R1]\n\nThanks for your advice, and we added [R1] in our reference list and mentioned it in our related work. We think [R1] should fall into the spacial mixture models, because it iteratively clusters features that belongs to the same object.\n\n\n\n",
" This paper proposed to learn object representations that can utiltize the category informations for perception and reasoning in reinforcement learning.\nExperiments on two RL benchmarks show clear improvements of the proposed method over previous RL methods.\nThe ablation studies also cover a lot of aspects of the proposed method and show clear improvement of the each part of the proposed method.\n \nS1: The idea that object representations should encode category informations is interesting and IMO important for build generlizable RL methods.\n\nS2: In the evaluation of this paper, the proposed method shows a very clear and strong improvements over previous methods.\n\nW1: The tested benchmarks seems to be limited, I am wonderring if there are some larger scale benchmarks to test the proposed method?\n\nW2: The description of the proposed method is not quite clear, I would recommand the paper to at least include some preliminary on SPACE in the main paper or the supplementary instead of asking the reader to refer to the original paper, IMO SPACE is an important component of the proposed method, so it is better to describe it in the main paper.\n \nQ1: Some technical details about the SPACE method for unsupervised object discovery is not quite clear, I would \n\nQ2: The unsupervised object discovery module is trained on images generated from a random policy, it would be interesting to see if using a better policy to generate the images used for training the object discovery module could improve the performance, for example, using the PPO policy to generate images.\n\nQ3: Is there other tasks that are larger than Crafter and Hunter in terms of the image size and action space? I think given the impressive performance of the proposed method, it is reasonable to see if the proposed method can work on larger scale problems.\n\nQ4: There are other object centric representation learning method that could be discussed, such as slot attention[R1].\n\n[R1] Object-Centric Learning with Slot Attention, NeurIPS 2020\n I think the major limitation of this paper is that the evaluated tasks are only two, I would expect to see more tasks given that the proposed method works so well the task tested.\n",
" The paper proposes augmenting Object Oriented RL approaches with additional information in the form of object category. Specifically, they use a module which uses unsupervised clustering to identify object categories from the output of a (previously known) unsupervised object detection algorithm. This is additional info (predicted category) is then incorporated using a convolutional encoder (OCAP). The object-category aware information from this module is passed into the third module which contains category-specific neural networks, which finally returns the action probabilities and values. Stengths:\n\n1. The motivation is clear and well represented in the paper. The paper does a good job of placing their contributions in the context of existing works.\n\n2. OCARL does generalize better than standard OORL approaches. \n\n3. The methods section presents approach in detail, and should be reproducible for future researchers.\n\n4. Ablation study is solid and gives a sense of which module helps with the performance bump. \n\nWeaknesses:\n\n1. Missing literature: The work cited below [A,B] seem closely related to some work on OOD generalization to novel combinations. In fact, experiments in [A] might also explain why OCMR helps generalize when using separate category-specific neural networks. Would be good to include a line or two discussing how OCMR connects to this work. \n\nA. https://www.nature.com/articles/s42256-021-00437-5.pdf\nB. https://www.nature.com/articles/s41593-018-0310-2\n\n2. Might be good to clarify why the two domains mentioned in the paper were chosen. 1. Is the number of classes (C) a known, required parameter? Was this known for the games, or not? \n\n2. Why were Crafter and Hunter domains chosen? Were there any specific properties of these domains that made them harder/easier for OCARL? Yes",
" The paper proposes learning object categories in order to enhance object-oriented RL (OORL). While past work on OORL has shown the benefit of extracting objects from a scene before learning with RL, this work has not considered learning to cluster objects into related categories so that the policy can treat similar objects the same. This work uses an existing Unsupervised Object Discovery (UOD) method to discover objects, but then learns a clustering of the resulting objects. It then asks the RL network to not only be able to predict the category of objects (forcing the representation to learn about object categories), but it also uses the object categories to apply independent category-specific networks to each object, which the authors argue improves modularity and therefore generalization. The paper shows results in two gridworld domains which represent complex sequential decision making tasks, and show significant improvements over relevant baselines and ablations. ### Strengths\n**Originality**: The idea of learning categories of objects, and a policy that treats objects in the same category in a similar way, is both compelling and novel. The ablation studies reveal that the paper essentially makes two novel algorithmic contributions based on object category awareness that improve performance: OCAP (predict object cluster labels), and OCMR (having different modular independent networks for different object categories). \n\n**Significance:**\nThe paper benchmarks against two relevant OORL baselines (RRL and SMORL), and a reasonably advanced RL method (PPO), and shows clear and significant performance gains above these methods in complex sequential decision making tasks including Crafter (which is based on MineCraft). Although these environments are gridworlds, crafting is a complex problem and it is difficult for conventional methods to obtain high rewards. The idea of reasoning over object categories is likely to be useful in a broader range of tasks (e.g. object manipulation). The generalization performance is also interesting. \n\n**Quality**: \nThe design of the network architecture proposed in the paper appears to be complex but thoughtful; the authors carefully justify each design decision (e.g. why the network should predict the object category as an objective, rather than use it as input, so it can be robust if the UOD module fails to extract the right category). The ablations also back up the claims that each component is necessary. \n\n**Clarity**:\nThe description of related work is clear and gives a good overview of the field while drawing the distinction with this work.\n\n### Weaknesses:\n**Significance/Quality:**\nIn spite of the fact that experiments were conducted in Crafter, most of the analysis for the paper is shown in Hunter. Crafter is a much harder environment of more significance to the community, so it would have been nice to see learning curves and ablations in this environment as well.\n\nFurther, although the paper tests generalization, it does so in a pretty limited way, because in no experiment is the agent ever tested on a truly novel object (only on differing combos of zombies and cows, but it has always seen at least one zombie or cow previously). The current paper actually does not test generalization to novel instances of the same category. 
It would be interesting to extend the experiments to another domain (one example could be simulated robotic manipulation of objects on a tabletop), and test whether the method can truly generalize to new instances of the same category. For example, if the agent had trained to pick up a red cup and a blue cup, could it generalize to picking up a green cup? \n\n**Clarity**:\nThe biggest weakness of the paper is clarity. In addition to smaller grammatical errors, there are some places where explanations significantly detract from clearly understanding the paper. The single biggest issue is in the method/results. Line 242-243 states that OCARL with p=1 is exactly RRL (where p is the probability that the UOD module can't detect a category). If this were true, this would significantly detract from the novelty of the paper. However, a few sentences below in lines 253-255 the paper directly contradicts this statement, by pointing out that OCARL uses the OCMR module, while RRL uses a universal network \"instead of multiple object-category-specific networks like OCARL does\". One of these two statements must be false, and I believe it is the one about the exact equivalence with RRL. This should be clarified, and it would be good to further explain in the related work or methods how the architecture being proposed is different than RRL.\n\nThe modularity experiment in Section 4.3.3 seemed interesting, but it was so hard to follow lines 272-276 I was not sure what the experiment actually showed. Is there a table or figure describing the results?\n\n A non-exhaustive list of other clarity issues: \n- There are so many acronyms in the paper that it is hard to follow what's actually going on. Rather than OCAP and OCMR, it might be nice to use a phrase like \"category modules\" to cue the reader. \n- In Section 3.1, it is not clear what $z_{ij}^{what}$ actually is... is it a continuous embedding? A categorical label?\n- Line 106 appears to state that $z^{fg}$ consists of a fixed number (H x W) of object representations, but the related work section states that this is only a drawback of spacial mixture models, and not SPACE (which uses a spatial attention module for the foreground).\n- Lines 100-101 contain an out of place detail (we run a random policy to get data and apply SPACE) that seems premature given the environment has not been introduced at this point. It is hard to evaluate whether a random policy would be able to collect enough data to cover all object categories without knowing about the environment. This also does not seem to be part of the *method*, per se, but rather an implementation detail.\n- The explanation in lines 150-157 is unclear. \n- Can Equation 5 be interpreted as applying attention weights based on the category probabilities predicted from Z? The explanation / rationale should be improved here. \n- Is OCARL optimized with PPO as well? This should be clarified. If you can explain the difference between RRL and OCARL and clarify the issues I have raised above I may be willing to increase my score.\n\nWhy were most of the results shown in Hunter rather than Crafter? Crafter is a complex unsolved domain so it is of significant interest to the community. I would suggest moving Figure 8 from the appendix into the main text if possible, since those results are compelling.\n\nI would suggest removing the italics such as \"OCARL achieves better performance on *all* tasks\" and \"is the *only* method\". 
This gives an appearance of a lack of objectivity that is not appropriate for a scientific paper. \n\nI suggest adding a sentence after line 89 to make it clear how OCARL is distinct from SMORL.\n\nThe paper contains several English-language errors that should be proof-read and corrected. A non-exhaustive list:\n- Line 23: \"To deal with these limitations, OORL is a promising way\" -> \"OORL is a promising way to deal with these limitations\"\n- Line 25: invariant -> invariance\n- Line 29: \"yield in low generality\" doesn't make sense. \n- Line 158: concatenate -> concatenated \n\nI appreciated the cognitive science justification for object awareness that the authors provided. The authors acknowledge the limitations of their clustering approach in line 124-126, but did not otherwise acknowledge the limitations of their method. I would suggest acknowledging that the didn't test generalization to novel instances of the same object category, as I explained above.\n\nIt does not appear that the authors included a discussion of the societal impact of their work. Perhaps they can talk about the benefit of improved object-oriented RL to the potential for building better robots, and the consequences thereof. ",
" This paper proposes a new framework applied to Object-oriented reinforcement learning (OORL) called Object-Category Aware Reinforcement Learning (OCARL), which aims to explicitly model the similarity between different objects of the same category. It consists of three main components: 1) UOD: unsupervised learning method (SPACE+IncrementalPC+KMeans) is used to complete the identification and clustering of objects; 2) OCAP: the clustering results are used as supervised learning signals to guide the encoding of objects; 3) OCMR: a self-attention mechanism is introduced, and different category objects use independent networks to complete the reasoning process. The experiments show that OCARL can improve the sample sampling efficiency and generalization ability of the model. Strengths:\n\n1. The paper contributes to improving object representation learning in model-free RL which has practical applications in object-oriented RL.\n2. Compared with existing methods, OCARL shows outstanding generalization ability.\n3. The paper is well written and well organized. \n\nWeaknesses:\n\n1. The paper gives an implementation that combines unsupervised learning, supervised learning, and reinforcement learning. Each of the sections chooses a classic implementation, but the reasonableness is not well illustrated and there is no experiment showing the advantages compared with other implementable methods.\n2. The paper points out that category information improves exploration efficiency and generalization, but there is no essential reason for the improvement (derivation, proof, etc.), nor is the condition and scope of its use stated.\n3. The UOD method uses data derived from the random exploration, but random policies often fail to explore the environment effectively and may result in some of the objects not being discovered.\n4. This paper uses KMeans for clustering operation, so the number of categories k is the hyperparameter. In practical scenarios, we usually do not know the specific number of categories of objects explicitly, so there may be cases where the preset number of categories is greater or less than the actual number of categories. This paper only implements the \"less\" situation in the Crafter environment without considering the case where the preset number of categories is greater than the actual number of categories.\n5. Generalizability requires that the model be trained and applied to new data or new environments. However, the UOD is trained using the same environment as the test.\n6. There may be a formatting error at the top of page 5, where the first two lines have no line number and the formula is not coded, resulting in an error in the formula reference in line 259. 1. Are there experiments that show it is possible to continue training the UOD in the RL phase?? It would make the results stronger and less reliant on a good random exploration strategy.\n2. When the number of preset categories is greater than the actual number of categories (e.g. Hunter preset category number is greater than 4), is there an experimental display?\n3. The clustering results of unsupervised learning are shown in Appendix C, but I am more interested in the representation of the latent variable $Z$. Is it possible to show the prediction results of $f_{cat}$ in OCAP?\n4. Does the method only apply to environments, which can be divided into grids? In line 105, the image is divided into $H\\times W$ cells, and each cell corresponds to an object. 
if the object is irregular or of a different size, how to handle it? The paper doesn't really discuss its own limitations explicitly. Adding a section would be quite helpful. OCARL requires additional information about the environment (the number of categories), which may not work well if the difference between the number of preset categories and the actual number of categories in the environment is large.",
" This paper proposed a object-oriented reinforcement learning method, wihich consists of three parts: Category-Aware UOD, OCAP and OCMR. Category-Aware UOD mainly provides category supervision information for OCAP, enabling OCAP to perceive objects in the environment more accurately.OCMR learns the interactions between objects in a self-supervised manner to better predict action probabilities. Through the recognition of object categories in the environment and the learning of interrelationships, the model can learn better action strategies. It is verified that the proposed model can better perform reinforcement learning tasks in two experimental environments. Strengths:\n1)It is an interesting strategy to help the agent better perform reinforcement learning tasks by identifying the types of objects in the environment.\n2) The experimental results show that the proposed method is effective.\n\n\n\n 1) Why do you need to take the maximum value of Z^out and Z in Eq. 6?\n2)In the OCAP module, it is surprising that different objects can be segmented unsupervised from the environment image only by a simple convolutional neural network. Can you explain why this works? 1) The proposed method does not consider the experimental environment with occlusion between objects, which is unreasonable."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7,
4,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
5,
3
] | [
"JxfzaVuMrEFK",
"T3J_RgpQSjse",
"ETYSwlqH4kn",
"nips_2022_9Qjn_3gWLDc",
"V3wlNUiUyq4",
"-soaAakCA1",
"LHTZMomMW0K",
"ETYSwlqH4kn",
"LxMBHufC0-j",
"XtBfK4oQ3o2",
"hC15WMEZm8P",
"nips_2022_9Qjn_3gWLDc",
"nips_2022_9Qjn_3gWLDc",
"nips_2022_9Qjn_3gWLDc",
"nips_2022_9Qjn_3gWLDc",
"nips_2022_9Qjn_3gWLDc"
] |
nips_2022_LdAxczs3m0 | Efficient Risk-Averse Reinforcement Learning | In risk-averse reinforcement learning (RL), the goal is to optimize some risk measure of the returns. A risk measure often focuses on the worst returns out of the agent's experience. As a result, standard methods for risk-averse RL often ignore high-return strategies. We prove that under certain conditions this inevitably leads to a local-optimum barrier, and propose a mechanism we call soft risk to bypass it. We also devise a novel cross entropy module for sampling, which (1) preserves risk aversion despite the soft risk; (2) independently improves sample efficiency. By separating the risk aversion of the sampler and the optimizer, we can sample episodes with poor conditions, yet optimize with respect to successful strategies. We combine these two concepts in CeSoR - Cross-entropy Soft-Risk optimization algorithm - which can be applied on top of any risk-averse policy gradient (PG) method. We demonstrate improved risk aversion in maze navigation, autonomous driving, and resource allocation benchmarks, including in scenarios where standard risk-averse PG completely fails. | Accept | Overall, the reviewers were satisfied with the author response and recommend acceptance. However, there were many discussion points and nuanced details that arose during post-rebuttal author-reviewer discussion. Reviewers would like to see these discussion points, clarifications, and requests for revision addressed in the camera-ready. To this last point, I specifically highlight the writing/illustrative example discussion that the authors had with reviewer QDcc. I fully agree that refactoring a paper is challenging, but ultimately, the suggested modifications will improve the accessibility of the ideas and contributions in the paper. | train | [
"njPD9UUtIJ",
"_M-rH6ty4V",
"IZxh4IVxHhv",
"vwjpwaJC9WY",
"2nZhD-HV9sw",
"15ibH1DMU7",
"LAHA0oHhgT",
"nI6ZiF2DCST",
"AAMnsS1Jfcm",
"KB8iNx48_xu",
"CFwk8w96Ipj",
"zX3nR_qchmP"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks again for your response, and apologies about the late reply. \n\nIn terms of algorithm choice, I was only suggesting to extend the comparison from Guarded maze to traffic and server control domains, since I suspect (though could be wrong) that DRL should be less brittle on those problems than it would on the grid world domain. Given that Keramati's code has not been openly published, it would certainly be too much to ask for it during the rebuttal phase. However, I do recommend to a comparison in the camera ready if you can, because that algorithm seems like an obvious choice to address some of the problems also being tackled here as well, and could be seen as SOTA in some sense. My response to your claim \"that once we develop the understanding of Theorem 1, a simple and heuristic solution can indeed be sufficiently effective\" is that I do really like the connection you make between the theory and the annealing trick, which seems well-justified to me. ",
" I appreciate that the authors clarify the concerns, especially the relationship between CEM and soft-risk part. They authors also rearrange and extend the main text which makes it much clear now. I have raised my score.",
" Thank you for the detailed response to our comments!\n\n* **QR-DQN**: We have indeed begun to investigate the phenomena and solutions of our work in the framework of DRL. While the same limitations indeed apply to this framework (as discussed in our previous response), DRL has different training dynamics and thus we intend to study the solutions carefully in a separate research. Furthermore, in DRL, we suspect that clever sampling may have benefits even without risk-aversion (e.g., in face of near-degenerate returns as you mentioned), thus we see this framework as promising for future work.\n * BTW, is there any reason you referred to QR-DQN rather than IQN, besides being implemented in stable-baselines and used in our previous response?\n* **Further experiments**: following our extended experiments described in the first round of the rebuttal, we are now considering further experiments to extend our empirical evidence - both baselines and references as you mentioned - though we do not expect to have further results by the end of the discussion period tomorrow. Specifically regarding Keramati et al., do you happen to be familiar with an available code implementation? That could speed things up, but we did not find one.\n* **alpha’**: A dedicated discussion is a good idea. We’ll add it in Section 3 / Section 4.2 or in the appendix, according to space limitations. We’ll add a plot(x=iteration, y=alpha’) for a clear description of the scheme; and a discussion of why it actually bypasses the issue of Theorem 1, possibly through the Guarded Maze as a concrete example. We’ll also stress the point we mentioned in the rebuttal - that once we develop the understanding of Theorem 1, a simple and heuristic solution can indeed be sufficiently effective. Is there anything else you’d like to see in that discussion?\n* **Writing**: After addressing the questions regarding correctness and experiments, we now indeed consider both restructuring the paper (mostly switching the motivation of Section 4 with the solution of Section 3), and adding a guiding example as you suggested (probably using the Guarded Maze, unless you have another suggestion). Refactoring is always tricky, but we believe we’ll indeed be able to bring the paper to a clearer form without creating other sources of unclarity. Thanks!",
" Thank you for addressing the major concerns I had about the design of the work, and I have raised my score as a result (and possibly further depending on discussion). \n\nI agree with the findings of the additional experiments run to address my comments and those of the other reviewers. I think it will be a good idea to incorporate qrdqn and the approach of Keramati et al. as a baseline in all the experiments, especially since in my past investigations I have found DRL to work particularly poorly on the types of (near-degenerate) return distributions provided by grid-world domains. I would also recommend to evaluate the methods suggested by reviewer xSEv as well. \n\nI do like the overall approach taken and, while I do agree partly with the reviewer's concerns about the annealing trick for alpha being heuristic, the choice to bias with the soft risk is sufficiently supported by the theoretical claims. I think providing further discussion about the alpha' outside of the pseudocode would be helpful. \n\nThat said, the main concept of the paper is quite nuanced and it can be difficult to follow without a clearer road map early on. I would still encourage the authors to provide some form of illustrations of the blindness to success phenomenon and explicitly how (and why) existing PG updates failed in the risk sensitive setting, e.g. through a stylized worked example or derivation of the PG gradient update where it is zero. This might help readers to navigate the paper (which reads quite dense) and understand the phenomenon a bit clearer early on.\n",
" We thank the reviewer for their helpful comments. Please see our responses below, and in particular the clarification of our novelty. We will appreciate it if the reviewer can point to any unclarity in the paper regarding this issue, and reconsider their assessment of novelty in light of the discussion below.\n\n### Novelty\nThe reviewer wrote, “The main contributions seem to be a heuristic risk quantile hyper-parameter scheduling…”:\n* We point to a currently not discussed limitation of risk-sensitive optimization algorithms such as CVaR-PG - *blindness to success* - grounded by both theory (Section 4.1) and detailed experiments (Section 5.1). As stated by Reviewer QDcc, *“Theorem 1 is an important and interesting result that has standalone value for the general research area”*.\n* Based on this new understanding, we indeed suggest the simple and effective solution of soft-risk.\n\nThe reviewer wrote, “...and a sampling-based previous CEM”:\n* This is not accurate: the standard CEM is not applicable to a non-stationary distribution (such as the returns distribution of a training agent). We had to develop a novel dynamic-target version of the CEM, as mentioned in Section 1 and described in Section 3. As mentioned by Reviewer WUYT, *“They extend the CE method for their particular setting”* (in fact, this extension can also be applied to other non-stationary problems).\n* In addition to the sampler’s novelty, the application of any sampler to focus on high-risk parts of the environment, as well as the theoretically-analyzed motivation (sample-efficiency), are important contributions by themselves. As stated by Reviewer QDcc, *“the application of CE and its corr. Proposition 1 is novel”*.\n\nNote that the conceptual novelties discussed above are also backed by the significant results of Figure 1 and Section 5.\n\n### Responses to questions\n1. **The PG reference**: As mentioned in the beginning of Section 5, the standard PG is not a competitor of CeSoR. Rather, it is used to demonstrate the inherent tradeoff between the different objectives (mean/CVaR). In all the benchmarks, PG performs well and achieves the best *mean* return (see Figures 1a-1c), using sensible policies (see Figures 11, 15, 19 in the appendix). In all the benchmarks, PG faces an *inherent risk* of the environment (the guard in the maze, the leader behavior in the driving, and the peak-loads in the servers), and compromises it in order to improve the *average*. In the only exploration-heavy benchmark - the Guarded Maze - PG learns the short-path policy (see Figure 11), which is mean-optimal, indicating it did not suffer from lack of exploration. The inferior CVaR of the risk-neutral PG comes from objectives difference, and does not reflect a failure to learn.\n2. **CEM vs. soft-risk**: In addition to their independent motivations (sample-efficiency and blindness prevention, respectively), the soft-risk has a side effect of reducing the risk aversion in the beginning of the training, and the CEM reduces this side effect (as mentioned in the abstract, discussed in the the soft-risk paragraph of Section 3, and demonstrated in Section 5.1 and Figure 3). **We now extended the discussion in Section 3 to make this clearer**. Regarding Algorithm 1, the CEM changes phi, which changes the next-iteration trajectories, which determine the next PG step.\n\n### Other comments\n* The separation between Sections 2 and 3 is of prior work vs. novel work. 
Hence, the original version of the CEM is introduced in Section 2, and our novel dynamic-target CEM is in Section 3. To prevent confusion, we now changed Section 2's title to \"Preliminaries and Problem Formulation\".\n* We corrected the grammar and defined the indicator notation. Thanks!",
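For readers unfamiliar with the indicator notation mentioned at the end of this response, here is a generic Python sketch of a CVaR-PG surrogate loss in the spirit of GCVaR, where only the alpha-tail of episode returns contributes to the gradient. This is a standard illustrative form under assumed shapes, not the paper's exact estimator.

```python
import torch

def cvar_pg_loss(log_probs: torch.Tensor, returns: torch.Tensor, alpha: float) -> torch.Tensor:
    """Surrogate whose gradient follows the tail-only CVaR-PG estimator:
    (1 / (alpha * N)) * sum_i 1{R_i <= q_alpha} * (R_i - q_alpha) * grad log pi(tau_i)."""
    q_alpha = torch.quantile(returns, alpha)   # empirical alpha-quantile of returns
    tail = (returns <= q_alpha).float()        # the indicator 1{R(tau) <= q_alpha}
    surrogate = (tail * (returns - q_alpha) * log_probs).sum() / (alpha * len(returns))
    return -surrogate                          # negate so that minimizing ascends CVaR
```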
" We thank the reviewer for their detailed comments.\n\nAlso thanks for pointing out the two relevant works, which we now added to Section 1.1. However, please mind the significant differences between our works:\n* **Adaptive Sampling for Stochastic Risk-Averse Learning**: They indeed apply a sampling method to focus on risk, though not in an RL framework. In addition, **their discussion of zero gradients refers to a different phenomenon**: “only a fraction alpha of points will contain gradient information. The gradient of the remaining points gets truncated to zero” - this is the well-known phenomenon to which we referred as “sample inefficiency” or “throwing data away”. They do not address the phenomenon we call “blindness to success” - that is, zero gradients in the *remaining* alpha points, which completely eliminates the whole gradient and stops the entire learning.\n* **On the Convergence and Optimality of PG for Risk**: They mention the appearance of vanishing gradients in a specific example in the bandit setup, as part of their detailed discussion on non-unique stationary points. However, it seems that they do not analyze the phenomenon of the plateau in the loss; do not characterize it for general RL problems in terms of the returns distribution profile; do not make the separation between high-risk conditions and poor agent returns; and they leave the suboptimality gaps as a challenge for future research.\n\n### Responses to questions\n* **Figure 2**: Every subfigure represents a batch of episodes; every point represents a single episode; C is the context of that episode and R is its return, both known to us during training (independently of which algorithm we use). Also note that Figure 2 specifically is only a qualitative illustration: as mentioned, it is analogous to the Maze benchmark, but does not present actual data. We now made this clearer in the caption.\n* **alpha’ scheduling**: The heuristic reasoning is a simple linear-decay from 1 to alpha (we briefly discuss that at the end of Section 4.1). The decay stops a while before the training’s end, so that the last part of the training has a stable objective. We did not try any other scheduling schemes (as mentioned in Section 4.1, this is left for future work). Our main take-away is that once we understand the problem of blindness to success, the solution can be simple and does not have to be particularly accurate.\n* **Theorem 1 and m0**: The gradient vanishes if there's a beta-tail barrier for beta>alpha (i.e., if all the beta lowest returns are identical). This may happen at any time during the training. We denote such a time by m0, and claim that there will be no further learning after iteration m0. Hence, *m0 is the first iteration where we encounter a barrier*. Smaller alpha makes the barrier more probable, thus it may come earlier (i.e., potentially decreasing m0).\n* **The GCVaR baseline**: As mentioned in Sections 3 (last paragraph) and 4 (first paragraph), unlike other methods, GCVaR has certain convergence guarantees that propagate to CeSoR if GCVaR is used as a baseline. In addition, GCVaR's simplicity arguably makes the empirical comparison cleaner.\n* **Clipping**: If the reviewer refers to weights clipping, note that GCVaR (with the default sample distribution) does not require weights and thus nor weights clipping.\n* **Following the review, we ran ablation tests for the rest of the benchmarks and added a corresponding appendix**. 
In both benchmarks (driving game and servers allocation), both CeR and SoR still lose to CeSoR in terms of both CVaR and mean. CeR performs similarly to GCVaR (but at least converges faster). SoR is closer to CeSoR but still loses in both metrics, which may be attributed to CeSoR’s increased sample-efficiency.",
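A minimal sketch of the linear alpha' decay described in the scheduling answer above (decay from 1 to alpha, then hold for the last part of training); `freeze_frac` is an assumed hyper-parameter for when the decay stops, not a value from the paper.

```python
def soft_risk_alpha(iteration: int, total_iters: int, alpha: float,
                    freeze_frac: float = 0.8) -> float:
    """Linearly decay alpha' from 1 to alpha, then hold alpha so that the
    last part of training optimizes a stable objective."""
    decay_iters = max(1, int(freeze_frac * total_iters))
    t = min(iteration / decay_iters, 1.0)
    return 1.0 + t * (alpha - 1.0)
```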
" We thank the reviewer for their helpful comments.\n\nAlso thanks for pointing out the sensitivity of Proposition 1 to the quantile accuracy assumption. While this was already discussed in Appendix B, we now added a discussion in Section 4.2 as well.\n\n### The $\\beta$ hyper-parameter\n* **Intuitively**, every CE-iteration we focus on the beta-tail of the previous iteration, until we reach the alpha-tail of the reference distribution. Hence, intuitively, we expect exponential convergence to the desired alpha-tail, and larger values of beta are expected to cause only a small delay. Furthermore, even if the sampler is biased and samples from a tail less extreme than alpha, this should still provide an improvement over a neutral sampler.\n* **CEM convergence**: while having certain convergence guarantees, the rate of the CEM’s convergence is tricky to theoretically analyze for the general case. The dynamic-target in our CEM version sets an additional challenge for such a theoretical analysis (even though the target determined by the policy typically changes more slowly than the CEM sampler). These challenges are orthogonal to beta.\n* **CeSoR convergence**: the convergence proof in Appendix C does not rely on the performance of the CEM (and in particular not on beta), as the expected bias of the gradient is bounded for *any* sample distribution (as long as it has the same support as the original distribution).\n* **Practically**, we had simply used beta=0.2, which gives a decent sample size, yet expected to bring us to the 0.01-tail of the reference distribution within a few iterations. We hadn’t had to make any tuning for this parameter. Furthermore, Figure 5 in Appendix D2 shows that as desired, the sample-mean follows the reference-CVaR quite closely (up to the exceptions discussed at the end of Appendix D2).\n* **Following the review, we ran sensitivity tests for all the environments, and added a corresponding discussion in the appendix**. In the maze and the driving game, all $\\beta \\in [0.05, 0.5]$ provided similar test results, and only the highest ($\\beta=0.5$) caused any visible delay in training convergence. In the servers allocation problem, the sampling task is more challenging due to the combination of small alpha (0.01) and poor distribution parameterization (Binomial, as discussed in Appendix G); there, beta<0.3 still performs similarly to the original CeSoR, but higher values fail to sample the tail, and begin to deteriorate towards GCVaR performance. Note that even under such a unique combination of poor choices (Binomial parameterization and very high beta), the failure of the CEM is easy to notice (in Figure 5c, the sample-mean fails to deviate from the reference-mean), and thus is easy to fix.",
" We thank the reviewer for their detailed and helpful comments.\n\n### Distributional RL\nNote that DRL in general is not a risk-averse algorithm, but rather risk-neutral. While the learned distribution can be leveraged to prefer risk-averse actions (e.g., distributional-rl.org, chapter 10, page 318), this approach suffers from similar difficulties to those of PG, as discussed below.\n\n* **DRL with CVaR on inference**: Consider a standard DRL agent (i.e., trained to optimize the *mean*), that is set to choose actions on inference according to *CVaR*. The distribution is learned wrt the training policy. Hence, similarly to other methods, the values would be incorrect once we changed the policy: the CVaR of the current action does not take into account the change in the next action. Thus, this naive approach would not truly optimize the CVaR of the return.\n* **DRL that uses CVaR consistently on both training and inference**: This approach still suffers from similar issues to PG.\n * Regarding sample-efficiency, while not completely ignoring most of the data, still only a small portion of the data corresponds to high-risk conditions, making it difficult to learn how to perform well under them. Over-sampling of risk could still improve the accuracy of the learned distribution’s tail.\n * Regarding blindness to success, this method is still prone to miss beneficial strategies: it still directs the policy according to the worst performance rather than the hardest conditions, and learns the distribution wrt that policy.\n* **We added corresponding experiments**: We ran both approaches mentioned above on the Guarded-Maze benchmark. We used the framework of stable-baselines3-contrib / qrdqn, and inserted the CVaR by replacing the mean when aggregating over quantiles in *policies.py* and in *qrdqn.py*. The results fit the discussion above.\n * Switching to CVaR after training: this resulted in a messy and seemingly meaningless policy, obtaining worse CVaR than GCVaR.\n * Using CVaR for both training and inference: **identically to GCVaR, this learned to avoid both the short path and the long path and obtained a constant return of -32**. These results indeed indicate that it suffers from the same limitations as GCVaR.\n * We hope that the new experiments also address the reviewer’s concern about the lack of comparative baselines besides GCVaR and PG.\n\n* **In summary, we argue that blindness to success and sample-inefficiency are general phenomena in risk-averse RL, and in particular apply in DRL in addition to PG**. In this sense, in addition to our direct contribution to risk-averse PG, we hope to pave the way for other efficient risk-averse RL methods (as mentioned in the last paragraph of the paper). **We added the corresponding discussion and results to the appendix**.\n\n### Responses to the other questions\n2. **CEM vs. soft-risk**: In addition to their independent motivations (sample-efficiency and blindness prevention, respectively), soft-risk has a side effect of reducing the risk aversion in the beginning of the training, and the CEM reduces this side effect (as mentioned in the abstract, discussed in the the soft-risk paragraph of Section 3, and demonstrated in Section 5.1 and Figure 3). **We now extended the discussion in Section 3 to make this clearer**. It is not clear that this is a case of bias-variance tradeoff, since the bias of the soft-risk is a tool and not a compromise (see (4) below).\n3. 
**Function approximation errors**: Even in complicated environments, the rewards may still be sparse or discrete, hence blindness to success still applies. Consider Montezuma’s Revenge for example: even if the agent can reach rewarding states, without the soft-risk the optimizer would simply discard them. In that sense, the environment complexity only increases the importance of sample-efficiency and of acknowledging beneficial strategies using soft-risk.\n4. **Soft-risk causes bias**: The modified alpha intentionally modifies the objective, and thus indeed creates a bias. This bias is not arbitrary: it is designed to bypass the potential loss plateau (Section 4.1) and guide the policy towards the more successful strategies. Since it bypasses a problem in the loss-landscape itself, its “error” wrt the true gradient is in fact a desired deviation. For this reason, as mentioned in the beginning of Section 4.2, Proposition 1 refers to the last phase of the training (which uses the \"true\" alpha). As discussed above, before that phase, the CEM has another critical role in preservation of risk aversion. **We now elaborated on this in Section 4.2**.",
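The QR-DQN modification described above (replacing the mean with CVaR when aggregating over quantiles) can be sketched as follows; shapes and names are illustrative assumptions, not the stable-baselines3 code itself.

```python
import torch

def cvar_over_quantiles(quantile_values: torch.Tensor, alpha: float) -> torch.Tensor:
    """Aggregate per-action quantile estimates by the mean of the lowest
    alpha fraction (CVaR) instead of the overall mean."""
    # quantile_values: (batch, num_actions, num_quantiles)
    sorted_q, _ = torch.sort(quantile_values, dim=-1)
    k = max(1, int(alpha * sorted_q.shape[-1]))
    return sorted_q[..., :k].mean(dim=-1)  # (batch, num_actions)
```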
" - This paper studies the problem of optimizing CVaR-alpha of returns using the policy gradient method. \n- The paper identifies and tackles two deficits of existing PG approaches to CVaR optimization:\n1. the \"blindness to success\" phenomenon, which clips returns above the alpha-quantile of the return distribution and causes the corresponding gradients to be uninformative w.r.t. high-return scenarios -- the approach suggested by the authors is to start with a higher-alpha (e.g. 1) to allow PG to learn the high return scenarios, then gradually decrease the risk tolerance to the desired level as training progresses\n2. sample efficiency, which is addressed by a cross-entropy method that learns to weigh low-return experience more than high-return ones during training\n- Finally, the paper provides theoretical and empirical results on three domains that argue in favour of these two strategies Strengths:\n- the problems described pertaining to the optimization of CVaR seem important to address\n- the application of CE and its corr. Proposition 1 is novel to my understanding, and Theorem 1 is an important and interesting result that has standalone value for the general research area\n- the Algorithm choices are clearly motivated by and well connected to the theory (Th. 1, Pr. 1, Lem. 2)\n\nWeaknesses:\n- a number of important algorithms discussed in the related work section are not compared against in the empirical evaluation: specifically, the paper by Keramati et al., 2020 appears to be similar in terms of the problem being tackled as well as their methodology\n- while I think the CE approach is interesting and appreciate the authors' theoretical insight and strong motivation for tackling the problem, my biggest concern is that it is largely unclear to me whether DRL and related approaches are already solving (or could readily solve) the same problem, and the distinction w.r.t. current work is a bit blurred (see question 1 below as well). I am willing to revise my score if additional experiments/discussion in the paper disprove this claim. \n- the experiments are fairly low dimensional and it is difficult to determine if the effects of functional approximation errors could indirectly solve one of the problems this paper is tackling, on more challenging domains.\n\nOrganization:\n- I think it would be helpful to the reader to provide a stylized example demonstrating empirically how CVaR optimization naively leads to the problems claimed in the intro. this might make it clear to the reader that the benefits seen in figure 1 do indeed arise from tackling the problems as claimed. \n- I would also suggest to consider moving the discussion of blindness of success before the algorithm presentation, and use it as principled explanation for why a larger alpha will be necessary. 1. clearly, the authors assert some additional assumptions about the structure of the problem to allow the CE method to gravitate towards the high risk areas. in contrast, distributional RL does not make such assumptions, and it can reduce the model uncertainty through bootstraps (unlike PG which inherently has high variance). how does the proposed cross entropy approach differ from learning a distribution over return? what is the advantage and the drawback of making such assumptions compared to DRL? \n2. the cross entropy and \"alpha-annealing\" approaches seems to be largely orthogonal to one another. is there a stronger connection between them w.r.t. bias-variance trade-offs? \n3. 
in relation to the weakness point made above, does the existence of function approximation errors limit the effectiveness of the alpha reduction method?\n4. In sect. 4.2., variance reduction is connected to sample efficiency by showing that reducing variance reduces one term in the general error bound [Xu et al., 2020] . however, viewed in the same lens, doesn't annealing alpha play a contradictory role by increasing the bias of the CVaR estimate J_\\alpha, and hence increasing the first term in that bound? - limitations are briefly discussed in the conclusion of the paper, but it would be nice to discuss them in more details\n- while this work is theoretical/abstract in nature, there could be some societal impacts in areas where RL is starting to become more practical (e.g. financial trading, health-care) - the implications of this with regards to the modeling assumptions (e.g. contextual MDP) used in this work should probably be mentioned",
" The authors provide different contributions, all related to the CVaR policy gradient method:\n1) They analyze the phoenomenon named \"blindness to success\", which affects the CVaR gradient optimization.\n2) They propose soft risk scheduling as a heuristic way of circumventing the problem.\n3) They extend the CE method for their particular setting, obtaining the CeSor algorithm, which consists in applying the two enhancements on top of the original GCVaR algorithm.\n4) They analyse the theoretical advantages in employing the CE method in the exact case, showing that it allows to reduce the policy gradient variance, thus, to reduce the sample complexity of a variance reduced PG algorithm.\n5) They compare the performance of the proposed method with the original version on some simple domains, moreover, they provide an ablation study in order to analyse the contribution of the different components of the algorithm.\n # Strenghts\nThe has the following points of strength:\n- The empirical analysis shows the advantages of employing both the proposed enhancements, and an ablation study clarifies that applying just one of them alone is not sufficient to reach the optimal policy.\n- The problem of blindness to success is analysed in a formal and exhaustive way.\n- The theoretical contribution about CEM is sound and it allows to clearly highlight the advantages of employing this sampling strategy.\n\n# Weakesses\nThe article presents the following weaknesses:\n- The soft risk approach, while intuitive and justified from an empirical viewpoint is not analyzed from a theoretical perspective, thus, it can be considered just as an heuristic to avoid the problem.\n- The theoretical results provided for the CE method hold only in the exact case, i.e., when the quantile extimation has no error and the CE method allows to match exactly the desired distribution. \n- The role of the hyper-parameter $\\beta$ is to guarantee a minimun number of samples in the CE update. However, this can introduce a bias in the distribution found by CE. The sensitivity of the approach w.r.t. to this hyper-parameter is not discussed.\n\n## Minor\n- Arguably, from the point of view of exposition, the article may benefit from a gradual introduction of the two main innovations propose the application of CE method and the risk-scheduling, instead of directly introducing the whole algorithm.\n- Section 5.1 comment the performance of CeR, however, they are only showed in the appendix. \n - Can the authors please discuss how sensitive the approach is to the hyper-parameter $\\beta$?\n- In particular, is the choice of $\\beta$ important to guarantee convergence to the optimal solution? \n- If the approach is sensitive to this hyper-parameter, how should it be chosen? As highlighted by the authors too, the main limitation is constituted by the necessity of having the possibility of conditioning sampling w.r.t. contexts. This requires to have a high-level control over the environments, an hypothes which can hold true in simulation, but which is usually more difficult to enforce in real-world environments.\n\nAnother limitation is constituted by the assumptions of Proposition 1, which are almost never satisfied in practical applications of the algorithm, as highlighted by the authors too.",
" The paper studies policy gradient under the conditional value at risk (CVaR) objective. The proposed method contains two important components: (1) using CEM for the better sampling of data so as to obtain a better estimate of the value at risk; and (2) using soft-risk scheduling on alpha (the CVaR parameter) to address the blindness to success problem. \n Strength: \n\nThe paper proposed interesting heuristics to address the problem that CVaR is hard to optimize (especially in the RL setting). Algorithm 1 seems to be generic to any CVaR policy gradient method, which is ideal. The experiments have demonstrated the efficacy of the proposed methods.\n\n\nWeakness: \n\nThe problem of estimating the VaR and the problem of zero gradients for optimizing CVaR is not new in the risk-sensitive learning field. For example: In \"Adaptive Sampling for Stochastic Risk-Averse Learning (Curi et al.)\", the authors have considered using adaptive sampling of the data to overcome these issues (both the estimation of VaR and the zero-gradient problem). In \"On the Convergence and Optimality of Policy Gradient for Markov Coherent Risk (Huang et al.)\", the authors have showcased the zero-gradient problem of optimizing CVaR in a bandit setup (where the Markov CVaR is equivalent to CVaR). The proposed heuristics are interesting but it is unclear why certain design choices (e.g., the particular schedule of alpha') are made. - Figure 2 is a nice illustration. But could the authors provide how it is generated in detail? For example, how are R and C obtained for each method?\n- Is there any additional reasoning for choosing alpha' the way presented in L13?\n- This is just a small suggestion: For theorem 1, it might be more readable for the users to present it directly for B instead of negating the B. How should one interpret m_0? Does it depend on \\alpha itself?\n- For the experiments:\n - Why is GCVaR chosen as the PG method? \n - How big of a role does clipping play? Have the authors compared the performance of GCVaR without clipping with different methods?\n - In the first experiment, the authors have done an ablation study and compared CESoR with SoR and CEM separately. What about the second and third experiments? How does CESoR compare with SoR and CEM?\n Yes.",
" The authors study risk-averse reinforcement learning with the specific problem of how to keep risk-averse while using the collected samples efficiently. To this end, a soft risk-level scheduling mechanism is proposed where all samples are used at the initial stage, and it gradually shifts to high-risk (lower return quantile) samples. Additionally, a cross entropy method (CEM) is used to sample risky trajectories among soft risk samples. The proposed method is evaluated on three benchmarks to show its effectiveness. **Strengths**\n\nThe risk-averse property plays an essential role in reinforcement learning (RL), especially for real-world problems. Therefore, the research topic of this paper is interesting and important for the RL community. The proposed mechanisms for improving risk sample usage are effective on the evaluated benchmarks. The authors also give a theoretical analysis of the ignorance of high-return samples, which leads to a local optimum.\n\n**Weaknesses**\n\nThe writing need to be polished carefully. Although the paper is well-motivated, the methodology is hard to follow. Specifically, it is hard to understand how soft risk-level scheduling works together with CEM since the optimization in Algorithm 1 line 11:12 is independent of policy learning line 13:15. Additionally, a lot of details of the methodology, e.g. CEM are introduced in Problem Formulation sections, which further prevent the understanding. For the novelty, it seems unclear. The main contributions seem to be a heuristic risk quantile hyper-parameter scheduling and a sampling-based previous CEM.\n\n\nMinor:\n\n1. Symbol 1_{R(tau)<q_alpha} is undefined in Eq. 3. It seems to be the indicator function.\n2. Grammar error: Are —> is in line 133.\n 1. Some risk may caused by an insufficient exploration of the environment. What is the performance if using more advanced RL method, e.g. SAC instead of policy gradient?\n2. How to understand the relationship of CEM and soft risk-level scheduling in Algorithm 1? It seems they are independent.\n\nUpdate after the rebuttal.\nIt is clear now about the contributions and the relationship between CEM and soft risk-level scheduling. No."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4,
2
] | [
"IZxh4IVxHhv",
"2nZhD-HV9sw",
"vwjpwaJC9WY",
"nI6ZiF2DCST",
"zX3nR_qchmP",
"CFwk8w96Ipj",
"KB8iNx48_xu",
"AAMnsS1Jfcm",
"nips_2022_LdAxczs3m0",
"nips_2022_LdAxczs3m0",
"nips_2022_LdAxczs3m0",
"nips_2022_LdAxczs3m0"
] |
nips_2022_qtZac7A3-F | Enhance the Visual Representation via Discrete Adversarial Training | Adversarial Training (AT), which is commonly accepted as one of the most effective approaches for defending against adversarial examples, can largely harm standard performance and thus has limited usefulness in industrial-scale production and applications. Surprisingly, this phenomenon is the complete opposite in Natural Language Processing (NLP) tasks, where AT can even benefit generalization. We notice that the merit of AT in NLP tasks could derive from the discrete and symbolic input space. To borrow this advantage from NLP-style AT, we propose Discrete Adversarial Training (DAT). DAT leverages VQGAN to reformulate image data as discrete, text-like inputs, i.e. visual words. It then minimizes the maximal risk on such discrete images with symbolic adversarial perturbations. We further give an explanation from the perspective of distribution to demonstrate the effectiveness of DAT. As a plug-and-play technique for enhancing the visual representation, DAT achieves significant improvement on multiple tasks including image classification, object detection and self-supervised learning. Notably, the model pre-trained with Masked Auto-Encoding (MAE) and fine-tuned by our DAT without extra data achieves 31.40 mCE on ImageNet-C and 32.77% top-1 accuracy on Stylized-ImageNet, setting a new state-of-the-art. The code will be available at https://github.com/alibaba/easyrobust. | Accept | This paper proposes a discrete adversarial training scheme for improving the robustness of vision models. Reviewers find the paper well written, the proposed idea novel/interesting, and the approach empirically effective. This work may also inspire new approaches for improving robustness and generalization together. Therefore, I recommend accepting the paper, while also encouraging the authors to address the remaining issues pointed out by the reviewers.
| train | [
"peS_HFyyRE9",
"e2EpqV79LYG",
"SrtGTL8rOU6",
"CuEabBdxPA",
"MNaDTdG2UXg",
"gXeYEd7rA55",
"bPvciiSlgz5",
"hLc9NFqHrGG",
"JKF_iztQzR7",
"h6XcIEiKUHC",
"r7DYUmhUdGP",
"6iwIrEWuzy4O",
"Juif_1WK9iNS",
"V996WVz7bpR",
"oxv-Q87Mmu",
"cOA0WnGQMdn",
"dC1nWx-ayqR",
"reuDqe3USKT",
"CUe1oHD9WwA",
"mo8NOey8nMw",
"j5t2FUQUsu",
"S5PpiPlQ1pW",
"RIV7qyHwdie",
"s5Xzr0RBQ3s",
"GLwQt-bz9eH",
"79QncuKlygx",
"iilWEQjXtgO",
"j5YCCP4Yckz",
"O5Wm2NX7rJa",
"JsnM536AE3m"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer wZbH,\n\nWe are appreciate for getting an affirmation from you about our response. Many thanks again for your precious review time and valuable comments to help us improve the paper. \n\nBest, \n\nAuthors of Paper 2664",
" Authors response has convincingly addressed my concerns and I am willing to increase the score.",
" We would appreciate your above suggestions and comments. \n\nPlease let us know whether we have addressed your concerns? \n\nBest regards,\n\nAuthors of Paper 2664",
" Dear reviewer haXr,\n\nWe are happy to get affirmation from you about the experiment results. We will add these experiments in our revision. Many thanks again for your precious review time and valuable comments to help us improve the paper. \n\nBest, \n\nAuthors of Paper 2664",
" agreed - thank you for the evaluation of adversarial robustness.",
" Dear reviewer haXr,\n\nWe are encouraged and thankful for your rating improving. However, the rating is still negative. We think that we've addressed your concerns through detailed experiments including generalization of adversarial trained models and fair comparison between DAT and PGD adversarial Training. The experimental results also suggest that our method has comparable adversarial robustness. \n\nIf there are remaining unresolved concerns, we are happy to continue the discussion. \n\nThanks!!\n\nAuthors of Paper 2664",
" okay, thank you for the clarification",
" We do not use any hyperparameters search algorithms, since ImageNet training is very costly. In our implementation, we just use the default parameters for imagenet training[22], the compared traditional AT baseline is using the same hyperparameters too. For clarity, we list the training hyperparameters here:\n\n**basic hyperparameters:**\n\nepochs: 90\n\nbatch_size: 128\n\noptimizer: sgd\n\nlr: 0.1\n\nlr_schedule: step\n\nlr_decay_epochs: 30\n\nlr_decay_rate: 0.1\n\nweight_decay: 1e-4\n\n**attacker hyperparameters (DAT without discrete procedures is the same with L2 based PGD):**\n\nperturbation_type: L2\n\nepsilon: 0.1 ( \\alpha: 0.1 in DAT)\n\nattack_step: 1 \n\nattack_step_size: 0.1\n\n**used data augmentation:**\n\ndata_augmentation: https://github.com/MadryLab/robustness/blob/a9541241defd9972e9334bfcdb804f6aefe24dc7/robustness/data_augmentation.py#L40\n\nThe training is conducted on single machine with 8 GPU cards. We will also open source the training code, for the reproducibility of our work. Hope the above will solve your concerns. Thanks for your time and comments.\n",
" Thank you for the evaluation. In terms of fair comparison, I meant that the hyperparameters need to be finetuned for both settings (assuming that they are finetuned for DAT already). In this respect, the proposed evaluation is NOT fair. I understand that a fair comparison is likely not possible given the short time of the discussion phase . \nHow are the hyperparameters selected for DAT (grid search, BO, trial&error)? ",
" **(3) The fair comparison of DAT and traditional AT**\n\nThanks for your suggestion about the fair comparison. Here, we provide a totally fair comparison between DAT and traditional Adversarial Training by keeping consistent on the hyperparameters of AT and DAT. Specially, we delete the discretization step, such that our DAT can degenerate into the traditional PGD-based AT. Since our DAT adopts one attack step and directly uses normalized gradients multiplied by $\\alpha=0.1$ to craft AEs (without sign operation), it actually can be regarded as L2 adversarial training with attack_step=1 and $\\epsilon=0.1$. So, we can use these hyperparameters to train a L2-robust model, and compare it fairly with our DAT models. The results are shown below: \n| Models | $\\epsilon$ | Perturbation Type | Attack Steps | ImageNet-Val | L2 AutoAttack $\\epsilon=0.1$ | L2 AutoAttack $\\epsilon=0.5$ | L2 AutoAttack $\\epsilon=3$ | A | C(mCE↓) | V2 | R | Sketch | Stylized |\n| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |\n| Traditional PGD-based AT | 0.1 | L2 | 1 | 74.48 | 60.58 | 9.71 | 0 | 1.65 | 80.48 | 61.72 | 36.30 | 22.90 | 7.37 |\n| DAT (Ours) | unrestricted | Discrete | 1 | 76.52 | 60.74 | 9.6 | 0 | 4.38 | 74.16 | 65.02 | 41.90 | 27.27 | 10.8 |\n\nAll the above models are using resnet50 as the backbone. From the results, we can see a clear quantification of the benefit of our proposed discrete AT scheme compared with traditional AT. \n\n**On clean performance:** traditional AT plays negative impact on ImageNet-val clean performance. However DAT can reduce the negative impact and achieve 2 points higher accuracy on ImageNet-val. It even surpasses the normal training on clean performance of imagenet-val dataset.\n\n**On adversarial robustness:** we should admit that DAT indeed cannot yield significantly better adversarial robustness compared with traditional AT. But by presenting the L2 AutoAttack evaluation with different $\\epsilon$, we show DAT at least can achieve a comparable adversarial robustness with traditional AT. We also add L2 AutoAttack evaluation with $\\epsilon=3$, in this condition, both traditional AT and our DAT drop to 0% accuracy. It is expected because only $\\epsilon=0.1$ is used in training. \n\n**On generalization:** the result shows DAT achieves significant improvement on generalization compared with traditional AT. It is the main contribution of this work, that is DAT can enhance the quality of learned representation. We also give some insights to explain why discrete representation can help generalization in Q1 of Reviewer oF1f. You can also refer to it for more details.\n\nFor experiments with larger perturbations and more attack steps, sorry for that we do not compare them here because the training time is too long. We will add these comparisons in the final revision.\n \nLastly, we hope we have clarified some of our procedures in the adversarial training and allayed some of your concerns. Moreover, we hope that the reviewer appreciates the value of discrete adversarial training in ideas and methods for boosting the performance on representation learning. Thank you for the comments. 
Please don't hesitate to let us know if you have any remaining questions or concerns.\n\n------\n\n**Reference**\n\n[h] https://github.com/MadryLab/cifar10_challenge/blob/f15682d9f1e26eb47a2d3b371ef8b6c7abcf6276/config.json#L27\n\n[i] https://github.com/yaodongyu/TRADES/blob/6e8e11b7c281371c2f027ffadfbaea80361f09de/train_trades_cifar10.py#L32\n\n[j] https://github.com/microsoft/robust-models-transfer\n\n[k] Zhang, Hongyang, et al. \"Theoretically principled trade-off between robustness and accuracy.\" International conference on machine learning. PMLR, 2019.\n\n[l] Kireev, Klim, Maksym Andriushchenko, and Nicolas Flammarion. \"On the effectiveness of adversarial training against common corruptions.\" UAI (2022).\n\n[m] Shafahi, Ali, et al. \"Adversarial training for free!.\" Advances in Neural Information Processing Systems 32 (2019).",
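The L2 AutoAttack numbers in the table above can be reproduced with the public auto-attack package (https://github.com/fra31/auto-attack). A usage sketch, assuming a pretrained classifier `model` and a batch `x_test, y_test` of validation images in [0, 1]; variable names are hypothetical:

```python
import torch
from autoattack import AutoAttack  # pip install git+https://github.com/fra31/auto-attack

model.eval()
adversary = AutoAttack(model, norm='L2', eps=0.1, version='standard')
# The standard version runs APGD-CE, APGD-T, FAB-T and Square,
# and prints the resulting robust accuracy.
x_adv = adversary.run_standard_evaluation(x_test, y_test, bs=128)
```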
" Thank you for your time and helpful suggestions. We answer your questions as follows.\n\n**(1) About the training cost**\n\nWe agree with you that training time is indeed an important factor in model performance. As far as we know, in traditional PGD-based AT[5] and other variants like TRADES[k], the used PGD attacker is not only one step. From the implementation in GitHub [h, i], AT always use PGD with 10 attack steps for training. Both of them require multiple gradient backward, while our DAT only need once. We show the training budget below:\n| Training Strategies | Attack steps used for training | Training budget | \n| ---- | ---- | ---- |\n| Normal | 0 | 1× |\n| AdvProp [26] | 1 | 3× |\n| Adversarial Training [5] | 10 | 11× | \n| DAT(Ours) | 1 | 3.5× | \n\nBased on the above table, it needs to be clarified that our training cost is much less than the traditional pgd adversarial training. Traditional AT with 10 attack steps has nearly 3× training costs than ours.\n\n**(2) The study of the generalization on traditional AT models**\n\nThanks for your suggestion about generalization on traditional AT models. We first explore how a traditional AT scheme effects on the model generalization. We collect some open-sourced robust models[j] using resnet50 as backbone on ImageNet and test them on OOD datasets. The training cost of each robust model is also counted. The results are shown in below table:\n| AT models | Training Cost | ImageNet-Val | A | C(mCE↓) | V2 | R | Sketch | Stylized |\n| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |\n| Normal training, $\\epsilon=0$ | 1× |76.13 | 0.0 | 76.70 | 63.20 | 36.17 | 24.09 | 7.38 |\n| L2-Robust, $\\epsilon=0.01$ [g] | 4× | 75.68 | 2.11 | 75.33 | 64.00 | 35.98 | 23.55 | 7.47 |\n| L2-Robust, $\\epsilon=0.03$ [g] | 4× | 75.76 | 2.17 | 75.36 | 63.66 | 36.18 | 23.98 | 8.18 |\n| L2-Robust, $\\epsilon=0.05$ [g] | 4× | 75.59 | 2.19 | 75.65 | 63.37 | 36.48 | 23.90 | 8.51 |\n| L2-Robust, $\\epsilon=0.1$ [g] | 4× | 74.78 | 2.13 | 75.42 | 62.64 | 36.90 | 23.85 | 9.18 |\n| L2-Robust, $\\epsilon=0.25$ [g] | 4× | 74.14 | 2.28 | 75.79 | 62.20 | 37.57 | 24.33 | 10.07 |\n| L2-Robust, $\\epsilon=0.5$ [g] | 4× | 73.16 | 2.19 | 75.91 | 60.48 | 38.03 | 23.49 | 10.99 |\n| L2-Robust, $\\epsilon=1.0$ [g] | 4× | 70.43 | 2.19 | 78.36 | 57.36 | 38.21 | 22.63 | 11.07 |\n| L2-Robust, $\\epsilon=3.0$ [g] | 4× | 62.83 | 1.97 | 83.84 | 49.45 | 36.48 | 20.40 | 10.48 |\n| L2-Robust, $\\epsilon=5.0$ [g] | 4× | 56.13 | 1.71 | 88.98 | 43.04 | 32.75 | 16.82 | 9.13 |\n| Linf-Robust, $\\epsilon=0.5/255$ [g] | 4× | 73.73 | 2.35 | 76.86 | 61.88 | 38.54 | 23.79 | 10.94 |\n| Linf-Robust, $\\epsilon=1.0/255$ [g] | 4× | 72.05 | 2.53 | 78.34 | 59.60 | 40.13 | 23.70 | 12.10 |\n| Linf-Robust, $\\epsilon=2.0/255$ [g] | 4× | 69.10 | 2.52 | 80.09 | 56.64 | 38.65 | 22.14 | **12.36** |\n| Linf-Robust, $\\epsilon=4.0/255$ [g] | 4× | 63.86 | 2.25 | 85.14 | 51.39 | 38.25 | 20.94 | 11.70 |\n| Linf-Robust, $\\epsilon=8.0/255$ [g] | 4× | 54.53 | 2.12 | 91.59 | 42.16 | 34.40 | 18.10 | 9.58 |\n| FreeAT, $\\epsilon=4.0/255$ [m] | **1×** | 59.96 | 1.62 | 90.26 | 47.39 | 35.72 | 17.46 | 10.34 |\n| **DAT(Ours)** | 3.5× | **76.52** | **4.38** | **74.16** | **65.02** | **41.90** | **27.27** | 10.8 |\n\nFrom the experimental results, we can summarize the following points:\n- Compared to normal training, adversarial training will hurt the clean performance in the imagenet-val dataset even with extreme small perturbations.\n- The results suggest AT with a very small $\\epsilon$ can slightly benefit 
from the generalization, e.g., with L2-Robust, $\\epsilon=0.01$ , ImageNet-C mCE value from 76.70 dropped to 75.33, lower mCE means better common corruption generalization. But with the \\epsilon becoming larger, AT greatly damages the generalization, e.g. with L2-Robust, $\\epsilon=5.0$, ImageNet-C mCE value increases to 88.98. This finding is also revealed by [l].\n- Compared with traditional AT models, our DAT can improve generalization more significantly. It even surpasses the normal training on clean performance of imagenet-val dataset. \n\n",
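The mCE values above follow the ImageNet-C protocol of Hendrycks & Dietterich: per-corruption top-1 errors, summed over the five severity levels, are normalized by AlexNet's corresponding errors and then averaged over the corruptions. A small sketch of that computation, with hypothetical error-rate dictionaries:

```python
import numpy as np

def mean_corruption_error(err, err_alexnet):
    """ImageNet-C mCE: `err` maps corruption name -> (5,) top-1 error rates
    over severities 1-5; `err_alexnet` holds the AlexNet reference errors."""
    ces = [err[c].sum() / err_alexnet[c].sum() for c in err]
    return 100.0 * float(np.mean(ces))  # lower is better
```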
" Thank you for your feedback. We are very happy to see that most of your concerns have been resolved, and many thanks to the reviewer for helping us improve the paper. \n\n**(1) Reply of Weakness 3:** We appreciate the good advice of FID as the photorealism comparison. The results of FID score w.r.t. clean examples vs. discrete AEs and clean examples vs. pixel-level AEs are shown in below table: \n| Settings | FID score |\n| ---- | ---- |\n| original input vs. reconstructed images | 1.14 |\n| original input vs. pixel-level AEs | 65.18 |\n| original input vs. discrete AEs | 14.65 |\n\nFor fairness, we guarantee that all compared AEs have same attack ability. The result suggests our discrete AEs are more photorealistic than traditional pixel-level AEs. It is also consistent with our visual presentation result in Figure 8. We will also put the FID results in revision paper.\n\n**(2) Reply of Q1/Q2/Weakness 1 (about DG):** Thanks for your valuable suggestion, we will add these PACS experiments into the final version. We also will do our best to experiment with other datasets in DG for deeper exploration.\n\n**(3) Reply of Q3 and Q4:** Thanks for correcting our inappropriate expression. We have checked, and altered these inappropriate expressions in revision. \n\n------\n\nMany thanks for your time! We hope that our response has addressed your remaining question.\n",
" Thank you for your reply.\n\n- If I understand the new numbers correctly, the results proposed in the new table show that indeed the proposed model is robust against the attacks it is trained on. This is a valuable evaluation. However, this result is not surprising. The other evaluation, the number for all these models on RobustBench, would be more interesting in my opinion, because it would further show the generalization of the proposed model beyond the training data.\n- If I understand the method correctly, it is at least as expensive as pgd adversarial training (because pgd attack is the first step). It is even more expensive because of the forward pass through the AE (which might be fast). Therefore, in my understanding, the resulting model should also be at least as robust as a model trained using pdg AT. Of course, the trade-off can be different, say, higher clean accuracy, slightly lower robust accuracy. I would still like to see the numbers to get an impression!\n- [g] implies that adversarially robust models can generalize better. Therefore, the finding proposed in this work, that an adversarial training scheme can generalize better to e.g. common corruptions, is to be expected. If you can not show robust accuracy on RobustBench, I would like to see a quantification of the benefit of this more complicated training scheme with extra hyperparameters over (finetuned!) traditional (pgd) adversarial training to see a fair comparison.\n",
" I appreciate the detailed response. Here are my thoughts:\n\n- **Weakness 3:** Fig 8 is useful, thanks for adding it. The points raised about low-frequency vs high-frequency noise and \"invalid colors\" is also useful. However I don't see a photorealism comparison still. One way to achieve this would be to compute FID score between distribution of real images and distribution of DAT attacked images ; and compare it with FID of real vs pixel-level. See this library for a quick implementation of FID https://github.com/mseitzer/pytorch-fid\n\n- **Q1/Q2/Weakness 1 (about DG):** Thank you for the PACS experiment. The table for ME-ADA vs DAT is useful. In the final version of the paper, I would encourage authors to also add these results (and if time permits, also replicate it on other datasets, for eg. the benchmark of Table 1 in ME-ADA paper). This will strengthen the evidence for the efficacy of DAT, and make a larger impact on readers interested in robustness, DG, AT, etc.\n\n- **Q3 and Q4** - Thanks for the answers and supporting experiments. For Q4 you may rephrase the statement to something like \"it is empirically observed that $\\hat{x} \\sim x $ ... \" rather than saying \"it is established\".\n\n- **Q5/references/typos/limitations** thanks for the update.\n\nOverall comment:\n*My original rating was Weak Accept (6). Based on your response, I am inclined to increase my rating.*\n\n\n",
" Thank you for your comments of our reply. We are very happy that part of the concerns has been resolved, and many thanks to the reviewers for helping us improve the paper. \n\nIn this reply we will discuss the concerns about adversarial robustness.\n\n**(1)** Different from that PGD attack is usually used in traditional AT [c,d,e], DAT generates discrete AEs which exceed Lp bound for training. We have discussed the difference between discrete AEs and PGD AEs in **Weakness3 of reviewer C7HU**. Under this AT scheme, DAT actually builds the adversarial robustness against discrete adversarial attacks beyond Lp bound. We conduct an experiment to validate our claims. We use unrestricted AEs in discrete space to attack some robust models on ImageNet from https://github.com/microsoft/robust-models-transfer. The results are shown below: \n| Models | Clean Accuracy | Robust Accuracy |\n| ---- | ---- | ---- |\n| Vanilla R50 | 76.13% | 13.56% | \n| Linf_robust R50 with eps=4/255 [g] | 68.46% | 60.12% |\n| L2_robust R50 with eps=3.0 [g] | 62.83% | 53.20% |\n| DAT (Ours) | 76.52% | 72.89% |\n\nDAT achieves 72.89% robust accuracy which is 12.77% higher than 4/255-linf-robust model. **Therefore, even if linf-robust R50 gets top@1 rank on RobustBench, in this experiment we suggest linf-robust models are only shown certain robustness under specific settings (Linf bound adversarial attacks).** It is less robust than our DAT on unrestricted attacks which appear more commonly in the real world. \nOverall, we think a suitable adversarial robustness metric is decided by the configuration of AEs used for training. RobustBench adopts 4/255 AutoAttack for evaluation since all the compared methods bound the training AEs into (-4/255, 4/255). But our DAT is trained beyond lp norm bounding, so evaluating DAT by 4/255 AutoAttack is a biased comparison (same as evaluating linf robust models by unrestricted attacks).\n\n**(2)** Our DAT does have expensive adversarial training scheme, however adopting AT scheme is not always meaning to SoTA adversarial robustness. There are multiple previous significant works [26, 28, f] adopt expensive adversarial training scheme, but they do not train a SoTA linf-robust model in RobustBench, instead they use AT scheme to improve the generalization. Their results are also of great significance in the field of image classification. Similarly, we should denote that the goal of this work is also improving generalization, but not achieving Lp adversarial robustness. \n\nWe hope that our response has addressed all of your concerns. Thank you for your time and feedback! Please don't hesitate to let us know if you have any remaining questions or concerns.\n\n------\n\n**Reference**\n\n[f] Improving Vision Transformers by Revisiting High-frequency Components, Bai, Jiawang and Yuan, Li and Xia, Shu-Tao and Yan, Shuicheng and Li, Zhifeng and Liu, Wei; In European Conference on Computer Vision, 2022. \n\n[g] Salman, Hadi, et al. \"Do adversarially robust imagenet models transfer better?.\" Advances in Neural Information Processing Systems 33 (2020): 3533-3545.\n\n",
" Thank you for the reply, the clarification and the additional results on straight though gradients. To be honest, the results indicate that it would actually be beneficial to compute the gradients and that the crude approximation of the opimization without gradients bares disadantages. Still, these results are of course valuable, especially when asusming that the computation might be affordable in specific scenarios (e.g. very low resolution data). \n\nWhile this addresses some of my concerns, I am still not convined by the argumentation on the significance of results. Also, I am not convinced by the argumentation w.r.t. adversarial robustness. The proposed method is a particularly expensive adversarial training scheme - therefore it should also be possible to train an adversarially robust model, i.e. a model that shows at least some robustness in the standard setting employed in RobustBench.",
" Dear reviewer haXr,\n\nWe did not receive any feedback on our response yet.\n\nPlease can you let us know whether you've read our rebuttal and whether we addressed your concerns?\n\nIf we did not, please let us know what we failed to address appropriately.\n\nThanks!!\n\nAuthors of Paper 2664",
" We would like to thank all of the reviewers again for helping us improving our paper. We uploaded a revised version of our paper and marked the major modifications in blue for visibility. In short, \n\n- We have carefully checked the typos and improved the writing in the revised manuscript.\n- We have cited and discussed the papers the reviewers provided.\n- We discussed the VILLA and AGAT in the Section.2 and declared the difference.\n- We add discussion of limitation in Section.5 \n- We add a discussion of the necessity of bounding the per-pixel values of \\delta in Appendix B.6\n- We add a comparison of discrete perturbations with traditional pixel-space perturbations in Appendix D.5\n- We add visualization of the straight-through gradients and directly backward gradients in Appendix D.6\n\nThank you all again for your precious and insightful suggestions. Please let us know if you have additional questions or ideas for improvement.\n",
" **Q2:** citation and justification of straight-through gradient estimator\n\n**Reply:** There are two places using straight-through gradient estimator. The first is in Line 159, it is widely used in VQVAE [36]. VQVAE adopts the technique proposed in [a] to learn the non-differentiable vector quantization module. [a] shows straight-through method has the right sign to “back-propagate” the non-smooth neurons. Another work [b] provided the theoretical justification for how the straight-throught estimator minimizing the training loss. Till now, straight-through gradient estimator is good performed in VQGAN optimization. So we followed this method. The citation of two papers ([a], [b]) has also been added in latest revision. \n\nThe second straight-through gradient estimator from $Q(x)$ to $x$ is in Line 169. It is necessary for DAT, because the GPU memory cost will become 4× and training time will get 8× longer if we do not make this assumption. It is indeed a strict assumption. The reviewer may concern such hypothesis will fail in practice. Therefore, we show some empirically results to explore how this hypothesis behaves in not idea setting: \n\n**(1)** The similarity of the backward gradients on $Q(x)$ and $x$. We add the visualization of backward gradients on $Q(x)$ and $x$ in Appendix D.6. It suggests the high visual similarity of the gradients on $Q(x)$ and $x$. \n\n**(2)** The attack ability of discrete adversarial examples under this assumption. The reviewer may concern, in practice (not in ideal setting), if the straight-through method can still accurately estimate the direction of the adversarial gradient. To relieve this concern, we present some results in Q3, which suggest the discrete adversarial examples crafted by straight-through gradient is still with strong attack ability. It demonstrates the gradient of $Q(x)$ is effective for approximating the adversarial gradients on $x$. \n\n------\n\n**Q3:** check that these adversarial inputs generated by VQGAN still mislead the classifier F\n\n**Reply:** We compare the attack strength of discrete adversarial examples bellow: \n| Type of AEs | $\\epsilon$ | Different VQGAN | Attack Suc. Rate |\n| ---- | ---- | ---- | ---- |\n| FGSM [45] | 1/255 | - | 87.81% |\n| Discrete AE w/ Backward Gradients [36] | - | VQGAN with FID=1.14 | 84.62% |\n| Discrete AE w/ Straight-Through Gradients | - | VQGAN with FID=1.14 | 82.44% |\n| Discrete AE w/ Backward Gradients [36] | - | VQGAN with FID=4.98 | 83.56% |\n| Discrete AE w/ Straight-Through Gradients | - | VQGAN with FID=4.98 | 80.17% |\n| Discrete AE w/ Backward Gradients [36] | - | VQGAN with FID=7.94 | 82.90% |\n| Discrete AE w/ Straight-Through Gradients | - | VQGAN with FID=7.94 | 79.27% |\n\nWe adopt pretrained resnet50 as target model. The attack success rate (87.81%) of FGSM [45] in pixel space is shown for reference. The better VQGAN model is used, the higher Attack Success Rate (ASR) can be achieved. Compared with FGSM, the AEs in discrete space with backward gradients [36] get slight drop on ASR caused by the information compression in discrete spaces. The straight-through method used in our DAT can produce AEs with 82.44%, 80.17% and 79.27% ASR for VQGAN with FID=1.14, 4.98, 7.94 respectively. It is few points lower than using the directly backward gradients, but can still keep relatively high attack strength, making nearly 80% examples misclassified. \n\n------\n\nWe hope that our response has addressed all of your concerns. Thank you for your time and feedback on our submission! 
Please don't hesitate to let us know if you have any remaining questions or concerns.\n\n**Reference:** \n\n[a] “Estimating or Propagating Gradients Through Stochastic Neurons for Conditional Computation”, Yoshua Bengio, Nicholas Leonard and Aaron Courville\n\n[b] “Understanding Straight-Through Estimator in Training Activation Quantized Neural Nets”, Penghang Yin, Jiancheng Lyu, Shuai Zhang, Stanley Osher, Yingyong Qi, Jack Xin\n",
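The straight-through trick discussed in Q2 is usually implemented in PyTorch with a one-line detach: the forward pass uses the quantized vectors, while the backward pass copies the gradient from v_q onto v unchanged. A minimal sketch with hypothetical names, simplifying the VQGAN codebook lookup to a nearest-neighbour search:

```python
import torch

def quantize_straight_through(v, codebook):
    """Nearest-codebook quantization with straight-through gradients.

    v: (N, d) continuous encoder outputs; codebook: (K, d) visual words.
    Forward returns the quantized vectors v_q; backward treats the
    quantization as the identity, so d loss / d v_q flows onto v.
    """
    d2 = torch.cdist(v, codebook)       # (N, K) pairwise distances
    v_q = codebook[d2.argmin(dim=1)]    # hard assignment (non-differentiable)
    return v + (v_q - v).detach()       # straight-through estimator
```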
" We thank the reviewer for the time and constructive comments. \n\n**Q1:** “We delete the constraint term since there is no need to bound the per-pixel values of $\\delta$” why this constraint is not needed?\n\n**Reply:** The reason of “no need to bound the per-pixel values of \\delta” lies in three aspect:\n\n**(1)** DAT bounds the final perturbation by first effecting on pixel space and further impact the discrete space. Therefore $\\delta$ is just an intermediate result in the process of computing the final symbolic perturbations. To study the effect of per-pixel bound on $\\delta$, we first use $\\alpha=0.1$ to generate 1000 samples and count the proportion of the $\\delta$ in different perturbation intervals:\n| Intervals | Proportion of the $\\delta$ in the interval |\n| ---- | ---- |\n| (-1/255, 1/255) | 28.3% |\n| (-2/255, 2/255) | 95.9% |\n| (-4/255, 4/255) | 100% |\n\nIt shows 95.9% of the $\\delta$ is in (-2/255, 2/255), and all $\\delta$ are in (-4/255, 4/255). Using magnitude $\\alpha$ has almostly regulated the $\\delta$ into (-4/255, 4/255). So adding the other per-pixel bound on $\\delta$ seems unnecessary. \n\n**(2)** It is unclear if adding per-pixel bound on \\delta will impact the performance. To study this problem, we bound the $\\delta$ with different epsilon and re-run the DAT. \n| Linf Bounds on $\\delta$ | ImageNet-Val | FGSM | DamageNet | A | C↓ | V2 | R | Sketch | Stylized |\n| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |\n| no bounds | 76.52 | 30.66 | 14.42 | 4.38 | 74.16 | 65.02 | 41.90 | 27.27 | 10.8 |\n| (-4/255, 4/255) | 76.47 | 31.43 | 14.25 | 4.31 | 74.12 | 65.07 | 41.68 | 26.99 | 10.62 |\n| (-2/255, 2/255) | 76.16 | 29.75 | 13.24 | 3.75 | 74.87 | 64.32 | 40.38 | 25.53 |9.31 |\n| (-1/255, 1/255)| 76.10 | 29.41 | 12.00 | 3.53 | 75.53 | 64.11 | 39.05 | 25.04 | 8.69 |\n\nAs shown in above table, DAT achieves best performance when $\\delta$ is not bounded. The worst result is appeared when $\\delta$ is bounded between (-1/255, 1/255). With larger $l_{\\infty}$ bound, the results become better. This experiment is added into Appendix B.6 of the revised paper. \n\n**(3)** As stated in Line 59, a good property of DAT is that it can produce diverse adversarial inputs beyond Lp bound for training. Bounding the per-pixel values of $\\delta$ may potentially reduce the diversity of the discrete adversarial examples for training in our DAT. ",
" **Q2:** unfair comparison to DrVit\n\n**Reply:** The biggest difference with DrViT is that DrViT only discretizes the input for training, while our DAT conducts an adversarial process in discrete space to generate more diverse and harder discrete adversarial examples for training. So DAT regularizes the model learning more robust and generalized representation. \n\nWe then give a completely fair experimental comparison. After checking the official implementation, we find DrViT actually uses VQ-GAN model with $k=1024$, $d=256$ for discretization. Instead of training both VQ-GAN and ViT classification model from scratch, DrViT consists of two stage training: 1) it first pretrains the VQ-GAN encoder and decoder on ImageNet; 2) then it finetunes the discrete embeddings learned in first stage. \nTo provide more fair comparison, we modify our DAT to use the VQ-GAN model with $k=1024$, $d=256$ pretrained on ImageNet. By aligning all settings on dataset and architecture, a completely fair comparison to DrViT is below:\n\n| Models | ImageNet-Val | FGSM | DamageNet | A | C↓ | V2 | R | Sketch | Stylized |\n| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |\n| DrViT | 79.48 | 45.76 | 44.91 | 17.20 | 46.22 | 68.05 | 44.77 | 34.59 | 19.3 |\n| DAT (Ours) | 80.46 | 51.17 | 49.49 | 24.75 | 45.14 | 68.71 | 47.51 | 35.64 | 22.96 |\n\nUnder this fair comparison, our DAT still achieves better performance on both robustness and generalization.\n\n------\n\n**Q3:** why the winner between DrVit and AugReg-Vit keeps changing on different metrics?\n\n**Reply:** We admit that DrViT and AugReg-ViT exactly do inconsistent effect on different robustness metrics in Table 1. After careful check, we confirm all the results are accurate and convincing. Such a phenomenon is expected, as DrViT strengthens the robustness by improving the ability of capturing shape features, and this behaviour is just beneficial for some robustness metrics. For example, learning shape feature shows effectiveness on ImageNet-C, -R, -Sketch, while it is not useful for ImageNet-A, which contains hard nature samples. Other inconsistence on DamageNet is rational, as the gap is marginal, showing both DrViT and AugReg-ViT perform similarly on DamageNet. \n\n------\n\nWe hope that our response has addressed all of your concerns. Thank you for your time and feedback on our submission! Please don't hesitate to let us know if you have any remaining questions or concerns.",
" We thank the reviewer for the time and insightful comments.\n\n**Q1:** why discrete representation helps\n\n**Reply:** It is an interesting problem and worth deeply studying. Since there are few works exploring the discrete representation in AT before, why discrete representation helps AT is still an open question.\n\nWe provide our insights about the help of discrete representation in DAT, which lies in two aspect:\n\n**(1)** As shown in [36], instead of focusing or spending capacity on image pixel level noise and imperceptible local details, discrete representation captures important features, preserves the global structure and semantics of an object. So crafting AEs in discrete symbolic space yields more meaningful semantic perturbations for our DAT training. To provide some evidence for the more meaningful semantic perturbations on discrete representation, we add more discussion and comparison of discrete AEs and pixel-space AEs in Appendix D.5. For fairness, we keep the same attack success rate of all compared AEs. The study has three aspects:\n\n- *Discrete perturbations create more realistic AEs.* We add a visualization of pixel-wise AEs and discrete AEs in Figure 8 of Appendix D.5 for subjective photorealism comparison. Pixel-wise perturbations lead to noisy images. By calculating the number of colors [17], we find pixel-wise AEs add more invalid colors, resulting in a noisy image. While discrete perturbations have minor changes on the color numbers of original image. Such subtle change is hard to be perceived by humans. \n- *Discrete perturbations have more low frequency component.* We conduct frequency analysis on compared AEs in second row of Figure 8. Pixel-wise perturbations introduce more high frequency component. It may lead the pixel-wise AEs to far away from natural distributions. However discrete perturbations will not introduce unnecessary high-frequency components in original image. \n- *Discrete perturbations are more structural.* From the perturbation visualization in third row of Figure 8 , we find discrete perturbations have more structured information about objects, shown it attends to more important locations. While pixel-wise perturbations are noisy and disordered. \n\n**(2)** From the perspective of distribution, we show in Line 193 that the AEs crafted based on discrete representation are closer to the natural image distribution. It can reduce the underlying distribution shift caused by pixel-space AEs [26], and enhance the robustness and generalization without sacrificing clean performance. \n\nAbove we show some superior properties of discrete AEs. Several of properties are shown benefit for AT in previous works [26,28,a]. [28] shows the structured adversarial perturbations can achieve significant performance gains over non-adversarial baseline and adversarial training with pixel perturbations. [26] shows solving the distribution shift problem of clean and adversarial examples in AT can help the clean performance and generalization. [a] thinks the latent space contains compressed semantic-level features. So AT on the perturbations generated in latent space may guide the classification model to use robust features instead of achieving high accuracy by exploiting non-robust features in the image space. \n\nOverall, we think such superior properties of discrete AEs contribute to the good performance of our DAT. We are also welcome any other insights in the future study to discuss this open problem. 
\n\n**Reference:** \n\n[a] “Dual Manifold Adversarial Robustness: Defense against Lp and non-Lp Adversarial Attacks”, Wei-An Lin, Chun Pong Lau, Alexander Levine, Rama Chellappa, Soheil Feizi\n",
" **Q3:** lack of adversarial robustness and comparison to more SotA methods on RobustBench; report robust accuracy values for epsilon = 4/255 on ImageNet and L_inf for AutoAttack; report robust accuracy values for the epsilon=8/255 on CIFAR10 and L_inf for AutoAttack\n\n**Reply:** It should be denoted that our DAT is not belonging to the field of adversarial robustness research [c,d,e]. The pure adversarial robustness [c,d,e] only cares about the performance under worst $l_{p}$ bounded perturbations, but sacrificing the clean performance. Instead, DAT focuses on general robustness, where many aspects need to be considered: clean performance, corruption robustness, generalization ability, adversarial robustness, transferability to downstream tasks, etc. Recently, a line of research [26, 27, 28, 32, 39, 53] is proposed in this general robustness field. Our DAT is the follow up work in this area. **All of them do not claim the SoTA results on $l_{p}$ bounded adversarial robustness, but aim to achieve comprehensive improvement on multiple types of robustness.**\n\n**Therefore, it is unreasonable to compare our method with SoTA adversarial robustness methods on RobustBench, since they actually belong to different research areas.** The standard protocol of AutoAttack (4/255 on ImageNet, 8/255 on CIFAR10) is proposed for evaluating methods in pure adversarial robustness area, which is also not appropriate for DAT evaluation. RobustBench actually has multiple leaderboards. We show on Leaderboard of ImageNet Common Corruptions (ImageNet-C) in RobustBench, our DAT can achieve the SoTA results with 73.61% robust accuracy. This result is also higher than many other methods not in the leaderboard, such as MAE[19], DrViT[39], PyramidAT[28], etc. \n\nHowever, the kind suggestions of the reviewer is appreciated. It is interesting to apply the idea of discrete adversarial training into the field of adversarial robustness research. We are happy to do some exploration along this direction.\n\n------\n\nWe hope that our response has addressed all of your concerns. Thank you for your time and feedback on our submission! Please don't hesitate to let us know if you have any remaining questions or concerns.\n\n**Reference:** \n\n[c] “Towards Deep Learning Models Resistant to Adversarial Attacks”, Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu\n\n[d] \"Theoretically Principled Trade-off between Robustness and Accuracy\", Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric P. Xing, Laurent El Ghaoui, Michael I. Jordan\n\n[e] “Adversarial Weight Perturbation Helps Robust Generalization”, Dongxian Wu, Shu-tao Xia, Yisen Wang\n\n",
" We thank the reviewer for the time and constructive comments.\n\n------\n**Q1:** missing implementation details of the adversarial robustness experiments - FGSM and Damagenet. There are some typos which needs to be proofread.\n\n**Reply:** Sorry for the unclarity. For FGSM, we take sign operation on backward adversarial gradient, and multiply it with epsilon=1/255 to get the perturbation. We add the perturbation on original input to get adversarial examples, which is inferred by model to calculate top@1 accuracy. For DamageNet, it consists of 50000 adversarial examples which is pre-generated in [46]. [46] claims these images can fool ImageNet models to have error rate up to 85%, and can be used for evaluating the model performance against transferable adversarial attacks. In this work, we directly inference on these examples and calculate top@1 accuracy. \n\nWe have carefully checked the typos and improved the writing in the revised manuscript.\n\n------\n\n**Q2:** The paper lacks theoretical intuition and the experimental section needs more stronger baselines for robustness tasks. For example the intuition that the straight-through gradient computation and backpropagation can just be skipped seems very odd to me. At least some empirical evidence (comparison of the adversarial examples that are actually computed by backpropagation) should be reported.\n\n**Reply:** There are two places using straight-through gradient estimator. The first is in Line 159, it is widely used in VQVAE [36]. VQVAE adopts the technique proposed in [a] to learn the non-differentiable vector quantization module. [a] shows straight-through method has the right sign to “back-propagate” the non-smooth neurons. Another work [b] provided the theoretical justification for how the straight-throught estimator minimizing the training loss. Till now, straight-through gradient estimator is good performed in VQGAN optimization. So we followed this method. The citation of two papers ([a], [b]) has also been added in latest revision. \n\nThe second straight-through gradient estimator from $Q(x)$ to $x$ is in Line 169. It is necessary for DAT, because the GPU memory cost will become 4× and training time will get 8× longer if we do not make this assumption. It is indeed a strict assumption. The reviewer may concern such hypothesis will fail in practice. Therefore, we show some empirically results to explore how this hypothesis behaves in not idea setting: \n\n**1) The similarity of the backward gradients on $Q(x)$ and $x$.** We add the visualization of backward gradients on $Q(x)$ and $x$ in Appendix D.6. It suggests the high visual similarity of the gradients on $Q(x)$ and $x$. \n\n**2) The attack ability of discrete adversarial examples under this assumption.** The reviewer may concern, in practice (not in ideal setting), if the straight-through method can still accurately estimate the direction of the adversarial gradient. To relieve this concern, we present some results below, which suggest the discrete adversarial examples crafted by straight-through gradient is still with strong attack ability. It demonstrates the gradient of $Q(x)$ is effective for approximating the adversarial gradients on $x$. \nWe compare the attack strength of discrete adversarial examples bellow: \n\n| Type of AEs | $\\epsilon$ | Different VQGAN | Attack Suc. 
Rate |\n| ---- | ---- | ---- | ---- |\n| FGSM [45] | 1/255 | - | 87.81% |\n| Discrete AE w/ Backward Gradients [36] | - | VQGAN with FID=1.14 | 84.62% |\n| Discrete AE w/ Straight-Through Gradients | - | VQGAN with FID=1.14 | 82.44% |\n| Discrete AE w/ Backward Gradients [36] | - | VQGAN with FID=4.98 | 83.56% |\n| Discrete AE w/ Straight-Through Gradients | - | VQGAN with FID=4.98 | 80.17% |\n| Discrete AE w/ Backward Gradients [36] | - | VQGAN with FID=7.94 | 82.90% |\n| Discrete AE w/ Straight-Through Gradients | - | VQGAN with FID=7.94 | 79.27% |\n\nWe adopt pretrained resnet50 as target model. The attack success rate (87.81%) of FGSM [45] in pixel space is shown for reference. The better VQGAN model is used, the higher Attack Success Rate (ASR) can be achieved. Compared with FGSM, the AEs in discrete space with backward gradients [36] get slight drop on ASR caused by the information compression in discrete spaces. The straight-through method used in our DAT can produce AEs with 82.44%, 80.17% and 79.27% ASR for VQGAN with FID=1.14, 4.98, 7.94 respectively. It is few points lower than using the directly backward gradients, but can still keep relatively high attack strength, making nearly 80% examples misclassified. \n\n**Reference:** \n\n[a] “Estimating or Propagating Gradients Through Stochastic Neurons for Conditional Computation”, Yoshua Bengio, Nicholas Leonard and Aaron Courville\n\n[b] “Understanding Straight-Through Estimator in Training Activation Quantized Neural Nets”, Penghang Yin, Jiancheng Lyu, Shuai Zhang, Stanley Osher, Yingyong Qi, Jack Xin",
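The FGSM evaluation described in the Q1 reply (sign of the input gradient, scaled by ε = 1/255) can be sketched as follows; `model`, `x`, `y` are hypothetical placeholders, not the authors' evaluation code:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=1/255):
    """FGSM: a single signed-gradient step of size eps in pixel space."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    return (x_adv + eps * grad.sign()).clamp(0, 1).detach()
```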
" **Q3:** \"We delete the constraint term since there is no need to bound the per-pixel values of $\\delta$\" --what are the effects on the performance if the perturbation is bounded? is it better / worse in general? Is is better for some eval datasets but worse for others?\n\n**Reply:** A similar question is proposed by Reviewer wZbH, where we explain why we do not bound the per-pixel values of $\\delta$. Here we address your concerns about the effects on the performance if the perturbation is bounded. We choose $l_{\\infty}$ bound to conduct some analytical experiments as follows: \n\n**(1) How per-pixel bound effects on $\\delta$.** To study the effect of per-pixel bound on $\\delta$, we first use $\\alpha=0.1$ to generate 1000 samples and count the proportion of the $\\delta$ in different perturbation intervals:\n| Intervals | Proportion of the $\\delta$ in the interval |\n| ---- | ---- |\n| (-1/255, 1/255) | 28.3% |\n| (-2/255, 2/255) | 95.9% |\n| (-4/255, 4/255) | 100% |\n\nWith above table, using bound of (-4/255, 4/255) almostly has no impact of $\\delta$. While using bound of (-1/255, 1/255) and (-2/255, 2/255) may potentially impact the training performance, which is discussed in below experiment. \n\n**(2) How per-pixel bound effects on the performance of DAT.** We rerun DAT on ResNet50 by adding different $l_{\\infty}$ bound on $\\delta$, and keeping other settings unchanged. \n| $l_{\\infty}$ Bounds on $\\delta$ | ImageNet-Val | FGSM | DamageNet | A | C↓| V2 | R | Sketch | Stylized |\n| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |\n| no bounds | 76.52 | 30.66 | 14.42 | 4.38 | 74.16 | 65.02 | 41.90 | 27.27 | 10.8 |\n| (-4/255, 4/255) | 76.47 | 31.43 | 14.25 | 4.31 | 74.12 | 65.07 | 41.68 | 26.99 | 10.62 |\n| (-2/255, 2/255) | 76.16 | 29.75 | 13.24 | 3.75 | 74.87 | 64.32 | 40.38 | 25.53 |9.31 |\n| (-1/255, 1/255)| 76.10 | 29.41 | 12.00 | 3.53 | 75.53 | 64.11 | 39.05 | 25.04 | 8.69 |\n\nAs shown in above table, DAT achieves best performance when $\\delta$ is not bounded. The worst result is appeared when $\\delta$ is bounded between (-1/255, 1/255). With larger $l_{\\infty}$ bound, the results become better.\n\n------\n\n**Q4:** Line 166: Since xˆ ≃ x established for an ideal discretizer Q -- this connects to my question-3: does the magnitude of the perturbation affect this assumption? Will a large perturbation make this assumption false?\n\n**Reply:** To assist our response of Q3, here we give some intuitive results to show the proximity between $Q(x)$ and $x$. We choose 5 different magnitudes by setting different $\\alpha$ (0.0, 0.1, 0.2, 0.3, 0.4), and compute the L2 distance, LPIPS between original image and its corresponding reconstruction.\n\n| Alpha | Interval of per-pixel value of $\\delta$ | L2 | LPIPS |\n| ---- | ---- | ---- | ---- | \n| 0.0 | - | 5.4e-3 | 0.082 |\n| 0.1 | (-1.14/255, 1.14/255) | 5.4e-3 | 0.083 |\n| 0.2 | (-2.29/255, 2.28/255) | 5.4e-3 | 0.083 |\n| 0.3 | (-3.44/255, 3.41/255) | 5.4e-3 | 0.083 |\n| 0.4 | (-4.58/255, 4.57/255) | 5.4e-3 | 0.083 |\n\nWhen $\\alpha=0.0$, $Q(x)$ has a very small L2 distance to original $x$. It suggests the strong reconstruction ability of VQGAN model, which greatly supports our assumption. Meanwhile, adding larger perturbation on $x$ has little effect on the reducibility of the reconstruction process. Our assumption is not false under such condition. \n\n------\n\n**Q5:** Line 184: Previous work has pointed out that the underlying distributions of adversarial examples are different from clean images. 
-- which paper? please cite.\n\n**Reply:** Sorry for the unclarity, we have added the citation ([26]) in revised paper. \n\n------\n\n**Suggestions:** add discuss about VILLA and AGAT \n\n**Reply:** Thanks for your kind suggestion, we have discussed the VILLA and AGAT in the Related Work of the revised version. VILLA is also a representation enhancement technique using AT, but it is only applied for vision-and-language representation learning task. AGAT relies on a set of pre-defined attributes, this constraint makes it hard to transfer broader tasks where attributes are not given. As compared, DAT is more generic to most vision tasks, and it does not require any additional attribute annotation. \n\n------\n\n**Suggestions:** other typos\n\n**Reply:** Thanks very much for pointing out the typos in our paper. We really appreciate you for your carefulness and conscientiousness. We have carefully checked the typos and improved the writing in the revised manuscript.\n\n------\n\n**Limitations:** add discussion of limitations\n\n**Reply:** We have discussed the limitation in section of Conclusions. See in revised paper. \n\n------\n\nWe hope that our response has addressed all of your concerns. Thank you for your time and feedback on our submission! Please don't hesitate to let us know if you have any remaining questions or concerns.\n",
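The L2 and LPIPS distances in the Q4 table can be computed with the lpips package (https://github.com/richzhang/PerceptualSimilarity); `x` and `x_rec` are hypothetical batches of original images and their VQGAN reconstructions in [0, 1]:

```python
import torch
import lpips  # pip install lpips

loss_fn = lpips.LPIPS(net='alex')          # AlexNet-backed perceptual metric
l2 = ((x - x_rec) ** 2).mean().item()      # mean squared pixel distance
# lpips expects inputs scaled to [-1, 1]
perceptual = loss_fn(2 * x - 1, 2 * x_rec - 1).mean().item()
```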
" \nWe thank the reviewer for the time and insightful comments. \n\n**Weakness3:** The effect of discrete perturbations is not discussed. In pixel-wise AT, the perturbation leads to noisy images. But that need not be true for DAT -- a comparison should be included, perhaps in terms of photorealism comparison between \"real\" images and perturbed images.\n\n**Reply:** An advanced property of discrete perturbations is that it does not change the distribution of the original image in most case. As discussed in Line 182, such property reduces the distribution shift between clean images and AEs in traditional AT, yielding both robustness and clean performance improvement. \n\nTo explore what effect of discrete perturbations contributed to this advanced property, and what is the difference of discrete perturbations with pixel-wise perturbations, we add more discussion in Appendix D.5. For comparative fairness, we keep the same attack success rate of the generated discrete and pixel-wise AEs. The study has three aspects:\n\n- *Discrete perturbations create more realistic AEs.* We add a visualization of pixel-wise AEs and discrete AEs in Figure 8 of Appendix D.5 for subjective photorealism comparison. As said for the reviewer, pixel-wise perturbations lead to noisy images. By calculating the number of colors [17], we find pixel-wise AEs add more invalid colors, resulting in a noisy image. While discrete perturbations have minor changes on the color numbers of original image. Such subtle change is hard to be perceived by humans.\n- *Discrete perturbations have more low frequency component.* We conduct frequency analysis on compared AEs in second row of Figure 8. Pixel-wise perturbations introduce more high frequency component. It may lead the pixel-wise AEs to far away from natural distributions. However discrete perturbations will not introduce unnecessary high-frequency components in original image.\n- *Discrete perturbations are more structural.* From the perturbation visualization in third row of Figure 8 , we find discrete perturbations have more structured information about objects, shown it attends to more important locations. While pixel-wise perturbations are noisy and disordered. \n\n------\n\n**Q1, Q2:** How DAT performs on DG tasks? if some of the baselines that you have used have been shown to be better than GUD, MADA?\n\n**Reply:** Thanks for your good advice. Results on Domain Generalization (DG) can demonstrate the ability of our DAT on small-scale tasks rather than ImageNet. It helps to further improve the significance of DAT. \n\nWe carefully read the recommended papers: GUD, MADA, ME-ADA. Fortunately, we find on CIFAR-10-C benchmark, a part of compared baselines in this work have been shown to be better than GUD, MADA and ME-ADA. Specifically, the GUD, MADA and ME-ADA get 58.26%, 65.59%, and 80.5% average accuracy on CIFAR-10-C respectively. While in our paper, the compared baseline Augmix [32] achieves 87.5% avg. accuracy, surpassing above three methods with a large gap.\n\nBut we think it is still necessary to evaluate DAT on DG benchmarks. We choose PACS datasets, and train on 3 domains for generalizing to remaining unseen domain. Standard AlexNet is used to keep the setting consistent with previous works. 
The results are shown in below table: \n\n| DomainID | ME-ADA | DAT |\n| ---- | ---- | ---- |\n| Art | 67.1% | 67.3 |\n| Cartoon | 69.9% | 71.3 |\n| Photo | 88.6% | 87.8 |\n| Sketch | 63.0% | 64.1% |\n| Avg | 72.2% | 72.6% |\n\nDAT can also achieve better performance on DG task such as PACS. It has slight drop on domain of photo, but improves the transferability on other three domains.\n\n",
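The frequency analysis referenced in the Weakness 3 reply (second row of Figure 8) amounts to inspecting the centered log-magnitude spectrum of the perturbations: energy far from the center indicates high-frequency content. A minimal sketch, assuming a perturbation batch `delta` of shape (N, C, H, W):

```python
import torch

def log_magnitude_spectrum(delta):
    """Centered log-magnitude spectrum, averaged over batch and channels."""
    spec = torch.fft.fftshift(torch.fft.fft2(delta), dim=(-2, -1))
    return torch.log1p(spec.abs()).mean(dim=(0, 1))  # (H, W) map to visualize
```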
" The paper proposes Discrete Adversarial Training (DAT) strategy for vision tasks that avoids continuous-Lp based perturbations and encourages adversarial examples generated through image discretization using VQGAN. Authors derive the motivation for using image discretization from the observation of improved generalization via discrete adversarial training in NLP tasks. Adversarial examples generated using this discretization process are used for training different architectures for different downstream tasks. Unlike traditional adversarial training (AT), DAT shown to improve network generalization across different tasks, different distribution shifts and also improve adversarial robustness. Major improvements are noticed in image classification task and minimal or comparable performance seen on object detection and semantic segmentation tasks. Strengths:\n1)\tWell written paper. Easy to understand.\n2)\tMotivation is explained clearly.\n3)\tMajor strength lies in the extensive experimental evaluation.\n4)\tShown improvements of network generalization across different ImageNet distribution shift datasets and higher adversarial robustness on both ResNet50 and ViT.\n5)\tConducted ablation studies to understand the effect of different components.\n\nWeakness:\nDifferent assumptions are made to ease the computation of gradients, backpropagation through VQGAN to generate adversarial examples. No theoretical details are shown to support the assumptions and approximations are made through heuristics.\n In line 148, it is mentioned that “We delete the constraint term since there is no need to bound the per-pixel values of \\delta”. Would you elaborate on why this constraint is not needed?\n\nIn line 157, authors mention “as proposed in previous work, a straight-through gradient estimator can be used by copying the gradients from v_q to v”. Which previous work is referred here? Would you provide justification how the gradients can be estimated and copied from v_q to v? \n\nSimilarly a straight-through gradient estimator was used between Q(x) and x. How does this hold in practice (not in ideal setting) ?\n\nIn line 179, it is mentioned that “x + \\delta is discretized by VQGAN again and acts as the adversarial input”. Do authors cross check that these adversarial inputs generated by VQGAN still mislead the classifier F?\n Authors have not discussed about their limitations. I find no potential negative impact from this work. The method requires heavily parameterized VQGAN to train different networks. This imposes heavy computation time and budget during the training process.",
" This paper proposes discrete adversarial training to boost the performance on representation learning. Extensive results on image classification, object detection and self-supervised learning validate the effectiveness. ## Strengths:\n\nThe presentation is clear and extensive experimental results are convincing.\n\n## Weaknesses:\n\n1. My biggest concern is why discrete representation helps?\n\n2. DrVit is very similar except for inner optimization and quantization training. DrVit is training VQVAE from scratch. Will that become unfair comparison? For example, you are using different data for training or pretraining. Could you provide the results if the proposed method is also trained from scratch using the same dataset, loss and architecture? Or DrVit is also using VQGAN\n\n3. In Table 1, why the winner between DrVit and AugReg-Vit keeps changing on different metrics?\n\n See above. n/a",
" The paper proposes a discrete adversarial training strategy to alleviate the robustness-generalization trade-off in vision tasks. Motivated from the adversarial training used in NLP models, the authors claim that a discrete representation of the image space will aid in more robust models without much drop in generalization. The method utilizes a VQGAN model to generate discrete adversarial samples for adversarial training. The authors show that the adversarial training strategy enhances the performances of various vision tasks such as classification, object detection and self-supervised learning. + Originality - The idea to use discrete adversarial samples to enhance the performance of vision tasks seems novel/interesting. \n\n+ Clarity - The paper is well written and can be easily understood. \n\n- Clarity - I felt some of the experimental details are missing. For example, the implementation details of the adversarial robustness experiments - FGSM and Damagenet. There are some typos which needs to be proofread.\n\n- Quality - The paper lacks theoretical intuition and the experimental section needs more stronger baselines for robustness tasks. For example the intuition that the straight-through gradient computation and backpropagation canjust be skipped seems very odd to me. At least some empirical evidence (comparison of the adversarial examples that are actually computed by backpropagation) should be reported. \n\n- Significance - It is difficult to analyze the superiority of the method in adversarial robustness since the robustness experiments is missing baselines from RobustBench. However, number for much smaller epsilons than the ones usually evaluated in robustBench indicate that the method strongly underperforms in this respect. \n- Without error bars, it is unclear how significant the results are.\n\n\nAfter the rebuttal and extentive discussion, I see that the proposed approach might have potential w.r.t. model generalization. My orignial argument w.r.t. adversarial robustness holds, but I am willing to increase my score based on the presented results. - Please report robust accuracy values for epsilon = 4/255 on ImageNet and L_inf for AutoAttack.\n\n- I understand that error bars on ImageNet are not possible - but for CIFAR, that would be easier - and allow for a comparison to more SotA methods on RobustBench. For CIFAR, the epsilon for the comparison should be 8/255 for L_inf on AutoAttack.\n\n\n\n The authors already mention the limitations of the proposed method – the computational cost and lack of theoretical intuition. Although there are some empirical intuitions provided in the paper (fig 2), the missing error bars in all experiments make the empirical findings less convincing.",
" This paper studies the problem of adversarial training for vision tasks by borrowing ideas from NLP. A new method called \"DAT\" (Discrete Adversarial Training) is proposed. DAT uses a VQA-GAN encoder to convert images into discrete tokens, by learning a vocabulary of words (aka image codebook). Adversarial perturbations are applied on these tokens. Performance is reported on:\n1. 6 versions of Imagenet for evaluating robustness (Imagenet-A/C/V2/R/ Stylized/Sketch)\n2. against adversarial attacks (FGSM, DamageNet)\n3. COCO-C for object detection ## Strengths\n1. The motivation of the paper comes from an informed perspective of both vision and NLP literature on AT, and I like the fact that the phenomenon from NLP of AT improving both robustness and clean accuracy is used as a goal for AT in vision. This exchange of ideas often leads to improvements across various application domains of machine learning. \n2. Performance is evaluated extensively and reported \n - 3 benchmarks (classification, detection, self-supervised image classification) \n - multiple architectures\n3. The paper is well written, especially the methods section. I have a few questions about the method though (see below)\n\n## Weaknesses\n1. One major weakness is that previous work on adversarial training has been used for improving domain generalization (see my question1). However, experiments have not been performed in this paper on DG benchmarks.\n2. The methods section has some assumptions (see Q 3 and 4) -- these assumptions are not clearly justified (either theoretically or intuitively). More study along these assumptions is needed. \n3. The effect of discrete perturbations is not discussed. In pixel-wise AT, the perturbation leads to noisy images. But that need not be true for DAT -- a comparison should be included, perhaps in terms of photorealism comparison between \"real\" images and perturbed images. ## Questions \nAdversarial training has also been used in image classification literature for domain generalization: see (GUD: Volpi et al NeurIPS 2018 https://arxiv.org/abs/1805.12018, MADA: Qiao et al. CVPR 2019 https://arxiv.org/abs/2003.13216, ME-ADA: Zhao et al. NeurIPS 2020 https://arxiv.org/abs/2010.08001). This leads me to two questions:\n1. How does DAT perform on domain generalization benchmarks such as Digits, PACS, OfficeHome etc.?\n2. On the imagenet benchmarks presented in this paper, how does DAT compare with GUD, MADA? (if some of the baselines that you have used have been shown to be better than GUD, MADA -- please let me know which paper).\n\n3. Line 148: `We delete the constraint term since there is no need to bound the per-pixel values of $\\delta$` -- what are the effects on the performance if the perturbation is bounded? is it better / worse in general? Is is better for some eval datasets but worse for others?\n4. Line 166: `Since xˆ ≃ x established for an ideal discretizer Q` -- this connects to my question-3: does the magnitude of the perturbation affect this assumption? Will a large perturbation make this assumption false?\n5. Line 184: `Previous work has pointed out that the underlying distributions of adversarial examples are different from clean images.` -- which paper? please cite.\n\n## Suggestions (about References, Structure, Grammar, etc.)\n1. References: you may also add references to \"vision-like\" AT that have been explored in NLP. You have mentioned one of them (FreeLB). 
Another one is:\n - https://arxiv.org/abs/2006.06195 (NeurIPS 2020) -- here \"VILLA\" perturbs encoded features (of images and/or text) encoded by pretrained V&L model. In that sense, DAT (which perturbs the VQGAN encoding of images) is similar to VILLA, but DAT perturbs in the symbolic codebook space.\n2. References: other forms of adversarial training (beyond pixel-wise perturbations) have been explored for vision tasks:\n - https://arxiv.org/abs/2012.01806 (AAAI 2021) -- \"AGAT\" is an adversarial training pipeline which starts with a given knowledge of symbolic attributes of images, and then perturbs images along those attributes. A comparison should be made between DAT and AGAT: in my view DAT indirectly uses the symbolic knowledge from the VQGAN encoder, whereas AGAT assumes that such symbols/attributes will be given.\n3. Typo: Fig 4 caption: last word should be \"attackers\".\n4. Fig 3 caption: please mention in words what $\alpha$ is (step size/magnitude of adversarial update).\n5. Fig 2: please label the x-axis\n6. Line 201: `Using robustness`: please rephrase this to `using \"robustness\" library` since robustness is a standard ML term, so people might get confused. Limitations are not discussed clearly. Please add a section in/after conclusion to discuss them. Especially about the assumptions in the method."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
5
] | [
"e2EpqV79LYG",
"CUe1oHD9WwA",
"V996WVz7bpR",
"MNaDTdG2UXg",
"h6XcIEiKUHC",
"O5Wm2NX7rJa",
"hLc9NFqHrGG",
"JKF_iztQzR7",
"h6XcIEiKUHC",
"r7DYUmhUdGP",
"Juif_1WK9iNS",
"V996WVz7bpR",
"oxv-Q87Mmu",
"GLwQt-bz9eH",
"cOA0WnGQMdn",
"dC1nWx-ayqR",
"O5Wm2NX7rJa",
"nips_2022_qtZac7A3-F",
"mo8NOey8nMw",
"iilWEQjXtgO",
"S5PpiPlQ1pW",
"j5YCCP4Yckz",
"s5Xzr0RBQ3s",
"O5Wm2NX7rJa",
"79QncuKlygx",
"JsnM536AE3m",
"nips_2022_qtZac7A3-F",
"nips_2022_qtZac7A3-F",
"nips_2022_qtZac7A3-F",
"nips_2022_qtZac7A3-F"
] |
nips_2022_8AB7AXaLIX5 | Concept Activation Regions: A Generalized Framework For Concept-Based Explanations | Concept-based explanations allow users to understand the predictions of a deep neural network (DNN) through the lens of concepts specified by users. Existing methods assume that the examples illustrating a concept are mapped in a fixed direction of the DNN's latent space. When this holds true, the concept can be represented by a concept activation vector (CAV) pointing in that direction. In this work, we propose to relax this assumption by allowing concept examples to be scattered across different clusters in the DNN's latent space. Each concept is then represented by a region of the DNN's latent space that includes these clusters and that we call concept activation region (CAR). To formalize this idea, we introduce an extension of the CAV formalism that is based on the kernel trick and support vector classifiers. This CAR formalism yields global concept-based explanations and local concept-based feature importance. We prove that CAR explanations built with radial kernels are invariant under latent space isometries. In this way, CAR assigns the same explanations to latent spaces that have the same geometry. We further demonstrate empirically that CARs offer (1) more accurate descriptions of how concepts are scattered in the DNN's latent space; (2) global explanations that are closer to human concept annotations and (3) concept-based feature importance that meaningfully relate concepts with each other. Finally, we use CARs to show that DNNs can autonomously rediscover known scientific concepts, such as the prostate cancer grading system. | Accept | All reviewers have found the paper to be a solid contribution on a highly important topic, addressing the major shortcomings of the notable work CAV in the concept-based explainability area. One such shortcoming is that CAV assumes that examples corresponding to a concept are all mapped in a fixed direction in the DNN's latent feature space, which can be restrictive in practice. The proposed technique relaxes a fundamental assumption made in CAV, thereby increasing its effectiveness. As one main contribution, the reviewers have found the relaxation of the linear separability in the latent space sound, and the implemented concept activation regions well capturing the spread of concept-related features in the latent space. There are some concerns about the experimental analysis: not many DNN architectures have been considered, and there is a lack of results without human annotations for concepts as well as of thorough robustness analyses. The authors have somewhat addressed these, although there is still some room for improvement. Overall, the positive aspects of the paper outweigh the concerns and I suggest acceptance of the paper. | train | [
"XbjrLkV6s2x",
"V3acIj-TF84",
"VR1w_6n_-j",
"EZk7ggxpWW8",
"i3mzRkyWEP0",
"qQILpanzQWX",
"CIWceAjIOsG",
"XlPO4pEmRzF",
"M0kQoVrLlEH",
"3-hn0-auiDo",
"WaGm0j74oR3",
"PcgZJkSKQd",
"jeFKFsFAsJT",
"_2ibg3z6imj",
"1xwZWXvoM4P",
"ZW-GK-V7W2u",
"Y45Kj6l2rv",
"dR7XgsdFH7f"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for the extensive rebuttal. \n\nAfter reading the other reviews and the authors answers, I find that the authors addressed most of the concerns. I therefore raise my score accordingly. ",
" Dear reviewer,\n\nas requested, we have performed an analysis of TCAR by using the CAR sensitivity\n\n$$ S^c_k(x) \\equiv (\\nabla_{h} \\rho^c [g(x)])^{\\intercal} (\\nabla_{h} l_k [g(x)]),$$\n\nand by then computing the modified TCAR score as in Section 2.1 of our paper:\n\n$$\\mathrm{TCAR-S}^c_k = \\frac{| \\\\{ x \\in \\mathcal{D}_k \\mid S^c_k(x) > 0 \\\\} |}{| \\mathcal{D} |}.$$\n\nIn the same setting as in the CUB Experiment from Section 3.1.2 of our paper, we compute the correlation between this modified TCAR score and the ground-truth proportion of examples within a class that exhibit the concept. We obtain a correlation of $r(\\mathrm{TCAR-S}, \\mathrm{TrueProp}) = .5$, which is similar to the correlation obtained with TCAV and inferior to our standard TCAR score. This support the explanation provided in Point 3 of the rebuttal: the concept sensitivity does *not* seem to be the most appropriate way to detect the association between a class and a concept. Using the standard definition of TCAR based on the concept activation regions leads to substantially better result, hence we recommend to use this definition to generate global explanations. ",
" as title",
" Dear Authors,\n\nThanks for addressing/answering all my questions, and I am satisfied with the responses. I have updated my scores to reflect the same.",
" Dear reviewer,\n\nOnce again, we would like to thank you for your feedback. We hope that our rebuttal has addressed any questions or concerns you may have had about the paper. If you have any other comments or concerns, please let us know. We would be happy to do our utmost to address them during the author-reviewer discussion period,which ends this Tuesday.\n\nBest regards.",
" Dear reviewer,\n\nOnce again, we would like to thank you for your feedback. We hope that our rebuttal has addressed any questions or concerns you may have had about the paper. If you have any other comments or concerns, please let us know. We would be happy to do our utmost to address them during the author-reviewer discussion period, which ends this Tuesday.\n\nBest regards.",
" Dear reviewer,\n\nOnce again, we would like to thank you for your feedback. We hope that our rebuttal has addressed any questions or concerns you may have had about the paper. If you have any other comments or concerns, please let us know. We would be happy to do our utmost to address them during the author-reviewer discussion period,which ends this Tuesday.\n\nBest regards.",
" We would like to thank the reviewer for taking the time to make encouraging comments and constructive criticisms. By following the reviewer's suggestions, we were able to:\n\n1. Generalize the TCAV sensitivity metric with our CAR formalism.\n2. Demonstrate the utility of CAR explanations to understand abstract concepts discovered in an unsupervised fashion.\n3. Demonstrate that our explanations are robust with respect to adversarial perturbations and background shifts.\n4. Demonstrate that CAR classifiers can be used with NLP models.\n5. Clarify the existing contributions for discovering concepts without human intervention.\n\nWe believe that all of these points make a great addition to the manuscript.\n\n## 1. Generalizing CAV Sensitivity Interpretations\n\nWe thank the reviewer for suggesting this interesting extension. In our formalism, it is perfectly possible to define a *local* concept activation vector through the concept density $\\rho^c : \\mathcal{H} \\rightarrow \\mathbb{R}^+$ defined in Definition 2.1 from the main paper. Indeed, the vector $\\nabla_{h}\\rho^c[h] \\in \\mathcal{H}$ points in the direction of the representation space $\\mathcal{H}$ where the concept density (and hence the presence of the concept) increases. Hence, this vector can be interpreted as a *local* concept activation vector. Note that this vector becomes global whenever we parametrize the concept density $\\rho^c$ with a linear kernel $\\kappa(h_1, h_2) = h_1^{\\intercal} h_2$. Equipped with this generalized notion of concept activation vector, we can also generalize the CAV concept sensitivity $S^c_k$ by replacing the CAV $w^c$ by $\\nabla_{h}\\rho^c[h]$ for the representation $h = g(x)$ of the input $x \\in \\mathcal{X}$: \n\n\n$$S^c_k(x) \\equiv (\\nabla_{h} \\rho^c [g(x)])^{\\intercal} (\\nabla_{h} l_k [g(x)]).$$\n\nIn this way, all the interpretation provided by the CAV formalism are also available in the CAR formalism. This discussion has been added in Appendix B of the manuscript.\n\n## 2. Using CAR with Unsupervised Concepts\n\n\nOur CAR formalism adapts to a wide variety of neural network architectures. As suggested by the reviewer, we use CAR to analyze the concepts discovered by a self explaining neural network (SENN) trained on the MNIST dataset. As in *Alvarez-Melis, D., & Jaakkola, T. (2018). Towards Robust Interpretability with Self-Explaining Neural Networks.*, we use a SENN of the form\n\n$$f(x) = \\sum_{s=1}^S \\theta_s (x) \\cdot g_s(x),$$\n\nWhere $h_s(x)$ and $\\theta_s(x)$ are respectively the activation and the relevance of the synthetic concept $s \\in [S]$ discovered by the SENN model. We follow the same training process as Alvarez-Melis et al. This yields a set of $S = 5$ concepts explaining the predictions made by the SENN $f : \\mathcal{X} \\rightarrow \\mathcal{Y}$.\n\nWe use our CAR formalism to study how the synthetic concepts $s \\in [S]$ discovered by the SENN are related to the concepts $c \\in \\{ \\mathrm{Loop}, \\mathrm{Vertical \\ Line}, \\mathrm{Horizontal \\ Line}, \\mathrm{Curvature} \\}$ introduced in our paper. With our formalism, the relevance of a concept $c$ for a given prediction $x \\mapsto f(x)$ is measured by the concept density $\\rho^c \\circ g (x)$. To analyze the relationship between the SENN concept $s$ and the concept $c$, we can therefore compute the correlation of their relevance:\n\n$$r(s, c) = \\mathrm{corr}\\_{X \\sim P_{\\mathrm{empirical}}(\\mathcal{D}_{\\mathrm{test}})} [\\theta_s(X) , \\rho^c\\circ(X)]. 
$$\n\nWhen this correlation increases, the concepts $s$ and $c$ tend to be relevant together more often. We report the correlation between each pair $(s, c)$ in the table below.\n\n| Correlation $r(s , c)$ | **Loop** | **Vertical Line** | **Horizontal Line** | **Curvature** |\n|:---------------:|:----------:|:----------------:|:------------------:|:------------:|\n| **SENN Concept 1** | -0.28 | -0.12 | 0.26 | 0.11 |\n| **SENN Concept 2** | -0.50 | 0.71 | -0.03 | -0.69 |\n| **SENN Concept 3** | -0.47 | 0.10 | 0.71 | -0.14 |\n| **SENN Concept 4** | -0.33 | 0.02 | -0.06 | -0.01 |\n| **SENN Concept 5** | 0.57 | -0.0 | -0.63 | 0.07 |\n\nWe note the following:\n\n1. SENN Concept 2 correlates well with the Vertical Line Concept.\n2. SENN Concept 3 correlates well with the Horizontal Line Concept.\n3. SENN Concept 5 correlates well with the Loop Concept.\n4. SENN Concepts 1 and 4 are not well covered by our concepts.\n\nThe above analysis shows the potential of our CAR explanations to better understand the abstract concepts discovered by SENN models. We believe that the community would greatly benefit from the ability to perform similar analyses for other interpretable architectures, such as disentangled VAEs.",
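A minimal sketch of how the correlation table above can be assembled, assuming the SENN relevances and the CAR concept densities have already been evaluated on the test set; the array names are hypothetical:

```python
import numpy as np

def relevance_correlation(theta, rho):
    """theta: (N, S) SENN concept relevances theta_s(x); rho: (N, C) CAR
    concept densities rho^c(g(x)) on the same N test inputs. Returns r(s, c)."""
    S, C = theta.shape[1], rho.shape[1]
    r = np.zeros((S, C))
    for s in range(S):
        for c in range(C):
            # Pearson correlation between the two relevance sequences
            r[s, c] = np.corrcoef(theta[:, s], rho[:, c])[0, 1]
    return r
```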
" ## 3. Robustness of CAR Explanations\n\nAs suggested by the reviewer, we perform an experiment to evaluate the robustness\nof CAR explanations. We start with adversarial perturbations.\nIn this experiment, we work with the MNIST dataset in the same setting as\nthe experiment from Section 3.1.2 from our paper. We train a CAR concept classifier\nfor each MNIST concept $c \\in [C]$. We use the CAR classifier to output TCAR scores\nrelating the concept $c$ with each class $k \\in [d_Y]$. As in the main paper, since the ground-truth association between concepts and classes is known (e.g. the class corresponding\nto digit 8 will always have the concept loop), we can compute the correlation $r(\\mathrm{TCAR}, \\mathrm{TrueProp})$ between\nour TCAR score and the ground-truth proportion of examples that exhibit the concept.\nIn this experiment, this correlation is evaluated on a test set $\\mathcal{D}\\_{\\mathrm{test}} = \\mathcal{D}\\_{\\mathrm{adv}} \\ \\sqcup \\mathcal{D}\\_{\\mathrm{orig}}$ that contains adversarial\ntest examples $\\mathcal{D}\\_{\\mathrm{adv}}$ and original test examples $\\mathcal{D}\\_{\\mathrm{orig}}$. Each adversarial MNIST image $x\\_{\\mathrm{adv}} \\in \\mathcal{D}\\_{\\mathrm{adv}}$ is constructed by finding a small (w.r.t. the $\\| \\cdot \\|_{\\infty}$ norm) perturbation $\\epsilon \\in \\mathbb{R}^{d_X}$ around an original test image $x \\in \\mathcal{X}$ that maximizes the prediction shift for the black-box $f : \\mathcal{X} \\rightarrow \\mathcal{Y}$:\n\n$$\\epsilon = \\arg \\max_{\\tilde{\\epsilon} \\in \\mathbb{R}^{d_X}} \\mathrm{Cross Entropy}[f(x), f(x + \\tilde{\\epsilon})] \\ s.t. \\ \\| \\tilde{\\epsilon} \\|_{\\infty} < .1$$\n\nThe adversarial image is then defined as $x_{\\mathrm{adv}} \\equiv x + \\epsilon$. We measure the correlation $r(\\mathrm{TCAR}, \\mathrm{TrueProp})$ by varying the proportion $\\frac{|\\mathcal{D}\\_{\\mathrm{adv}}|}{|\\mathcal{D}_{\\mathrm{test}}|}$ of adversarial examples in the test set. The results are reported bellow:\n\n| Adversarial % |$r(\\mathrm{TCAR}, \\mathrm{TrueProp})$| \n|----------------:|---------:|\n| 0 | .99 |\n| 5 | .99 |\n| 10 | .99 |\n| 20 | .99 |\n| 50 | .97 |\n| 70 | .96 |\n| 100 | .92 |\n\nWe observe that the TCAR scores keep a high correlation with the true proportion of examples that exhibit the concept even when all the test examples are adversarially perturbed. We conclude that TCAR explanations are robust with respect to adversarial perturbations in this setting.\n\nFor completeness, we have also adapted the background shift robustness experiment in Section 7 from *Koh, P. et al. (2020). Concept Bottleneck Models*. As in our paper, we use CAR to explain the predictions of our Inception-V3 model trained on the original CUB training set. The explanations are made on test images where the background has been replaced. As Koh et al., we use the segmentation of the CUB dataset to isolate the bird on each image. The rest of the image is replaced by a random background sampled from the *Place365* dataset. This results in a test set $\\mathcal{D}_{\\mathrm{test}}$ with a background shift with respect to the training set. By following the approach from Section 3.1.2 of our paper, we measure the correlation $r(\\mathrm{TCAR}, \\mathrm{TrueProp})$ between the TCAR score and the true proportion of examples in the class that exhibit the concept for each $(\\mathrm{class}, \\mathrm{concept})$ pair. We measured a correlation of $r(\\mathrm{TCAR}, \\mathrm{TrueProp}) = .82$ in the background-shifted test set. 
This is close to the correlation for the original test set reported in the main paper, which suggests that CAR explanations are robust with respect to background shifts. Note that this correlation is still better than the one obtained with TCAV on the original test set.",
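A rough PGD-style sketch of the $\| \cdot \|_{\infty}$-bounded perturbation described above, in PyTorch; this is a plausible reconstruction under stated defaults (step size, iteration count), not the authors' exact attack code:

```python
import torch

def linf_perturbation(f, x, eps=0.1, steps=40, lr=0.01):
    """Maximize CrossEntropy[f(x), f(x + delta)] s.t. ||delta||_inf < eps,
    via projected gradient ascent with sign steps."""
    target = f(x).softmax(dim=-1).detach()           # soft labels from f(x)
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        log_q = f(x + delta).log_softmax(dim=-1)
        loss = -(target * log_q).sum(dim=-1).mean()  # cross-entropy to maximize
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += lr * grad.sign()                # ascent step
            delta.clamp_(-eps, eps)                  # project onto the L_inf ball
    return (x + delta).detach()                      # the adversarial image x_adv
```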
" ## 4. CAR for NLP\n\nCAR is a general framework and can be used in a wide variety of domains that involve neural networks. In our paper, we show that CAR provides explanations for various modalities:\n\n1. Large image dataset\n2. Medical time series\n3. Medical tabular data.\n\nAs suggested by the reviewer, we perform a small experiment to assess if those conclusions extend to the NLP setting. We train a small CNN on the IMDB Review dataset to predict whether a review is positive or negative. We use Glove to turn the word tokens into embeddings. We would like to assess whether the concept $c = \\mathrm{Positive \\ Adjective}$ is encoded in the model's representations.\nExamples that exhibit the concept $c$ are sentences containing positive adjectives. We collect a positive set $\\mathcal{P}^c$ of $N^c = 90$ such sentences. The negative set $\\mathcal{N}^c$ is made of $N^c$ sentences randomly sampled from the Gutenberg Poem Dataset. We verified that the sentences from $\\mathcal{N}^c$ did not contain positive adjectives. We then fit a CAR classifier on the representations obtained in the penultimate layer of the CNN.\n\n We assess the generalization performance of the CAR classifier on a holdout concept set made of $N^c = 30$ concept positive and negative sentences (60 sentences in total). The CAR classifier has an accuracy of $87 \\%$ on this holdout dataset. This suggests that the concept $c$ is smoothly encoded in the model's representation space, which is consistent with the importance of positive adjectives to identify positive reviews. We deduce that our CAR formalism can be used in a NLP setting. We believe that using CARs to analyze large-scale language model would be an interesting study that we leave for future work. \n\n\n## 5. Using Concept Explanations without Human Annotations\n\n\nAlthough concept discovery without human intervention is a very interesting area, it is not the focus of our paper. However, we would like to point out that recent works have proposed to relax the necessity for human concept annotation by extracting the concepts from the model's representation space directly. For instance, in *Ghorbani, A., Wexler, J., & Kim, B. (2019). Automating Interpretability: Discovering and Testing Visual Concepts Learned by Neural Networks.*, the authors automatically extract visual concepts in the form of image segmentations. We note that the extraction happens without any human annotation. Once the concepts are identified, it is perfectly possible to use TCAR/TCAV to obtain global explanations in terms of the discovered concepts.\n\n\n## 6. Minor Remarks\n\n\nWe thank the reviewer for these additional remarks. We will make sure that to implement those changes in the manuscript.",
" We would like to thank the reviewer for taking the time to make encouraging comments and constructive criticisms. By following the reviewer's suggestions, we were able to:\n\n1. Propose a principled way to tune the CAR classifiers hyperparameters.\n2. Analyze the quality of TCAR and TCAV explanations on another layer of the Inception-V3 neural network.\n3. Better justify the discreancy between the quality of CAV classifiers and TCAV explanations.\n4. Clarify the utility of our experiment with concept-based feature importance.\n\nWe believe that all of these points make a great addition to the manuscript.\n\n## 1. Hyperparameter Choice\n\nSince our CAR classifiers are kernel-based, they indeed come with extra hyperparameters (e.g. the kernel\nwidth). We would like to emphasize that, to ensure fair comparisons with the CAV classifiers,\nnone of these hyperparameters has been optimized in the experiments from Section 3.1.1 and 3.1.2.\nWe have used the default hyperparameters in the Scikit-Learn implementation of support vector classifiers.\nIn all our experiments, CAR classifiers substantially outperform CAV hyperparameters without having to tune the hyperparameters.\n\nIn the case where the user desires a CAR classifier that generalizes as well as possible, tuning these hyperparameters might be useful. We propose to tune the kernel type, kernel width and error penalty of our CAR classifiers $s^c_{\\kappa}$ for each concept $c \\in [C]$ by using Bayesian optimization\nand a validation concept set:\n\n1. Randomly sample the hyperparameters from an initial prior distribution $\\theta_h \\sim P_{\\mathrm{prior}}$.\n2. Split the concept sets $\\mathcal{P}^c, \\mathcal{N}^c$\ninto training concept sets $\\mathcal{P}^c\\_{\\mathrm{train}}, \\mathcal{N}^c\\_{\\mathrm{train}}$ and validation concept sets $\\mathcal{P}^c_{\\mathrm{val}}, \\mathcal{N}^c_{\\mathrm{val}}$.\n3. For the current value $\\theta_h$ of the hyperparameters, fit a model $s^c_{\\kappa}$ to discriminate\nthe training concept sets $\\mathcal{P}^c_{\\mathrm{train}}, \\mathcal{N}^c_{\\mathrm{train}}$.\n4. Measure the accuracy $\\mathrm{ACC}\\_{\\mathrm{val}} = \\frac{\\sum_{x \\in \\mathcal{P}^c_{\\mathrm{val}}} \\boldsymbol{1}(s^c_{\\kappa}\\circ \\ g(x)=1) \\ + \\\n \\sum_{x \\in \\mathcal{N}^c_{\\mathrm{val}}} \\boldsymbol{1}(s^c_{\\kappa}\\circ \\ g(x)=0)}{|\\mathcal{P}^c_{\\mathrm{val}} \\ \\cup \\ \\mathcal{N}^c_{\\mathrm{val}}|}$.\n5. Update the current hyperparameters $\\theta_h$ based on $\\mathrm{ACC}_{\\mathrm{val}}$\nusing Bayesian optimization (Optuna in our case).\n6. Repeat 3-5 for a predetermined number of trials.\n\nWe applied this process to the CAR accuracy experiment (same setup as in Section 3.1.1 of the main paper) to tune the CAR classifiers for the CUB concepts. Interestingly, we noticed no improvement\nwith respect to the CAR classifiers reported in the main paper: tuned and standard CAR classifier have an average accuracy of $(93 \\pm .2) \\%$ for the penultimate Inception layer.\nThis suggests that the accuracy of CAR classifiers is not heavily dependant on hyperparameters\nin this case. That said, we believe that the above approach to tune the hyperparameters\nof CAR classifiers might be useful in other cases, hence it has been added in Appendix A of the revised manuscript.\n\nWe agree that not all concepts can be captured by CAR classifiers, even after hyperparameter\noptimization (e.g. Figure 4.c from the paper, we see that CAR classifiers don't generalize well on the layer Mixed-5d). 
As we argue in the paper, this is a strong indication that our concept\nsmoothness assumption (Assumption 2.1) is violated. This implies that concepts are\nnot smoothly encoded in the geometry of the model's representation space $\\mathcal{H}$. The user can therefore deduce that the concept is unlikely to be salient to interpret $\\mathcal{H}$. In that sense, the inability to fit CAR classifiers that generalize well is as informative as the ability to fit CAR classifiers that generalize well. Note that this whole reasoning is made more quantitative through statistical hypothesis testing in Section 3.1.1 of our paper.",
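A minimal Optuna sketch of the Bayesian-optimization loop described in steps 1-6 above; the search ranges and trial count are illustrative assumptions:

```python
import optuna
from sklearn.svm import SVC

def tune_car_classifier(H_tr, y_tr, H_val, y_val, n_trials=50):
    """Steps 3-5 of the procedure: refit an SVC per trial on the training
    concept sets and score it on the validation concept sets; Optuna
    handles the Bayesian updates of the hyperparameters theta_h."""
    def objective(trial):
        clf = SVC(
            kernel=trial.suggest_categorical("kernel", ["rbf", "poly", "sigmoid"]),
            gamma=trial.suggest_float("gamma", 1e-4, 1e1, log=True),  # kernel width
            C=trial.suggest_float("C", 1e-2, 1e2, log=True),          # error penalty
        )
        clf.fit(H_tr, y_tr)
        return clf.score(H_val, y_val)  # ACC_val
    study = optuna.create_study(direction="maximize")
    study.optimize(objective, n_trials=n_trials)
    return study.best_params
```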
" ## 2. Layer Selection\n\nWe would like to emphasize our CAR formalism, the user is free to to choose the layer they want to interpret. In many use cases, the user might want to interpret specific layers of the neural network based on their knowledge of the architecture. In our case, we decided to select the layer for which the concept classifiers (for both CAR and CAV) generalize better to unseen examples, as measured in our experiment from Section 3.1.1 from our paper.\n\nFollowing the reviewer's recommendation, we decided to repeat the comparison between TCAV and TCAR from Section 3.1.2 with the layer Mixed-7b of our Inception-V3 classifier. In doing so, we measured the following correlation between the scores and the ground-truth proportion of examples within a class that exhibit the concept:\n\n$$r(\\mathrm{TCAV}, \\mathrm{TrueProp}) = .46 \\hspace{2cm} r(\\mathrm{TCAR}, \\mathrm{TrueProp}) = .71$$\nFor both TCAV and TCAR, these correlations are lower than those obtained in the model's penultimate layer. In this case, it appears that the association between classes and concepts are more meaningfully encoded in the deeper layers of the neural network. We believe that the machine learning community would greatly benefit from the ability to perform this type of analysis for various architectures.\n\n## 3. Discrepancy between CAV and TCAV Accuracy\n\n\nWe would like to thank the reviewer for pointing this out. After double checking our implementation, everything seems consistent and we are confident about the results reported in our paper. We would like to emphasize that the high accuracy of a CAV concept classifier does not guarantee that TCAV score correlates well with the ground-truth association between classes and concepts. This can be understood in the following way:\n\n* A highly accurate CAV classifier occurs when the concept sets are linearly separable in the model's representation space $\\mathcal{H}$. This means that the feature extractor $g : \\mathcal{X} \\rightarrow \\mathcal{H}$ tends to linearly separate examples that exhibit the concept from the ones that don't.\n* A high correlation between the TCAV score and the ground-truth association between classes and concept occurs when the model's prediction for each class is sensitive to the presence of the appropriate concepts. This means that for each class $k \\in [d_Y]$, the label map $l_k : \\mathcal{H} \\rightarrow [0, 1]$ is sensitive to concepts $c \\in [C]$ that are truly relevant to describe this class (e.g. MNIST images of digit 9 are sensitive to the loop concept).\n\nFrom the above discussion, we immediately notice that the two previous situations depend on two orthogonal parts of the model: the accuracy of the CAV classifier depends on the feature extractor $g$ and the correlation of the TCAV score with ground-truth depends on the label map $l$. In that light, these two situations appear independent from each other: it is perfectly possible to have highly accurate CAV classifiers and poor TCAV scores if concepts are well separated in the model's representation space $\\mathcal{H}$ but the model's predictions are not sensitive to the right concepts. We note that this is precisely what occurs in the CUB setting, as we can observe from Figures 5.c in the main paper and Figure 15 in the supplementary material. In these figures, we observe that TCAV suggests non-existent associations, such as one between the class *black crow* and the concept *yellow wing colour*. 
\n\nA possible explanation for the better agreement between the quality of CAR classifiers and TCAR scores is the fact that, unlike TCAV, TCAR scores are not computed by using the sensitivity metric $S^c_k$ defined in Section 2.1 of the paper. As explained in Section 2.2, we use the concept activation regions *directly* to compute TCAR scores. This implies that TCAR scores are computed in the model's representation space $\\mathcal{H}$ *directly* by analyzing how different classes are scattered across the concept clusters. We believe that this different characterization might explain the gap between TCAV and TCAR scores in terms of correlation with the ground-truth. This would suggest that TCAV's sensitivity $S^c_k$ might not be the most appropriate way to detect the association between a class and a concept. We will make sure to add this discussion in the manuscript.",
" ## 4. Significance of Feature Importance Evaluation\n\n\nWe believe that our consistency checks for concept-based feature importance demonstrate two crucial and non-trivial points on various datasets:\n\n1. **Concept-based saliency maps are not generic.** The low correlation between vanilla saliency maps and concept-based saliency maps indicates that the latter are concept-specific. This is consistent with the fact that the features that are salient to identify a concept are not necessarily the same as the ones that are salient to predict the class.\n2. **Concept-based saliency maps are consistent with human intuition.** The correlation between the saliency maps of each pair of concept $(c_1, c_2)$ appears to be important when $c_1$ and $c_2$ can be identified through the same input features (e.g. the *loop* and the *curvature* concepts in MNIST are both identified through pixels in the curved part of the digit). This is a way to confront our concept-based saliency maps with the ground-truth human knowledge, in the same spirit as the assessment of global explanations in Section 3.1.2. We note that the former are more difficult in practice, since no concept-specific ground-truth saliency map is available for the investigated datasets. For this reason, we use the saliency maps correlations between concepts and compare them with the human associations between those concepts. ",
" We would like to thank the reviewer for taking the time to make encouraging comments and constructive criticisms. By following the reviewer's suggestions, we were able to:\n\n1. Extend our analysis beyond the architectures considered in the main paper.\n2. Propose a new model training process to incorporate our insights on concept-based explainability.\n3. Demonstrate that our explanations are robust with respect to adversarial perturbations and background shifts.\n\nWe believe that all of these points make a great addition to the manuscript.\n\n## 1. Analysis with Alternative Architectures\n\n\nAs suggested by the reviewer, we extended our analysis to a ResNet-50 architecture.\nWe fine-tuned the ResNet model on the CUB dataset and reproduced the experiment from Section 3.1.1 of our paper with this new architecture. In particular, we fit a CAR and a CAV classifier on the penultimate layer of the ResNet. We report the accuracy averaged over the $C = 112$\nCUB concepts bellow:\n\n| ResNet Layer | CAR Accuracy (mean $\\pm$ sem) | CAV Accuracy (mean $\\pm$ sem) | \n|:--------|:------------------:|:------------------:|\n| Layer4 | .89 $\\pm$ .01 | .87 $\\pm$ .01 |\n\nAs we can see, CAR classifiers are highly accurate to identify concepts in the penultimate ResNet layer. As in our paper, we observe that CAR classifiers outperform CAV classifiers, although the gap is smaller than for the Inception-V3 neural network. We deduce that our CAR formalism extends beyond the architectures explored in the paper and we hope that CAR will become widely used to interpret anby more architectures.\n\n\n## 2. Increasing Explainability at Training Time\n\n\nImproving neural networks explainability at training time constitutes a very interesting area of research but is beyond the scope of our paper. That said, we believe that our paper indeed contains insights that might be the seed of future developments in neural network training. As an illustration, we consider an important insight from our paper: the fact that the accuracy of concept classifiers seems to increase with the depth of the layer for which we fit a classifier. In our paper, this is mainly reflected in Figure 4. This observation has a crucial consequence: it is not possible to reliably characterize the shallow layers in terms of the concepts we use.\n\nIn order to improve the explainability of those shallow layers, one could leverage the recent developments in contrastive learning. The purpose of this approach would be to separate the concept set $\\mathcal{P}^c$ and $\\mathcal{N}^c$ in the representation space $\\mathcal{H}$ corresponding to a shallow layer of the neural network. A practical way to implement this would be to follow *Chen, T. et al. (2020). A Simple Framework for Contrastive Learning of Visual Representations.* Assume that we want to separate concept positives and negatives in the representation space $\\mathcal{H}$ induced by the shallow feature extractor $g : \\mathcal{X} \\rightarrow \\mathcal{H}$. 
As in Chen et al., one can use a projection head $p : \mathcal{H} \rightarrow \mathcal{Z}$ and enforce the separation of the concept sets through the contrastive loss\n\n$$ \mathcal{L}^c_{\mathrm{cont}} = \sum_{(x_i,x_j) \in (\mathcal{P}^c)^2} -\log \frac{\exp( \tau^{-1} \cdot\cos[p \circ g (x_i), p \circ g (x_j)])}{\sum_{x_k \in (\mathcal{P}^c \cup \mathcal{N}^c) \setminus \{ x_i \}} \exp(\tau^{-1} \cdot\cos[p \circ g (x_i), p \circ g (x_k)])},$$\n\nwhere $\cos(z_1, z_2) \equiv \frac{z_1^{\intercal}z_2}{\| z_1 \|_2 \cdot \| z_2 \|_2}$ and $\tau \in \mathbb{R}^+$ is a temperature parameter. The effect of this loss is to group the concept positive examples from $\mathcal{P}^c$ together and apart from the concept negatives $\mathcal{N}^c$ in the representation space $\mathcal{H}$. To the best of our knowledge, concept-based contrastive learning has not been explored in the literature. We believe that it would constitute an interesting contribution to the field based on the insights from our paper. For this reason, we added this discussion in Appendix H of the revised manuscript.",
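A minimal PyTorch sketch of the concept-contrastive loss above; skipping the degenerate $(x_i, x_i)$ pairs and averaging over pairs are implementation choices not fixed by the formula:

```python
import torch
import torch.nn.functional as F

def concept_contrastive_loss(z_pos, z_neg, tau=0.1):
    """z_pos / z_neg: projected representations p(g(x)) of P^c / N^c."""
    z = F.normalize(torch.cat([z_pos, z_neg]), dim=-1)
    sim = (z @ z.t()) / tau                 # tau^{-1} * cosine similarities
    n_pos = z_pos.shape[0]
    loss = z.new_zeros(())
    for i in range(n_pos):
        # denominator: all x_k in (P^c u N^c) except x_i itself
        denom = torch.logsumexp(torch.cat([sim[i, :i], sim[i, i + 1:]]), dim=0)
        for j in range(n_pos):
            if i != j:                      # skip the (x_i, x_i) pair
                loss = loss - (sim[i, j] - denom)
    return loss / (n_pos * (n_pos - 1))
```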
" ## 3. Sensitivity to Adversarial Attacks\n\nAs suggested by the reviewer, we perform an experiment to evaluate the robustness\nof CAR explanations with respect to adversarial perturbations.\nIn this experiment, we work with the MNIST dataset in the same setting as\nthe experiment from Section 3.1.2 from our paper. We train a CAR concept classifier\nfor each MNIST concept $c \\in [C]$. We use the CAR classifier to output TCAR scores\nrelating the concept $c$ with each class $k \\in [d_Y]$. As in the main paper, since the ground-truth association between concepts and classes is known (e.g. the class corresponding \nto digit 8 will always have the concept loop), we can compute the correlation $r(\\mathrm{TCAR}, \\mathrm{TrueProp})$ between\nour TCAR score and the ground-truth proportion of examples that exhibit the concept.\nIn this experiment, this correlation is evaluated on a test set $\\mathcal{D}\\_{\\mathrm{test}} = \\mathcal{D}\\_{\\mathrm{adv}} \\ \\sqcup \\mathcal{D}\\_{\\mathrm{orig}}$ that contains adversarial\ntest examples $\\mathcal{D}\\_{\\mathrm{adv}}$ and original test examples $\\mathcal{D}\\_{\\mathrm{orig}}$. Each adversarial MNIST image $x\\_{\\mathrm{adv}} \\in \\mathcal{D}\\_{\\mathrm{adv}}$ is constructed by finding a small (w.r.t. the $\\| \\cdot \\|_{\\infty}$ norm) perturbation $\\epsilon \\in \\mathbb{R}^{d_X}$ around an original test image $x \\in \\mathcal{X}$ that maximizes the prediction shift for the black-box $f : \\mathcal{X} \\rightarrow \\mathcal{Y}$:\n\n$$\\epsilon = \\arg \\max_{\\tilde{\\epsilon} \\in \\mathbb{R}^{d_X}} \\mathrm{Cross Entropy}[f(x), f(x + \\tilde{\\epsilon})] \\ s.t. \\ \\| \\tilde{\\epsilon} \\|_{\\infty} < .1$$\n\nThe adversarial image is then defined as $x_{\\mathrm{adv}} \\equiv x + \\epsilon$. We measure the correlation $r(\\mathrm{TCAR}, \\mathrm{TrueProp})$ by varying the proportion $\\frac{|\\mathcal{D}\\_{\\mathrm{adv}}|}{|\\mathcal{D}\\_{\\mathrm{test}}|}$ of adversarial examples in the test set. The results are reported bellow:\n\n| Adversarial % |$r(\\mathrm{TCAR}, \\mathrm{TrueProp})$| \n|----------------:|---------:|\n| 0 | .99 |\n| 5 | .99 |\n| 10 | .99 |\n| 20 | .99 |\n| 50 | .97 |\n| 70 | .96 |\n| 100 | .92 |\n\nWe observe that the TCAR scores keep a high correlation with the true proportion of examples that exhibit the concept even when all the test examples are adversarially perturbed. We conclude that TCAR explanations are robust with respect to adversarial perturbations in this setting.\n\nFor completeness, we have also adapted the background shift robustness experiment in Section 7 from *Koh, P. et al. (2020). Concept Bottleneck Models*. As in our paper, we use CAR to explain the predictions of our Inception-V3 model trained on the original CUB training set. The explanations are made on test images where the background has been replaced. As Koh et al., we use the segmentation of the CUB dataset to isolate the bird on each image. The rest of the image is replaced by a random background sampled from the *Place365* dataset. This results in a test set $\\mathcal{D}_{\\mathrm{test}}$ with a background shift with respect to the training set. By following the approach from Section 3.1.2 of our paper, we measure the correlation $r(\\mathrm{TCAR}, \\mathrm{TrueProp})$ between the TCAR score and the true proportion of examples in the class that exhibit the concept for each $(\\mathrm{class}, \\mathrm{concept})$ pair. 
We measured a correlation of $r(\\mathrm{TCAR}, \\mathrm{TrueProp}) = .82$ in the background-shifted test set. This is close to the correlation for the original test set reported in the main paper, which suggests that CAR explanations are robust with respect to background shifts. Note that this correlation is still better than the one obtained with TCAV on the original test set.",
" The paper proposes a novel concept-based explanation for neural networks. It builds on previously proposed formalism of concept activation vectors. This formalism is built on the idea that the features related to inputs containing a given concept and the features related to inputs that don't are separated by a hyperplane. The main contribution of the paper is to relax the latter: The core idea is that a neural nework latent space can be divided into non-linearly separated clusters. Such a network appears to encode well the presence or absence of the related concepts. The authors also adapt the CAV importance score (TCAV) to the new formalism, and show that these concepts are invariant to latent space isometries if they are based on support vector classifiers with a radial kernel. The authors then show several desirable properties for these explanations, including supporting scientific evaluation of models. The experimental validation is based on multiple datasets from different domains. Strengths: \n* The paper is clearly structured and well written.\n* The relaxation of the linear separability in the latent space is sound, and the implemented concept activation regions seems to capture better the spread of concept-related features in the latent space. \n* The experimental evaluation is well designed and extensive, and shows many desirable properties of the proposed approach. \n\nWeaknesses: \n* The paper considers mostly convolutional networks. It lacks analysis of more state-of-the art architectures like residual networks or transformers. \n\n * Would it be possible to get inspiration from the insights in the paper to improve neural network training, with the purpose of increasing explainability? \n\n* How are the proposed explanations sensitive to adversarial attacks? As indicated in the previous section, it would be interesting to add to the paper an analysis of the sensitivity of the proposed approach to adversarial attacks. ",
" The paper generalized TCAV's linear separable assumption to an additional kernel space separable, which is simply the smoothness theoretically. The linear classifier is replaced by a kernel SVC, and the TCAV score is replaced by a counterpart TCAR score. They then identify the SVC classifier has a higher accuracy, and results in more stable concept explanations. strength\n-- the paper targets an important question and limitation of TCAV\n-- the empirical evaluation is thorough\n\nweakness\n-- the introduction of kernels may lead to much more hyper-parameter choice to use in practice\n-- for very complicated concepts and networks, an additional RBF kernel may be insufficient to make the data linearly separable in practice\n-- the layer selection problem is not dealt with, as TCAV suggested that sometimes earlier layers may be more fruitful even if the accuracy seems lower (for more low-level concepts). In my personal experience, using the mixed-7b layer inception-V3 for TCAV usually produces a much better result than the penultimate layer. Moreover, how does one choose the layer to apply TCAR is not addressed. \n\nnote:TCAV does not work well with the penultimate layer since d c/d activation = W_c (which is independent to the instance), and thus for all instances in the same class the directional derivative to the same concept would be fixed.\n\n---------------------------------------------------------\n\nI see that the advantage of CAR lies also on not using the sensitivity score, which makes sense. Changing my score to 7. -- In the inception experiment, CAV has a pretty close accuracy to CAR (showing that the linear separability is not a huge issue), but the evaluation of TCAV score in table 1 seems completely wrong. The failure of TCAV in MNIST and ECG is understandable since the model seems under-represented, and an additional kernel classifier would help a lot. However, the result in Inception-v3 is not convincing.\n\n-- the evaluation of concept-based feature importance is only a sanity check, how is this useful?\n\n weakness 1 and 2 are not fully covered, but they do cover some limitations.",
" Recently, there has been a lot of focus on opening up the black-box DNNs via explanations. One such technique is Concept Activation Vectors (CAV), which explains the predictions of DNNs through user-specified concepts. One main shortcoming of CAV is that it assumes that examples corresponding to a concept are all mapped in a fixed direction in the DNNs latent feature space, which can be restrictive in practice. In this paper, the authors propose Concept Activation Regions (CAR) based on kernel trick and support vector classification that relaxes this assumption. Strengths:\n- The proposed technique relaxes a fundamental assumption made in CAV, thereby increasing its effectiveness.\n- The explanation generated by the proposed method is invariant under latent space isometrics.\n\nWeaknesses:\n- One of the critical weaknesses of CAR is that since explanations are generated through a Kernel-based technique, it loses the nice interpretations CAV offers, like if one increases the presence of a concept, how does it affect the model predictions. - Would it be possible to replicate the interpretations CAV offers like concept senstivity? An alternate would be to perform test time interventions on the Kernel-based Concept classifier.\n- Is it possible to run CAR with concepts (or latent variables) generated by Autoencoders or VAE that are inherently noisy or abstract? Could CAR be used to analyze the concepts generated by techniques like SENN?\n- How robust are explanations generated by CAR? Is it robust to change in backgrounds of the images?\n- Would CAR work for other domains like NLP?\n- Is it possible to relax the assumption that such a class of techniques requires additional annotation of concepts? As of now, CAR requires a user to specify the positive and negative examples for each concept. \n\nGiven that CAR assumes to have access to the feature extractor of the model, it isn't truly a black-box setup, unlike paper portrays. I would encourage the authors to list the limitations of the proposed approach."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"CIWceAjIOsG",
"VR1w_6n_-j",
"PcgZJkSKQd",
"i3mzRkyWEP0",
"dR7XgsdFH7f",
"Y45Kj6l2rv",
"ZW-GK-V7W2u",
"dR7XgsdFH7f",
"dR7XgsdFH7f",
"dR7XgsdFH7f",
"Y45Kj6l2rv",
"Y45Kj6l2rv",
"Y45Kj6l2rv",
"ZW-GK-V7W2u",
"ZW-GK-V7W2u",
"nips_2022_8AB7AXaLIX5",
"nips_2022_8AB7AXaLIX5",
"nips_2022_8AB7AXaLIX5"
] |
nips_2022_pUPFRSxfACD | ZIN: When and How to Learn Invariance Without Environment Partition? | It is commonplace to encounter heterogeneous data, of which some aspects of the data distribution may vary but the underlying causal mechanisms remain constant. When data are divided into distinct environments according to the heterogeneity, recent invariant learning methods have proposed to learn robust and invariant models using this environment partition. It is hence tempting to utilize the inherent heterogeneity even when environment partition is not provided. Unfortunately, in this work, we show that learning invariant features under this circumstance is fundamentally impossible without further inductive biases or additional information. Then, we propose a framework to jointly learn environment partition and invariant representation, assisted by additional auxiliary information. We derive sufficient and necessary conditions for our framework to provably identify invariant features under a fairly general setting. Experimental results on both synthetic and real-world datasets validate our analysis and demonstrate an improved performance of the proposed framework. Our findings also raise the need for making the role of inductive biases more explicit when learning invariant models without environment partition in future works. Codes are available at https://github.com/linyongver/ZIN_official . | Accept | This paper has been well received by the reviewers - all reviewers are positive, including significant score revisions upwards after the rebuttal. Notable strengths are clarifying when you can/cannot identify environments for invariant learning and proposing sufficient and necessary conditions for the same. Further, some reviewers have expressed a positive opinion of the paper's experiments, which is valuable as well.
To the authors: Please do take into account the reviewers' questions when preparing the camera-ready.
| train | [
"6yR8MQqJosg",
"KvTDupQjSKy",
"l3e478f4PfI",
"xQDBdjdA2I",
"hL_xAI3_O7f",
"Ch8o837tuqV",
"KL7bGF6PuWH",
"DNRB-mo-6Q",
"q6fW52PRhUw",
"-wUvmA28Qij",
"yjja9FTUBR",
"YQc6V0k_ff",
"-y-YuaiNEPJ",
"RzrgE6Wm5Nu"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your clarifications! Then I have no further questions. ",
" Thanks for the responses and congratulations on a nice paper.",
" Thanks for clarifying the questions and taking the suggestions into account!",
" I thank the authors for providing their feedback and addressing all my concerns. \n\nI was already convinced of the quality of the paper. Reading comments by the authors did nothing other than reinforce my assessment. After the rebuttal, I would like to raise my score from 6 to 8.",
" ## Weakness\n\n### Q1 How to find $Z$ in practice. \n\nThank you for the insightful question and we indeed miss this part in our original submission. In the revised version, we add a section (Appendix F) on the causal interpretations of $Z$ with specific causal graphs and examples. We will try to fit this part into the main paper in the future version because it is of vital importance.\nWe briefly summarize the results in Appendix F.\n* **How to satisfy Condition 1**. The path between $Z$ and $Y$ should be $D$-separated by $X_v$. Notably, $Z$ can’t be the parent or the child of $Y$ as shown in Figure 6. Figure 4 also shows a concrete example of the meta information in the image classification tasks together with a illustrating causal graph. The meta information of a image, e.g., the time slot, coordinate and temperate, does not have a direct effect on the target. It has some correlations with some nodes of the causal system. Then we have $H(Y|X_v)=H(Y|X_v, Z)$. This meta information serves as valid $Z$. Figure 5 illustrates more valid choices of $Z$ with causal explanations. \n* **How to satisfy Condition 2**. Condition 2 is hard to check in practice. While $Z$ cannot be the direct parent or the child of $Y$ (otherwise violating Condition 1), $Z$ should be correlated with some nodes in the interested causal graph. Further, we show an additional theoretical result in Appendix F.2: even if Condition 2 is only partially satisfied, we can still discard the spurious features that are distinguishable by the collected $Z$. At the same time, the invariant features are preserved. As we collect more $Z$ satisfying Condition 1, we can discard more spurious features. \n\nIn conclusion, we should try to **find as many $Z$, which satisfies Condition 1 and is not independent of the nodes in the causal graph, as possible**.\n \n**Please refer to Appendix F for the detailed contents**.\nThanks again for the constructive suggestions.\n\n## Questions\n\n### Q1 Why we make Assumption 2.\n\nAssumption 2 is made mainly due to technical concerns. It is *only* used in the proof of Proposition 2 in Section 5.3. Proposition 2 says if there is a spurious feature $X_s^k$ that looks stable in the environments, then a feature mask $\\Phi_{v+k}$ selecting this spurious feature $X_s^k$ and all invariant features $X_v$ will achieves smaller risk than ideal feature mask $\\Phi_{v}$. In the proof, we need to show $\\Phi_{v+k}$ will not induce a penalty because both $X_k$ and $X_v$ look ‘invariant’. \n\nWe have also been thinking about dropping this assumption. Actually we did not find any failure cases of Assumption 2. However, we also find it difficult to provide rigorous and general justification for Assumption 2. So we will keep working on this question and hopefully figure out how to relax it. \n\n### Q2 Regarding adjustment set and confounding bias.\n\nThis is a great question. Indeed, we did not think about the role of auxiliary information in this way, but rather place some conditions (conditions 1 and 2) on the auxiliary information $Z$. Perhaps the very first thing is to introduce the environment in the causal graph or the SCM. A possible way is to put it in structural equation of $X_s$. Intuitively, we need auxiliary information to have some information about the environment, and $Z$ can be a child of the environment in this sense. So $Z$ may not necessarily be the adjustment variable that blocks the paths into environment.\n\nWe have to say that we cannot provide a firm conclusion at present. 
We again greatly appreciate the reviewer’s insightful question and will continue this direction to come up with sufficient conditions for selecting the auxiliary information.",
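Condition 1 ($H(Y|X_v) = H(Y|X_v, Z)$) can be checked on a candidate causal graph with a d-separation query. A minimal sketch with a hypothetical graph, assuming networkx >= 2.8:

```python
import networkx as nx

# Hypothetical graph: Z shares a cause U with the invariant features X_v,
# but has no edge into Y, so every Z--Y path is blocked by X_v.
G = nx.DiGraph([("U", "Xv"), ("U", "Z"), ("Xv", "Y"), ("Y", "Xs")])
print(nx.d_separated(G, {"Z"}, {"Y"}, {"Xv"}))  # True: Condition 1 holds

# A direct parent of Y violates Condition 1, as in the Figure 6 discussion:
G_bad = nx.DiGraph([("Z", "Y"), ("Xv", "Y")])
print(nx.d_separated(G_bad, {"Z"}, {"Y"}, {"Xv"}))  # False
```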
" \n## Q1. Response to ''The author makes a very strong ... resampling-based techniques like LrF''.\n\n### Q1-1. For the implication of Example 1. Whether the resampling-based techniques like LrF can work.\n\nThe counterexample shows that when we observe a joint distribution $P(X_1, X_2, Y)$, there may exist two possible SCMs governing the data generation process. In the first SCM, $X_1$ is invariant and $X_2$ is spurious. In the second, $X_1$ is spurious. There is only one true SCM but we cannot know which one from the joint distribution. Notice that the underlying SCM determines the desired invariance, and in fact the two processes can generate very different test distributions (depending on what is the spurious feature). Indeed, our impossibility result for this setting is in the same spirit with the identifiability issue in causal discovery and ICA, where one has to introduce additional assumptions/conditions to have a meaningful problem.\n\nConcretely, consider that a deterministic algorithm $A$ is applied to the joint distribution (mixed dataset). It would output some invariant features. WLOG, assume that $A(P(X_1, X_2, Y)) = X_1$. Then $A$ will also return $X_1$ as the invariant feature in the second process because it induces exactly the same joint distribution. Since $X_1$ is the spurious feature in the second case, algorithm $A$ relies on $X_1$ and would fail in the testing distribution where $X_1$ can change dramatically. One can also check when $A$ deterministically outputs $X_2$ or both, or $A$ is a randomized algorithm.\n\nDoes the method “LrF” refer to “Learning from Failure” [1]? If so, we think that this method also fails in the above example, because it would select the same \"failure\" samples and will deterministically some feature as invariant feature depending on its inductive bias. \n(We also discuss this problem further in the next sub question).\n\n[1] Learning from Failure: Training Debiased Classifier from Biased Classifier\n\n### Q1-2. The data generation process of the counterexample. Connection to the applications. More discussion. More experiments.\n\nWe add a section in Appendix D in the revised manuscript to provide more discussions and results on the impossiblity theory. We briefly summarize it as follows:\n\nIn our example in Section 4, we consider binary features and label. This is inspired by the popular classification tasks in the literature of IRM, e.g., CMNIST, CifarMnist, ColoredObject, Waterbirds and CelebA. Take CMNIST for example. The label is 0 or 1. The invariant semantic feature, $X_1$, is the semantic feature ‘0’ or ‘1’ of the digit shape. The spurious feature $X_2$ (color) is also binary: either red or green. So we also denote the binary invariant and spurious features as 0/1 in the following discussion.\n\nIn Appendix D, we show that the construction of CMNIST is analogous to first data generation process in our Example 1 in Section 4. Moreover, we construct a new dataset MCOLOR (short for MnistColor) that is analogous to the second data generation process in Example 1. In MCOLOR, the digit shape is the spurious feature. We show that CMNIST and MCOLOR have the same joint training distribution of color, digit shape and label. Our task is to learn invariant features from the joint distribution. Due to EIIL’s inductive bias, it will deterministically rely on the digit shape as the invariant feature, resulting in poor performance in MCOLOR. 
However, since we have no prior knowledge of the invariant feature or the data generation process, the TRUE invariant feature could be color, as in the MCOLOR dataset. LrF will also rely on either color or digit as the invariant feature, similar to EIIL. **Please see the detailed empirical results in Appendix D.** (We add the results of EIIL in Appendix D and we are still working on the experiments of LrF.)\n\n\n### Q1-3. The SCM for the general impossibility theoretical result.\n\nThanks for this comment. If we understand correctly (please correct us if we have any misunderstanding about ‘specific SCM’), in Theorem 1 we do not assume any specific form of SCMs: each structural equation has a noise (or exogenous) term, and the causal mechanism can be either linear or non-linear. Specifically, Theorem 1 can work with the SCM of Equation (1) in our paper, and Equation (1) is already more general than Assumption 1 of [1] because $g_v(X_v, \epsilon_v)$ includes $g_v(X_v)+\epsilon_v$ and [1] uses a linear function for $g_v(\cdot)$. \n\n[1] Invariance Principle Meets Information Bottleneck for Out-of-Distribution Generalization.\n",
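As referenced above, here is a minimal, runnable Python sketch of two chain SCMs that induce exactly the same joint $P(X_1, X_2, Y)$ while disagreeing on which feature is invariant. The specific noise levels (0.25 and 0.1) are illustrative assumptions and need not match the exact numbers of Example 1:

```python
# Two SCMs, identical joint P(X1, X2, Y), different invariant features.
from itertools import product

def joint_chain(first_noise, second_noise, root_is_x1):
    """Enumerate P(x1, x2, y) for a chain root -> Y -> leaf with XOR noise."""
    table = {}
    for root, n1, n2 in product([0, 1], repeat=3):
        y = root ^ n1
        leaf = y ^ n2
        p = 0.5  # root ~ Bernoulli(0.5)
        p *= first_noise if n1 else (1 - first_noise)
        p *= second_noise if n2 else (1 - second_noise)
        x1, x2 = (root, leaf) if root_is_x1 else (leaf, root)
        table[(x1, x2, y)] = table.get((x1, x2, y), 0.0) + p
    return table

# SCM 1: X1 -> Y -> X2  (X1 invariant, X2 spurious)
p1 = joint_chain(0.25, 0.1, root_is_x1=True)
# SCM 2: X2 -> Y -> X1  (X2 invariant, X1 spurious)
p2 = joint_chain(0.1, 0.25, root_is_x1=False)

assert all(abs(p1[k] - p2[k]) < 1e-12 for k in p1)  # identical joints
```

Any deterministic algorithm fed only this joint must output the same answer in both cases, and is therefore wrong in at least one of them.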
" ## Q2. Response to \"The Methodology section’s structure could be enhanced.\"\n\nWe have reformulated our methodology section in the revised version according to your advice. Specifically, we introduce the involved variables and terms before presenting the formulation. We also first introduce the invariance penalty before the joint environmental inference framework. See Section 5.1 (marked in blue) for details.\n\nSuppose that the environments $(1,...,k,...,K)$ have been given according to a fixed $\\rho(\\cdot)$. Recall that IRM [1] learns an invariant representation, upon which there is a classifier that is simultaneously optimal in all environments. To measure the optimality of a classifier $\\omega$ in environment $k$, we can fit an environment dependent classifier $\\omega_k$ on the data from environment $k$. If $\\omega_k$ achieves a smaller loss than $\\omega$, it means $\\omega$ is not optimal in this environment. We can further train a set of environment dependent classifiers $\\\\{\\omega_k\\\\}_{k=1}^K$, to measure whether $\\omega$ is simultaneously optimal in all environments.\nThen we search for environment partition that induces the maximum invariance penalty. Hopefully, this kind of environments can help us to distinguish between spurious and invariant features.\n\n[1] Invariant Risk Minimization\n\n## Q3 Additional experimental results of GroupDRO on House Price and CelebA\n\nWe have added GroupDRO for the House Price and CelebA experiments, where we assume the underlying environments are known, similar to IRM. The results are included in Tables 2 and 3. We observe that GroupDRO is inferior to IRM (with environment indexes) and ZIN (without environment indexes).\n\n## Q4 The hyperparamter K on real data (CelebA). The distribution in the two environments of CelebA.\n\nWe conduct experiments of different $K$ (K=2,3,4,6,8,10) on CelebA and add more results in Appendix E.2. As Figure 2 (Right) shows, the results are quite stable when we change K between 2-8.\n\nFurthermore, we visualize the distributions of the spurious feature in CelebA in the two environments (K=2) in Appendix E.3. Specifically, we calculate the spurious correlation as the percentage of samples whose target (Smiling/Not Smiling) aligns with its gender (Female/Male). The results are shown in Figure 3 of Appendix E.3. We can see the spurious correlation differs greatly in the learned environments at the end of training. This means that ZIN can generate environments among which the spurious feature exhibits non-invariance and further we can apply IRM to learn the invariant features. \n\n## Q5. For the regression task, why the built year is auxiliary information for ZIN?\n\nThis is a good point. Typically, the built year is a cause of the price. Here our task was to predict the *ranking* of the house price in the same built year. Thus, the prices of houses with the same built year are normalized, and in this way, the built year is no longer a cause of the target. We forgot to mention this task explicitly and have added a description in the revision. Again we appreciate the reviewer's effort. \n",
" ## Weakness\n\n### Q1 More discussion/demonstration on Z\n\nThank you for the insightful question and we indeed miss this part in our original submission. In the revised version, we add a section (Appendix F) on the causal interpretations of $Z$ with specific causal graphs and examples. We will try to fit this part into the main paper in the future version because it is of vital importance.\n\nWe briefly summarize the results in Appendix F.\n\n* **How to satisfy Condition 1**. The path between $Z$ and $Y$ should be $d$-separated by $X_v$. Notably, $Z$ can’t be the parent or the child of $Y$ as shown in Figure 6. Figure 4 also shows a concrete example of the meta information in the image classification tasks together with a illustrating causal graph. The meta information of a image, e.g., the time slot, coordinate and temperate, does not have a direct effect on the target. It has some correlations with some nodes of the causal system. Then we have $H(Y|X_v)=H(Y|X_v, Z)$. This meta information serves as valid $Z$. Figure 5 illustrates more valid choices of $Z$ with causal explanations. \n\n* **How informative are Z** (**How to satisfy Condition 2**). Condition 2 is hard to check in practice. While $Z$ cannot be the direct parent or the child of $Y$ (otherwise violating Condition 1), $Z$ should be correlated with some nodes in the interested causal graph. Further, we show an additional theoretical result in Appendix F.2: even if Condition 2 is only partially satisfied, we can still discard the spurious features that are distinguishable by the collected $Z$. At the same time, the invariant features are preserved. As we collect more $Z$ satisfying Condition 1, we can discard more spurious features. \n\nIn conclusion, we should try to **find as many $Z$, which satisfies Condition 1 and is not independent of the nodes in the causal graph, as possible**. \n\nPlease refer to Appendix F for the detailed contents.\nThanks again for the constructive suggestions.\n\n## Question\n\n### Q1 The Z in the synthetic process in Section 7.1\n\nThank you for pointing out the confusion in this part. We use the time index $t$ as the auxiliary information $Z$. We have added this in Section 7.1.",
" ## Weakness\n### Q1-1 The results are not particularly surprising for those familiar with causality, e.g., Example 1.\n\nWe agree that the impossibility part in our paper is indeed the identifiability issue, a very fundamental and important concept. This issue is well-known to those who are familiar with causal inference or ICA, like the reviewer, and has been discussed in recent ML applications as well, e.g., [1][2]. However, it seems that for this new and practically meaningful setting (invariance learning without environment indexes, which has already attracted some attention in existing works) considered in our paper, the identifibility issue is omitted once again. \n\nWe believe that a contribution of ours is to formulate this setting in the causal language and then adapt the causality techniques accordingly, as Reviewer 9XzW wrote “I’ve long thought that environment inference was asking too much so I really appreciate a clear counter example that shows where it can fail ... and the auxiliary information perspective is a nice way around the negative examples. ” So our result again points out an important issue and we do hope the identifibility issue can draw even more attention in the ML community. \n\n[1] Challenging common assumptions in the unsupervised learning of disentangled representations. ICML 2018.\n\n[2] Variational autoencoders and nonlinear ICA: a unifying framework. AISTATS, 2020.\n\n### Q1-2 Some of the results seem tautological.\n\nThanks for the insightful comment. Violating Condition 2 indeed leads to the conclusion of Proposition 2 that some spurious features can be selected by the feature mask. It may seem straightforward for one who is familiar with the proof of Theorem 2 to reach the conclusion of Proposition 2. We believe that adding Proposition 2 may help readers who are not familiar with the feature selection proof, to see the necessity of Condition 2. \n\nWe will try to make this part more concise and reduce the duplicated contents, or put some of them into the appendix. Thanks again for pointing this out.\n\n## Questions\n\n### Q2-1 Major questions : - What is the intuition behind Assumption 3?\n\nThis assumption aims to ensure the penalty not degenerate when $X_v$ is too informative of $Y$. Suppose $X_v$ is the parent of $Y$ and $X_s$ is the child of $Y$. If $X_v$ is not fully informative of $Y$, then $H(Y|X_v)>0$. So if $H(Y|X_s, \\rho(Z)) > H(Y|X_s)$, then we typically have $H(Y|X_s, X_v, \\rho(Z)) > H(Y|X_s, X_v)$. The following two cases are troublesome and are ruled out by Assumption 3:\n* If $X_v$ is fully informative of $Y$, i.e., $H(Y|X_v)=0$, then it will lead to $H(Y|Xs, Xv, \\rho(Z)) = H(Y|X_s, X_v) = 0$. In this case, the penalty will be 0 no matter what the spurious feature $X_s$ is.\n* If $X_v$ is highly informative of $Y$, i.e., $H(Y|X_v)=10^{-6}$, then $H(Y|X_s,X_v,\\rho(Z)) <= 10^{-6}$ and $H(Y|X_s,X_v)<=10^{-6}$. It leads to the penalty $H(Y|X_v, X_s) - H(Y|X_s,X_v,\\rho(Z)) <= 10^{-6}$. Then the penalty is too small.\nAssumption 3 excludes the above cases where $X_v$ is too informative and the penalty vanishes. \n\n### Q2-2 Major Questions: questions on Corollary 1 (Q2). ''I did not understand Corollary 1...same one-hot vector?''\n\n- Yes, it is '(a) or (b) or (c)'. \n- By 'Index', we mean that each sample has a distinct number associated with it. Taking a dataset with N data points for example. We can assign in $k=1,...,N$ to the samples. Then $h(Index(X, y))$ assigns individual weights to each sample in the training dataset. 
We will add an explicit definition. \n- The injective condition simply eases the proof; the injective requirement is not necessary. For example, the results still hold if $h(Y)$ (or $h(Index(X, Y))$) contains additional information about $Y$ conditional on $X_v$. Then we have $H(Y|X_v, \rho(Z)) = H(Y|X_v, h(Y)) < H(Y|X_v)$, which violates Condition 1.\n\nThanks for this insightful comment, which helps improve our work.\n\n### Q3-1 Suggestions (Q1-Q2): Describe new quantities, define the invariance penalty.\n\nThanks a lot for the suggestions. We have modified this part accordingly in Section 5.1 in the revised manuscript, marked in blue. \n\n### Q3-2 Suggestions: The relationship between our Theorem 3 and the results in IRM.\n\nWe agree that our Theorem 3 is similar to the results of IRM. We provide Theorem 3 to illustrate that our framework is applicable to the linear feature learning setting. We will add more discussion on this result to enhance readability and will consider renaming it a ‘Proposition’, as we do not have many technical contributions here. \n\n### Q4-1 Minor Points. Typos.\n\nThanks a lot for pointing out these typos. We have corrected them accordingly.\n\n",
" We thank all the reviewers for their time. We have uploaded a revised version of our paper, following the suggestions/comments from all the reviewers. Some major changes are:\n\n- We add a detailed discussion on how to choose the auxiliary information in Appendix F, together with some causal interpretations of $Z$ and an additional theoretical result. \n- We add more discussions on Example 1 in Appendix D. We also construct a dataset and provide additional empirical results to illustrate the impossibility example.\n- We rephrase the methodology part in Section 5.1 for a better readability.\n- We have also reported more experimental results, including GroupDRO as baselines, different choices of $K$ on CelebA data, visualizing the inferred environments in the CelebA experiment.\n\nThanks a lot for the effort from all the reviewers and program committee.\n",
" The authors propose an approach to learning an invariant representation, which simultaneously learns the representation along with an environment partition over samples based on auxiliary measurements, such as time indices or location metadata. They provide sufficient conditions for their proposed method to identify invariant features in both the feature selection and the linear feature learning setting. They also describe necessary conditions for the identifiability of invariant features in the feature selection setting. Finally, experiments demonstrate that the proposed method is effective at learning invariant features, almost matching the performance of IRM which has access to a ground-truth partition of the variables. **Strengths**\n+ The paper is well-written in most aspects: assumptions are cleanly stated and intuitively explained after they are introduced. The feature selection example provides an intuitive illustration of the ideas.\n+ The paper is “significant” in an uncommon way. In particular, it seems that it will play an important regularizing role to correct the confusion present in some other works on invariant learning, which have sought to learn an environment partition without metadata. For instance, the authors correctly point out that EIIL can only be successful when the spurious features are *more* correlated with the label than the invariant features are. The presentation of necessary conditions in Propositions 1 and 2 is especially helpful for streamlining future literature on this topic.\n+ The experiments are well-done, with the proposed method performing near the “oracle” baseline of IRM across a variety of different domains (synthetic, house price prediction, a CelebA experiment, and a land cover prediction task). \n\n**Weaknesses**\n+ Most of the results are not particularly surprising for those familiar with causality. For instance, Example 1 is a standard example of Markov equivalence.\n+ Some of the results seem tautological, e.g., violating Condition 2 seems equivalent with the conclusion of Proposition 2 that a spurious feature will be included in the feature selection.\n **Major questions**\n+ What is the intuition behind Assumption 3? In particular if we think of $X_v$ as the parents of $Y$ and $X_s$ as the children of $Y$?\n+ I did not understand Corollary 1. First, I assume it is meant to be (a) _or_ (b) _or_ (c), not _and_. Second, what does “Index” denote? Why is $h$ supposed to be injective - can’t it map the same inputs to the same one-hot vector?\n\n**Suggestions to help reader understanding**\n+ It helps to describe new quantities directly after they are introduced. For example, after Equation (3), mention that $\\rho$ softly partitions environments before introducing the other definitions, mention that $\\mathcal{R}_{\\rho^{(k)}}(\\omega, \\Phi)$ is the loss of $f_w$ in environment $k$, etc.\n+ It may help to separately define the invariance penalty, including the max term. This emphasizes that the max term is only involved in this term, and directly highlights what is being proposed.\n+ It would help to highlight the differences between Theorem 3 and the results for IRM, since they are so similar. 
It appears simply that you allow the environment partition to be learned from auxiliary information instead of being given, are there any other significant differences?\n\nMinor points\n+ Assumption 3 is unclear: you say adding another feature does not make the penalty diminish, but in the second sentence, it seems that it is allowed to become smaller, just that it is not allowed to vanish to zero.\n+ After Equation (3), you use $w$ to subscript $f$ instead of $\\omega$\n+ Assumption 4: any “distinct” features, not “distant”\n Limitations and potential negative impacts are adequately addressed.",
" This paper theoretically proves that it's impossible to infer the environments purely from the heterogeneous data; consequently, additional information is definitely needed. It proposes a framework that jointly learns environment partitions and invariant representations with the additional information. Experimental results demonstrate the effectiveness of this method. Strengths: \n\n1. This paper theoretically proves that it's impossible to identify the invariant features purely from heterogeneous data, which is meaningful. The toy example is intuitive and easy to understand. \n\n2. This paper proposes a framework to jointly learn environment partition and invariant representation, which is simple but equipped with a theoretical guarantee, which is also meaningful. \n\n3. This paper is well-written and easy to follow. \n\nWeaknesses:\n\n1. I haven't seen the guarantee to demonstrate how informative the auxiliary information $Z$ should be to identify the invariant features. Looking through Assumptions 1-4 and Conditions 1-2, there seems to be no discussion on $Z$, which should be included in the paper. \n 1. What is the auxiliary information $Z$ in the experiment synthetic dataset shown in Section 7.1? There is no $Z$ in the synthetic process. I haven't seen any obvious limitations except for the one mentioned in the last part of the paper. ",
" This work explores the feasibility of utilizing cheaply available auxiliary features to uncover latent environment partition and help to train an invariant model. The proposed method, namely ZIN, is a min-max game. Specifically, the learned environment partition aims to split samples into groups with different distributions by maximizing the performance gap between environments. The feature extractor and the classifier aim to learn causally invariant features by minimizing such performance gaps. The proposed model is significant when ground environment labels are costly to obtain while additional information that correlates with environments is easy to acquire. From a personal perspective, the key strength of this paper is its training of an end-to-end model with additional auxiliary information to address environment inference. This idea, for me, is novel and interesting. \n\nHowever, I still have some concerns about the content of this paper. \n\n1. The author makes a very strong conclusion - without inductive biases or additional information, environment inference is impossible, but neither the example nor the theorem convinced me. Does it contest the theoretical viability of resampling-based techniques like LrF?\n\n2. The Methodology section's structure could be enhanced.\n\n3. Some important baselines, such as \"Group DRO\" are missing.\n\n4.Important case studies are missing - including the visualization of environments and the effect of K on real data.\n\nThe following part has more information.\n 1. According to the author, it is theoretically impossible to learn invariant features from heterogeneous data without environment indexes. A counter-example and a general theoretical proof are provided by the author.\na). For counter-example, two examples are symmetric. In other words, the causal graph cannot be identified given the data. Both X1 and X2 can be treated as either invariant features or spurious features. It is not indistinguishable, but more like both are right. Additionally, the data generation process is impractical. It is not a reasonable example, more like a math game.\nb). For the general theoretical result, the proof is under the specific SCM (fully informative invariant features). However, there are other widely used assumptions [1]. Personally, I don't think the timing is ripe to come to such a firm conclusion when a theoretical proof under only one specific assumption is proposed.\n\n2. More information and discussion can be proposed for the methods section. Why the invariance penalty is effective? Why are different classifiers being used for various estimated environments?\n\n3. Only IRM, used as an Oracle baseline, has been compared in the experiments. What about a different class of generalization-enhancing techniques, like group DRO?\n\n4. Figure 1 shows the correlation between inferred environments and true environments in synthetic data, which is important. Also, Figure 2 demonstrates the effect of K - the number of inferred environments, which is crucial for ZIN. Since K, as a hyperparameter, is pre-specified in ZIN. Then how about the situation in real data? Visualization of the environment and the impact of K on real data is essential to demonstrating the efficacy of ZIN. 
Could you provide, for instance, the distribution of two environments for gender in CelebA?\n\nA small question: It seems that the additional auxiliary variables should have a strong correlation with spurious variables, so that the distribution of spurious variables is distinct in different environments. And the classifier can further remove the dependency. For the regression dataset of house sales price, why the built year is auxiliary information for ZIN? I think it is a cause for predicting the price. \n\n\n\n[1] Invariance Principle Meets Information Bottleneck for Out-of-Distribution Generalization.\n\n\n\nPost rebuttal comment \n---------------------------------------------------\nI thank the authors for providing their feedback and addressing all my concerns.\n\nI was already convinced of the quality of the paper. Reading comments by the authors did nothing other than reinforce my assessment. After the rebuttal, I would like to raise my score from 6 to 8. There is no negative societal impact of this work.",
" There is a recent line of work that build on methods that learn invariant representations from multiple environments [Peters et al 2016, Arjovsky et al 2019] by attempting to infer the environment label from data. This paper presents a simple counterexample that shows that environment inference is not possible in general---essentially they construct examples where the joint is consistent with two different SCMs, each of which have a different invariant feature; and hence any environment inference procedure that depended only on the joint distribution would be wrong in at least one of the SCMs---and instead argues for environment inference from auxiliary variables. They complement their negative results with theory that shows that auxiliary variables are sufficient for environment inference in the linear and feature selection settings, and give nice experimental results on both simulated and real data. This review is going to skew short because I really liked this paper: I've long thought that environment inference was asking too much so I really appreciate a clear counter example that shows where it can fail (of course, that still leaves open the question of sufficient conditions for env inference to succeed without auxiliary info; but that's a separate question), and the auxiliary information perspective is a nice way around the negative examples. I also appreciated that the empirical section focused on more realistic datasets. \n\nWeaknesses:\n* The biggest weakest for me was that it doesn't seem obvious how one should choose an auxiliary variable in practice. There are a nice collection of examples in the experimental section which partially addresses this, but I think it could have been strengthened by including a section where you discuss the process by which you selected the auxiliary variables. For each of the datasets, you could explain what variables are available, why the selected subset make sense as auxiliary variables, and under what conditions you'd expect environment inference to fail. 1. Why is assumption 2 necessary? If you weren't making it explicitly, I would have thought it was just a consequence of your definition of invariance (line 116 / 117). What is it ruling out?\n\n2. Related to my weakness comment - many of the selected auxiliary variables seem very similar to an adjustment set that you would control for in order to block paths to the environment. Is there an explicit connection here? Can we interpret the method as an invariant representation learning alternative to adjustment? And if so, does ZIN fail in the presence of unblocked confounding? Yes"
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
8,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4
] | [
"DNRB-mo-6Q",
"hL_xAI3_O7f",
"q6fW52PRhUw",
"KL7bGF6PuWH",
"RzrgE6Wm5Nu",
"-y-YuaiNEPJ",
"-y-YuaiNEPJ",
"YQc6V0k_ff",
"yjja9FTUBR",
"nips_2022_pUPFRSxfACD",
"nips_2022_pUPFRSxfACD",
"nips_2022_pUPFRSxfACD",
"nips_2022_pUPFRSxfACD",
"nips_2022_pUPFRSxfACD"
] |
nips_2022_NXHXoYMLIG | EfficientFormer: Vision Transformers at MobileNet Speed | Vision Transformers (ViT) have shown rapid progress in computer vision tasks, achieving promising results on various benchmarks.
However, due to the massive number of parameters and model design, e.g., attention mechanism, ViT-based models are generally several times slower than lightweight convolutional networks. Therefore, the deployment of ViT for real-time applications is particularly challenging, especially on resource-constrained hardware such as mobile devices. Recent efforts try to reduce the computation complexity of ViT through network architecture search or hybrid design with MobileNet blocks, yet the inference speed is still unsatisfactory. This leads to an important question: can transformers run as fast as MobileNet while obtaining high performance? To answer this, we first revisit the network architecture and operators used in ViT-based models and identify inefficient designs. Then we introduce a dimension-consistent pure transformer (without MobileNet blocks) as a design paradigm. Finally, we perform latency-driven slimming to get a series of final models dubbed EfficientFormer. Extensive experiments show the superiority of EfficientFormer in performance and speed on mobile devices. Our fastest model, EfficientFormer-L1, achieves $79.2\%$ top-1 accuracy on ImageNet-1K with only $1.6$ ms inference latency on iPhone 12 (compiled with CoreML), which runs as fast as MobileNetV2$\times 1.4$ ($1.6$ ms, $74.7\%$ top-1), and our largest model, EfficientFormer-L7, obtains $83.3\%$ accuracy with only $7.0$ ms latency. Our work proves that properly designed transformers can reach extremely low latency on mobile devices while maintaining high performance. | Accept | This work proposes a purely transformer-based vision model for mobile vision purposes.
This proposition is somewhat surprising, since transformers did not excel at low-latency inference on resource-constrained hardware, especially compared to convolutional networks.
This is achieved by using a clever design that allows for reshape operations without actually copying data, as well as new techniques for latency-optimized network pruning.
While the methods themselves are very technical and engineering-oriented, the overall result, a purely transformer-based low-latency, high-quality vision network, is of general interest and is worth being shared with the wider community. Therefore I propose that this paper be accepted for NeurIPS 2022.
| train | [
"HeQ0Ex8TgNS",
"p0d3toOs0D7",
"MLgu9DeoG6e",
"3L6_A7_V_pf",
"ELPORpDi5B",
"NY-mmtZWnLk",
"XXJgNO9NHO",
"FNpadAD9sgK7",
"PYJTD_al9fcF",
"-FFzfBJtr7v",
"DbrtZSU5mRZ",
"UqYItUwc-OI",
"CpBcoV_LZFZ",
"MWwxIl5vHOG",
"wgz-vCdFQCG"
] | [
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer mbmg,\n\nThanks again for your time and reviewing efforts to help improve our work! We appreciate your positive rating and insightful comments. \n\nAs a kind reminder, we provide suggested results and comparisons in the authors' response, including the demonstration of the advantageous performance of EfficientViT on other hardware and compilers (Nvidia A100 with TensorRT, iPhone CPU with CoreML, and Android device with NNAPI), and ablations on hardware utilization and the latency-driven search algorithm. We hope our responses have addressed your concerns. \n\nBest,\n\nAuthors\n",
" Dear Reviewer RfgL,\n\nThanks again for your time and reviewing efforts to help improve our work! We appreciate your positive rating and insightful comments. \n\nAs a kind reminder, we provide suggested results and comparisons in the authors' response, including the generalization of dimension-consistent design on other platforms, and demonstrations of the advantageous performance of EfficientViT on other hardware and compilers (Nvidia A100 with TensorRT, iPhone CPU with CoreML, and Android device with NNAPI).\nWe hope our responses have addressed your concerns. \n\nBest,\n\nAuthors\n",
" Dear Reviewer X2kc,\n\nThank you so much for checking our responses and raising the score. It is our great pleasure to know our efforts have helped address your concerns!\n\nWe appreciate your time and reviewing efforts to help improve our work. If you still have questions or concerns, we would sincerely like to know and will make the best of our efforts to resolve them within the open discussion period. \n\nBest,\n\nAuthors",
" Thanks for your rebuttal.\n\nI like your rebuttal with your experiments and explanations. I will change the final rating to 5.",
" Dear Reviewer X2kc,\n\nWe appreciate your time and reviewing efforts to help improve our work. Thanks!\n\nWe follow your initial suggestions to provide additional results, such as the comparison with Mobile-Former and CSwin, to clarify the advantage of our model, especially on mobile devices. \nWe also provide the differences between our work and others. We hope our response can help further demonstrate that our approach is crucial for designing mobile-friendly transformer architectures. \n \nAs the deadline for the author-reviewer discussion is approaching, we would sincerely appreciate it if you could kindly let us know whether our response addressed your concerns, and please let us know if you have further questions. It will be our great pleasure if you would consider updating your review or score.\n\nBest,\n\nAuthors\n",
" **We thank the reviewer for the positive feedback and valuable suggestions. We appreciate that the reviewer acknowledges our work introduces novel improvement for latency vs. accuracy trade-off; proposes technically solid EfficientViT design space and general latency analysis workflow; provides comprehensive insights on optimizing vision transformers; achieves meaning latency for practitioners; and the paper is clearly written and well organized. In the following, we provide results on transferring the design space to other hardware and compilers (Q1) and conduct an ablation study on the hardware utilization and searching algorithm (Q2).**",
" **Q1. Transfer the insights and design space to other hardware and compilers.**\n\nWe thank the reviewer for the suggestion. Here we provide additional results by deploying our and other models on different hardware and compilers. We show the average latency of over 1,000 runs for the following hardware and compilers.\n- Nvidia A100 GPU with TensorRT. We run the latency analysis on the Nvidia A100 GPU [a] with batch size 64. The Pytorch models are saved into the ONNX format [b] and compiled with TensorRT [c]. We report two latency results from TensorRT in the following table (Table E). One is the computing time on GPU (TRT-A100-GPU), and the other is the total walltime that includes the time for data transfer (TRT-A100-Total). We use the latest Nvidia software environment for the experiments [d]. \n- iPhone CPU with CoreMLTools. We benchmark the latency for models by only using the CPU in the iPhone 12. The models are deployed by CoreMLTools. \n- Google Pixel 6 with NNAPI. We report the model latency on android devices. We utilize the Google Pixel 6 with NNAPI [e] for model compiling. Please note that NNAPI does not well support the GeLU, so we replace the GeLU with HardSwish in all models that include GeLU for a fair comparison. Models are converted into TensorFlow Lite format and deployed using NNAPI. Due to the compatibility issue of NNAPI, many converted models can not successfully run on Pixel 6. Therefore, we only report the latency for the models that can. We leave the support for more baseline models on Google Pixel as future work.\n\nThe following tables (Table E and Table F) report the latency analysis on the Nvidia A100 GPU with TensorRT, iPhone CPU with CoreMLTools, and Pixel 6 with NNAPI for the models trained on ImageNet-1K for the classification task. We can see our model still achieves decent latency vs. accuracy trade-off improvement on different hardware and compilers. \n\nFor example, compared with the CNN models, EfficientViT-L1 runs faster (38% faster on Nvidia A100 GPU Computing and 21% faster on iPhone CPU) than EfficientNet-B0 while achieving 2.1% higher top-1 accuracy. For the models with high performance (>83% top-1), EfficientFormer-L7 runs much faster (4.6$\\times$ faster on Nvidia A100 GPU Computing and 3.8$\\times$ faster on iPhone CPU) than EfficientNet-B5. \n\nCompared to ViTs and their variants, EfficientViT-L1 has 4.4% higher top-1 accuracy than MobileViT-XS and runs much faster across different hardware and compilers (1.9$\\times$ faster on Nvidia A100 GPU Computing, 2.3$\\times$ faster on iPhone CPU, and 10.4$\\times$ faster on Pixel 6), and has 4.7% higher accuracy than DeiT-T while being 8.3$\\times$ faster on Pixel 6. Also, EfficientViT-L3 achieves 1% higher top-1 accuracy than PoolFormer-S36, while being 3$\\times$ faster on Nvidia A100 GPU and 2.8$\\times$ faster on iPhone CPU. The results on different hardware and compilers demonstrate the advantageous performance of our models.\n\n>**Table E. Comparison results on ImgeNet-1K. The latency (ms) is measured on the Nvidia A100 GPU with TensorRT (TRT-A100-GPU and TRT-A100-Total) and iPhone 12 CPU with CoreMLTools. 
‘/’ denotes that the model is not well supported by the hardware and compiler.**\n| Model | Train epoch | Top-1 | TRT-A100-GPU(ms) | TRT-A100-Total (ms) | iPhone CPU (ms) |\n|:---:|:---:|:---:|:---:|:---:|:---:|\n| **EfficientViT-L1** | **300** | **79.2** | **6.17** | **9.33** | **11.5** |\n| **EfficientViT-L1** | **450** | **79.9** | **6.17** | **9.33** | **11.5** |\n| **EfficientViT-L3** | **300** | **82.4** | **13.94** | **17.10** | **28.2** |\n| **EfficientViT-L7** | **300** | **83.3** | **30.67** | **33.83** | **67.7** |\n| MobileNetV2 | 300 | 71.9 | 4.97 | 8.13 | 8.0 |\n| MobileNetV2 x 1.4 | 300 | 74.7 | 7.32 | 10.47 | 10.7 |\n| EfficientNet-B0 | 350 | 77.1 | 9.99 | 13.15 | 14.5 |\n| EfficientNet-B3 | 350 | 81.6 | 35.03 | 40.67 | 52.6 |\n| EfficientNet-B5 | 350 | 83.6 | 141.00 | 153.97 | 258.8 |\n| ResNet50 | 300 | 78.5 | 9.02 | 12.17 | 29.4 |\n| ResMLP-S24 | 300 | 79.4 | 17.35 | 20.51 | 40.2 |\n| DeiT-T | 300 | 74.5 | 7.08 | 10.24 | 16.7 |\n| DeiT-Small | 300 | 81.2 | 15.45 | 18.60 | 41.0 |\n| PVT-small | 300 | 79.8 | 23.75 | 26.91 | 89.5 |\n| T2T-ViT-14 | 310 | 81.5 | 20.99 | 24.15 | / |\n| Swin-Tiny | 300 | 81.3 | 21.99 | 25.15 | / |\n| PoolFormer-s12 | 300 | 77.2 | 14.52 | 19.44 | 59.0 |\n| PoolFormer-s24 | 300 | 80.3 | 28.22 | 33.10 | 126.7 |\n| PoolFormer-s36 | 300 | 81.4 | 41.21 | 46.03 | 192.6 |\n| Mobile-Former-508m | 450 | 79.3 | 14.58 | 17.74 | 22.2 |\n| MobileViT-XS | 300 | 74.8 | 11.65 | 14.81 | 26.5 |\n\n>**Table F. Comparison results on ImageNet-1K. The latency (ms) is measured on Google Pixel 6 with NNAPI (Android - Pixel 6).**\n| Model | Train epoch | Top-1 | Android - Pixel 6 (ms) |\n|:---:|:---:|:---:|:---:|\n| **EfficientViT-L1** | **300** | **79.2** | **7.89** |\n| DeiT-T | 300 | 74.5 | 65.60 |\n| MobileViT-XS | 300 | 74.8 | 82.49 |\n",
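As referenced above, the export-and-benchmark pipeline can be sketched as follows. The placeholder network, file names, and `trtexec` flags are illustrative assumptions; exact flags may vary across TensorRT versions:

```python
import subprocess
import torch
import torchvision

# Placeholder network standing in for the model under test.
model = torchvision.models.mobilenet_v2().eval()
dummy = torch.randn(64, 3, 224, 224)   # batch size 64, as in the tables

# Export to ONNX (the opset choice is an assumption; newer opsets also work).
torch.onnx.export(model, dummy, "model.onnx", opset_version=13,
                  input_names=["input"], output_names=["output"])

# Compile and time with TensorRT's bundled trtexec CLI. Its report separates
# GPU compute time from end-to-end walltime (incl. host<->device transfer),
# matching the TRT-A100-GPU vs. TRT-A100-Total columns above.
subprocess.run(["trtexec", "--onnx=model.onnx", "--iterations=1000"],
               check=True)
```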
" **Q2. Ablation on hardware utilization and the searching algorithm.**\n\nFirst of all, thanks for the insightful comments! We totally agree with you that our model improves the latency vs. accuracy trade-off through (i) better architecture design so that higher hardware utilization is achieved, and (ii) leveraging latency driven slimming to find fast models while maintaining accuracy. As suggested, we provide a detailed analysis as follows to quantify the performance gain from each of them.\n\n---\n\n**2.1. Analysis of hardware utilization.**\n\nTo understand the hardware utilization, we employ throughput in TFLOPS (Tera FLOPs per Second) as the evaluation metric, which is calculated by model computation cost (FLOPs) divided by execution time. Models with higher throughput (TFLOPS) better exploit the computation power of the hardware.\n\nTo fairly compare with baseline models under different computation complexity, we linearly scale the depth and width of EfficientViT-L1 to obtain a series of models (EfficientViT-LS-1 to EfficientViT-LS-14), with the number of parameters ranging from 1.1M to 31.3M and MACs from 0.09G to 3.9G, and benchmark the latency and utilization on iPhone 12. \n\nAs in the following table (Table G, rows are sorted based on GMACs in descending order), super-tiny models still run at about 1ms, such as EfficientViT-LS-1, EfficientViT-LS-2, and EfficientViT-LS-3, where the throughput is low and the hardware is not fully exploited. Data processing and transferring become the bottleneck. As a result, making the model super small with sacrificed accuracy is less valuable. In contrast, our 1.3GMACs model, EfficientViT-L1 lies at a sweet point, enjoying fast inference speed (1.6ms) while maintaining high accuracy. \n\nFurthermore, we can observe that EfficientViT variants outperform both CNNs and ViTs in hardware utilization across different computation complexity levels. \nFor instance, at 4-GMACs level, EfficientViT-LS-14 outperforms DeiT-S by 3.3$\\times$ higher TFLOPS and outperforms PoolFormer by 2.2$\\times$, achieving comparable throughput to ResNet50. \nIn the lightweight domain, EfficientViT-LS-4 has 2.2$\\times$ higher TFLOPS than EfficientNet-B0. \nWe demonstrate that with the proposed hardware-friendly design, EfficientViT naturally has better hardware utilization. \n\nDue to the format constraints for author response, the throughput data shall be visualized in graph view in the revision. \n\n>**Table G. 
Analysis of hardware utilization on iPhone 12.**\n| Model | Params (M) | GMACs | Latency (ms) | Throughput (TFLOPS) |\n|:---:|:---:|:---:|:---:|:---:|\n| DeiT-S | 22.5 | 4.6 | 11.8 | 0.39 |\n| ResNet50 | 25.5 | 4.1 | 3.0 | 1.37 |\n| EfficientViT-LS-14 | 31.3 | 3.9 | 3.0 | 1.30 |\n| PoolFormer-S24 | 21.0 | 3.6 | 6.2 | 0.58 |\n| EfficientViT-LS-13 | 23.5 | 3.1 | 2.41 | 1.29 |\n| EfficientViT-LS-12 | 20.8 | 2.8 | 2.31 | 1.21 |\n| EfficientViT-LS-11 | 19.7 | 2.7 | 2.17 | 1.24 |\n| EfficientViT-LS-10 | 15.9 | 2.1 | 1.88 | 1.12 |\n| EfficientViT-LS-9 | 15.8 | 2.0 | 1.85 | 1.08 |\n| EfficientViT-LS-8 | 12.1 | 1.6 | 1.65 | 0.97 |\n| **EfficientViT-L1** | **12.3** | **1.3** | **1.60** | **0.81** |\n| EfficientViT-LS-7 | 8.2 | 1.0 | 1.37 | 0.73 |\n| EfficientViT-LS-6 | 6.9 | 0.86 | 1.33 | 0.65 |\n| MobileViT-XS | 2.3 | 0.70 | 7.20 | 0.10 |\n| EfficientViT-LS-5 | 5.0 | 0.55 | 1.23 | 0.45 |\n| MobileFormer | 14.0 | 0.51 | 13.22 | 0.04 |\n| EfficientNet-B0 | 5.3 | 0.39 | 2.71 | 0.15 |\n| EfficientViT-LS-4 | 3.8 | 0.37 | 1.13 | 0.33 |\n| MobileNetV2 | 3.5 | 0.30 | 1.70 | 0.18 |\n| EfficientViT-LS-3 | 2.8 | 0.25 | 1.02 | 0.25 |\n| EfficientViT-LS-2 | 2.0 | 0.17 | 0.95 | 0.18 |\n| EfficientViT-LS-1 | 1.1 | 0.09 | 0.85 | 0.11 |\n",
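As referenced above, the throughput column of Table G follows from simple unit bookkeeping: GMACs divided by milliseconds directly yields tera-operations per second ($10^9 / 10^{-3} = 10^{12}$). A two-line check against the table:

```python
def throughput_tflops(gmacs, latency_ms):
    # (gmacs * 1e9 MACs) / (latency_ms * 1e-3 s) = gmacs / latency_ms T-ops/s,
    # following the table's convention of treating GMACs as the FLOP count.
    return gmacs / latency_ms

print(round(throughput_tflops(3.9, 3.0), 2))   # EfficientViT-LS-14 -> 1.3
print(round(throughput_tflops(1.3, 1.6), 2))   # EfficientViT-L1    -> 0.81
```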
" **Q2. Ablation on hardware utilization and the searching algorithm.**\n\n**2.2. Analysis of latency driven slimming.**\n\nBesides the hardware-efficient architecture design, it is still crucial to find appropriate depth and width configurations for the model to achieve satisfactory performance. To understand the benefits of our latency driven slimming, we randomly sample networks from our search space that have the same computation, i.e.,1.3 GMACs, as our searched model EfficientViT-L1. The sampled networks are denoted as Random 1 to Random 5, which are either deeper and narrower, or shallower and wider than EfficientViT-L1. We train the sampled models on ImageNet-1K with the same training recipe as EfficientViT-L1. The comparison between these models is shown in the following table (Table H). As can be seen, our searched EfficientViT-L1 has better latency or higher top-1 accuracy on ImageNet-1K than the randomly sampled networks, proving the advantages of our proposed latency driven slimming. \n\n>**Table H. Analysis of latency driven slimming. All random networks are trained with the same training strategy as the EfficientViT-L1 on ImageNet-1K. The latency is obtained using iPhone 12 with CoreMLTools.**\n| Model | GMACs | Latency (ms) | Top-1 (%) |\n|:---:|:---:|:---:|:---:|\n| **EfficientViT-L1** | **1.3** | **1.6** | **79.2** |\n| Random 1 | 1.3 | 1.6 | 77.8 |\n| Random 2 | 1.3 | 1.7 | 78.3 |\n| Random 3 | 1.3 | 1.5 | 74.7 |\n| Random 4 | 1.3 | 1.6 | 73.3 |\n| Random 5 | 1.3 | 1.5 | 76.7 |\n\n---\n\n**References:**\n\n[a] https://www.nvidia.com/en-us/data-center/a100\n\n[b] https://onnx.ai/\n\n[c] https://developer.nvidia.com/tensorrt\n\n[d] https://catalog.ngc.nvidia.com/orgs/nvidia/containers/tensorrt\n\n[e] https://developer.android.com/ndk/guides/neuralnetworks\n",
" **We thank the reviewer for the positive feedback and thoughtful comments. We appreciate the reviewer's acknowledgment that the paper proposes efficient methods, achieves impressive results on ImageNet with low latency, gives a strong SOTA baseline for vision models on iPhone, and is well-written with clear ablation analysis. We thank the reviewer for mentioning MorphNet [Gordon, Ariel, et al., 2018], which is a relevant and insightful work. We will cite and discuss it in the revised paper. In the following, we validate dimension-consistent design on other platforms (Q1) and provide more results on different hardware and compilers (Q2).**\n\n---\n\n**Q1. Validate dimension-consistent design on other libraries/hardware (Nvidia GPU).**\n\nThanks for the suggestion. We perform the latency analysis on the Nvidia A100 GPU [a] to show that the proposed dimension-consistent (D-C) design is beneficial besides the iPhone. We save the Pytorch models with batch size as 64 into the onnx format [b] and use TensorRT [c] to compile and benchmark the latency. We report two latency results from TensorRT averaged over 1,000 runs in the following table (Table A). One is the computing time on GPU (TRT-A100-GPU), and the other is the total walltime that includes the time for data transfer (TRT-A100-Total). We use the latest software version released by Nvidia for the experiments [d].\n\nFor the non-D-C design, we revert the proposed Meta3D block into 4D implementation, where linear projections and MLPs are all implemented with CONV1x1-BN instead of 3D-Linear layers, and reshaping operations become necessary in order to perform multi-head self-attention. With this configuration, attention blocks can be arbitrarily placed along with Meta4D blocks without following dimension-consistent design, while frequent reshaping is introduced. \nWe conduct the comparison on the following two models, both with a D-C version and a non-D-C one with the exact same computation complexity. \n- EfficientViT-L7, which has 8 attention blocks. \n- DummyNet, a handcrafted dummy model with a total of 16 attention blocks. \n\nAs can be seen from Table A, our dimension-consistent design archives a faster inference speed than the non- dimension-consistent design for both EfficientViT-L7 and the DummyNet.\n\n>**Table A. Analysis of dimension-consistent (D-C) design vs. non-D-C placement of attention blocks. The latency (ms) is measured on the Nvidia A100 GPU with TensorRT (TRT-A100-GPU and TRT-A100-Total).**\n| Model | D-C | TRT-A100-GPU (ms) | TRT-A100-Total (ms) |\n|:---:|:---:|:---:|:---:|\n| EfficientViT-L7 | Y | 30.67 | 33.83 |\n| EfficientViT-L7 | N | 34.73 | 37.89 |\n| DummyNet | Y | 7.07 | 10.22 |\n| DummyNet | N | 11.93 | 15.09 |\n\nReferences:\n\n[a] https://www.nvidia.com/en-us/data-center/a100\n\n[b] https://onnx.ai/\n\n[c] https://developer.nvidia.com/tensorrt\n\n[d] https://catalog.ngc.nvidia.com/orgs/nvidia/containers/tensorrt ",
" **Q2. Applicability of the models on more hardware and compilers.**\n\nIn the main paper, we conduct the latency analysis on iPhone 12 NPU with all available computing resources and deploy models with CoreMLTools. Here we show more results on different hardware and compilers. The reported latency is averaged at over 1,000 runs.\n- Nvidia A100 GPU with TensorRT. We run the latency analysis on the Nvidia A100 GPU [a] with batch size 64. The Pytorch models are saved into the ONNX format [b] and compiled with TensorRT [c]. We report two latency results from TensorRT in the following table (Table B). One is the computing time on GPU (TRT-A100-GPU), and the other is the total walltime that includes the time for data transfer (TRT-A100-Total). We use the latest Nvidia software environment for the experiments [d]. \n- iPhone CPU. We benchmark the latency for models by only using the CPU in the iPhone 12. The models are deployed by CoreMLTools. \n- Google Pixel 6 with NNAPI. As suggested by the reviewer, we also report the model latency on android devices. We utilize the Google Pixel 6 with NNAPI [e] for model compiling. Please note that NNAPI does not well support the GeLU, so we replace the GeLU with HardSwish in all models that include GeLU for a fair comparison. Models are converted into TensorFlow Lite format and deployed using NNAPI. Due to the compatibility issue of NNAPI, many converted models can not successfully run on Pixel 6. Therefore, we only report the latency for the models that can. We leave the support for more baseline models on Google Pixel as future work.\n\nThe following tables (Table B and Table C) report the latency analysis on the Nvidia A100 GPU with TensorRT, iPhone CPU with CoreMLTools, and Pixel 6 with NNAPI for the models trained on the ImageNet-1K classification task. We can see our model still achieves decent latency vs. accuracy trade-off improvement on different hardware and compilers. \n\nFor example, compared with the CNN models, EfficientViT-L1 runs faster (38% faster on Nvidia A100 GPU Computing and 21% faster on iPhone CPU) than EfficientNet-B0 while achieving 2.1% higher top-1 accuracy. For the models with high performance (>83% top-1), EfficientFormer-L7 runs much faster (4.6$\\times$ faster on Nvidia A100 GPU Computing and 3.8$\\times$ faster on iPhone CPU) than EfficientNet-B5. \n\nCompared to ViTs and their variants, EfficientViT-L1 has 4.4% higher top-1 accuracy than MobileViT-XS and runs much faster across different hardware and compilers (1.9$\\times$ faster on Nvidia A100 GPU Computing, 2.3$\\times$ faster on iPhone CPU, and 10.4$\\times$ faster on Pixel 6), and has 4.7% higher accuracy than DeiT-T while being 8.3$\\times$ faster on Pixel 6. Also, EfficientViT-L3 achieves 1% higher top-1 accuracy than PoolFormer-S36, while being 3$\\times$ faster on Nvidia A100 GPU and 2.8$\\times$ faster on iPhone CPU. The results on different hardware and compilers demonstrate the advantageous performance of our models.\n\n>**Table B. Comparison results on ImgeNet-1K. The latency (ms) is measured on the Nvidia A100 GPU with TensorRT (TRT-A100-GPU and TRT-A100-Total) and iPhone 12 CPU with CoreMLTools. 
‘/’ denotes that the model is not well supported by the hardware and compiler.**\n| Model | Train epoch | Top-1 | TRT-A100-GPU(ms) | TRT-A100-Total (ms) | iPhone CPU (ms) |\n|:---:|:---:|:---:|:---:|:---:|:---:|\n| **EfficientViT-L1** | **300** | **79.2** | **6.17** | **9.33** | **11.5** |\n| **EfficientViT-L1** | **450** | **79.9** | **6.17** | **9.33** | **11.5** |\n| **EfficientViT-L3** | **300** | **82.4** | **13.94** | **17.10** | **28.2** |\n| **EfficientViT-L7** | **300** | **83.3** | **30.67** | **33.83** | **67.7** |\n| MobileNetV2 | 300 | 71.9 | 4.97 | 8.13 | 8.0 |\n| MobileNetV2 x 1.4 | 300 | 74.7 | 7.32 | 10.47 | 10.7 |\n| EfficientNet-B0 | 350 | 77.1 | 9.99 | 13.15 | 14.5 |\n| EfficientNet-B3 | 350 | 81.6 | 35.03 | 40.67 | 52.6 |\n| EfficientNet-B5 | 350 | 83.6 | 141.00 | 153.97 | 258.8 |\n| ResNet50 | 300 | 78.5 | 9.02 | 12.17 | 29.4 |\n| ResMLP-S24 | 300 | 79.4 | 17.35 | 20.51 | 40.2 |\n| DeiT-T | 300 | 74.5 | 7.08 | 10.24 | 16.7 |\n| DeiT-Small | 300 | 81.2 | 15.45 | 18.60 | 41.0 |\n| PVT-small | 300 | 79.8 | 23.75 | 26.91 | 89.5 |\n| T2T-ViT-14 | 310 | 81.5 | 20.99 | 24.15 | / |\n| Swin-Tiny | 300 | 81.3 | 21.99 | 25.15 | / |\n| PoolFormer-s12 | 300 | 77.2 | 14.52 | 19.44 | 59.0 |\n| PoolFormer-s24 | 300 | 80.3 | 28.22 | 33.10 | 126.7 |\n| PoolFormer-s36 | 300 | 81.4 | 41.21 | 46.03 | 192.6 |\n| Mobile-Former-508m | 450 | 79.3 | 14.58 | 17.74 | 22.2 |\n| MobileViT-XS | 300 | 74.8 | 11.65 | 14.81 | 26.5 |\n\n>**Table C. Comparison results on ImageNet-1K. The latency (ms) is measured on Google Pixel 6 with NNAPI (Android - Pixel 6).**\n| Model | Train epoch | Top-1 | Android - Pixel 6 (ms) |\n|:---:|:---:|:---:|:---:|\n| **EfficientViT-L1** | **300** | **79.2** | **7.89** |\n| DeiT-T | 300 | 74.5 | 65.60 |\n| MobileViT-XS | 300 | 74.8 | 82.49 |\n\nReferences:\n\n[e] https://developer.android.com/ndk/guides/neuralnetworks\n",
" **We thank the reviewer for the valuable feedback. We appreciate that the reviewer acknowledges our paper is easy to follow, interesting, and useful for the designation of architectures. We address the concerns in the following. We hope our response can further demonstrate the strengths of our method.**\n\n---\n\n**Q1. Many papers focus on the combination of CNNs and Transformers.**\n\nWe thank the reviewer for the comment. If we understand correctly, the concern is that our work combines CNNs and Transformers, which is a strategy that is studied by some papers. In the following, we would like to kindly clarify our method and try to alleviate the concern.\n\nFirst, we would like to kindly mention that our work targets the efficient deployment of pure transformer models on mobile devices, instead of incorporating MobileNet blocks or depth-wise convolutions to reduce computation costs. Consequently, EfficientViT is built with token mixers (either global attention or local pooling) and MLP blocks, which is a standard transformer architecture. Based on our experimental results, without the integration of lightweight MobileNet blocks, EfficientViT still outperforms hybrid design in terms of speed-accuracy trade-off, which we humbly think is the strength of our work. Detailed discussion and comparisons can be found in Section 2 and 5.1. \n\nSecond, we would like to kindly explain that our network architecture design is still based on the transformer architecture, and the usage of CNN-like parts in the 4D partition of EffiicientViT is to align the feature dimension for efficient inference. For example, our CONV stem reduces the patch size of classic ViTs, which is better supported by edge devices. Similarly, we implement linear projections and MLP blocks through CONV1x1 layers, such that the dimension is ensured to be consistent in the 4D partition. Different from existing works, we investigate how the data dimension (4D CONV or 3D Linear) in transformers affects hardware efficiency, and propose a dimension consistent design that enables ultra-fast inference on edge devices, as demonstrated in Figure 2 and Section 3, which is one of the contributions of this work. \n\n---\n\n**Q2. The proposed MB blocks have similar architectures to some existing papers.**\n\nIn fact, our work targets the acceleration of standard vision transformers for efficient mobile deployment. Therefore, we design the MetaBlocks using the commonly adopted token mixer and regular MLP design. We do not aim to propose new complicated operations, such as shifted-window attention in Swin, to boost model accuracy. Instead, we develop the most hardware-friendly design strategies for ViTs for fast inference, including hardware-friendly operators (Section 3), dimension-consistent design (Section 4.1), and latency-driven architecture search (Section 4.2), while maintaining high performance.\n\nWe humbly think that following standard transformer architecture is not a weakness of our work. On the contrary, boosting the standard architecture to achieve significantly better speed and accuracy is an advantage and novelty of our work.\n\n---\n\n**Q3. Comparisons with Mobile-Former and CSwin.**\n\nWe thank the reviewer for suggesting the comparison with recent arts, Mobile-Former and CSwin. 
The comparison with the two works is shown in the following table (Table D), demonstrating the better latency and performance of our models over Mobile-Former and CSwin.\n\nMobile-Former released the official models recently (after the submission of this draft). We perform the speed comparison on iPhone 12 and the Nvidia A100 GPU, and will add these results in the revision. Under the same training resources, i.e., 450 epochs, our EfficientViT-L1 achieves 0.6% higher top-1 accuracy on ImageNet-1K than Mobile-Former-508M, while being 8.3$\times$ faster on iPhone 12 and 2.4$\times$ faster on the A100 GPU. CSwin adopts a complicated token mixer, namely the cross-shaped window. Such an operation is not supported by most compilers on mobile devices (as discussed on Line 122 in the main paper). Efficiently implementing the cross-shaped window in a mobile compiler is beyond the scope of this paper, so we provide a comparison on the Nvidia A100 GPU. We linearly scale up EfficientViT-L3 to obtain EfficientViT-L3-LS, which matches the computation cost of CSwin-T, achieving slightly higher accuracy and 1.4$\times$ faster speed. \nThanks for the suggestions; we will properly discuss Mobile-Former and CSwin in the revised paper.\n\n\n\n>**Table D. Comparison results on ImageNet-1K with Mobile-Former-508M and CSwin, deployed on iPhone 12 with CoreMLTools and Nvidia A100 GPU with TensorRT.**\n| Model | Train epoch | Top-1 | iPhone 12 (ms) | TRT-A100-GPU (ms) |\n|:---:|:---:|:---:|:---:|:---:|\n| **EfficientViT-L1** | **450** | **79.9** | **1.6** | **6.17** |\n| Mobile-Former-508M | 450 | 79.3 | 13.2 | 14.58 |\n| **EfficientViT-L3-LS** | **300** | **82.8** | **3.4** | **20.55** |\n| CSwin-T | 300 | 82.7 | Not supported | 28.70 |\n",
" This paper proposes a combination of several implementation details for vision transformers that can be evaluated efficiently (with low latency) on iphones. These details include: \n- Clever management of the shape of the tensors to avoid expensive reshape operations.\n- Using BN instead of LN to save latency by folding in BN parameters into the final model.\n- Using GELU nonlinearity (eg. instead of swish)\nand most importantly a pruning methodology to prune the least essential blocks by training a categorical gating mechanism via gradient descent. \nThe resulting network vastly outperforms previous solutions that have similar latency on both ImageNet classification and semantic segmentation. Originality: Weak-Medium.\nThe paper presents a combination of methods, most of which is well-studied and understood in different contexts:\n- Shape management and alignment has been one of the most optimized aspect of highly tuned numerical linear algebra software for many decades. For example, this has been the focus of optimizing BLAS libraries over many decades.\n- It is well-known that BN has a slight edge over other normalization methods when it comes to latency of the final model, due to the possibility of folding in the normalization parameters.\n- It is surprising that swish is so inefficient on CoreML, but it seems like a special artifact of the current state of the library.\n- While the pruning methodology seems very efficient, it resembles very much to MorphNet [Gordon, Ariel, et al. \"Morphnet: Fast & simple resource-constrained structure learning of deep networks.\" CVPR. 2018], while failing to cite that prior work.\n\nQuality: Medium\nWhile it is impressive that 79% top-1 accuracy on ImageNet is possible within 1.2ms on an iphone, the papers sole focus on this particular library and hardware raises the question of the general applicability of the methods. Also several of the methods just circumvent the special deficiencies of the library (esp swish vs GELU accounts for a large parts of improvements over LeViT 256, which might become competitive with this single change. \n\nClarity: High\nThe paper is well-written with clear ablation analyses and explaining the motivation of all the applied techniques. The methods are evaluated on reasonable benchmarks on both classification and segmentation tasks.\n\nSignificance: Medium\nThis work gives a strong SoTA baseline for vision models on current IPhone hardware. However, it is unclear how/whether these methods generalize to other types of hardware/libraries. It seems unlikely that similar gains could be achieved on android systems, given the difference and maturity of the employed machine learning infrastructure. - Are the dimension-consistent layers relevant in other libraries/hardware, like pytorch, TF or jax on nvidia GPUs? (As the model was trained on pytorch/GPU), this should not be hard to answer. Obviously, the paper focuses on a very specific proprietary hardware/software stack, and therefore the applicability of the methods is somewhat limited. The paper does not reference very similar pruning methods presented in 2018 MorphNet paper.",
" In this paper, the authors propose a new ViT-based architecture named EfficientViT. In detail, this paper tries to build hybrid designs for MobileNet block and ViT architectures. Finally, this algorithm performs a latency-driven slimming for a series of final models. Strengths:\n1. This paper is easy to read.\n2. The proposed observations in this paper are interesting and useful for the designation of architectures. \n\nWeaknesses:\n1. Many papers focus on the combination of CNNs and Transformers.\n2. The proposed MB blocks have similar architectures to some existing papers.\n3. This paper does not compare with some baseline methods[1][2], and these methods perform better than this paper.\n\n[1] Mobile-Former: Bridging MobileNet and Transformer\n[2] CSWin Transformer: A General Vision Transformer Backbone with Cross-ShapedWindows As shown in the Strengths And Weaknesses. I do not find any limitations and potential negative societal impact of this work.",
" The paper benchmarks the latency of recent efficient vision transformer designs on iPhone 12 through CoreML support and makes observations on the latency vs. accuracy trade-off of different operators. The authors then proposed a super-net search space that bakes in multiple latency-favorable designs and performs gradient-based architecture search followed by a heuristic latency-driven slimming process to obtain sub-net under given latency constraint. The authors evaluate the searched architectures as the backbone across image classification, object detection, and semantic segmentation tasks. - *Originality:* To the best of my knowledge, the paper adequately cites related works such as MetaFormer and outlines the differences. While the authors mention that the design space is partly inspired by MetaFormer, the latency vs. accuracy trade-off improvement is novel and largely driven by new insights through latency benchmark on actual hardware.\n- *Quality*: The submission is technically solid in building the EfficientViT design space based on observations from latency analysis on real hardware. The authors clearly addressed two relevant limitations that come to mind when reading the paper: 1) limited insight on how the observations and designs hold across other hardware, and 2) the latency-driven slimming procedure could invite further investigation. Testing the searched architectures across different tasks is a good practice as well.\n- *Clarity*: The submission is clearly written and well organized. The observations from latency analysis naturally build up into respective considerations for the design space.\n- *Significance:* The paper provides comprehensive empirical insights on optimizing vision transformer design space for a given hardware, and achieving MobileNetV2 level latency is meaningful for practitioners. While the latency analysis workflow is general and could serve as a good example of devising design decisions for other hardware, it is not immediately clear how well the insights and the design space will transfer. Looking at the comparison in Table 1, at a given latency, EfficientViT tends to have a higher parameter count or GMACs (i.e. EfficientViT-L1 vs. MobileNetV2, EfficientViT-L7 vs. MobileViT-XS). From reading the paper, it seems the improvement could come from two sources: 1) better hardware utilization so that a more complex model can run under the same latency constraint (latency analysis guided design space), and 2) when hardware utilization saturates, finding architecture with better compute distribution over operators for higher accuracy (NAS + latency-driven slimming). Is there a way to perform ablation and quantify the performance gain from each source? Understanding the contribution from 1) would be valuable for practitioners optimizing architecture for specific hardware, while understanding 2) will shed light on the value of the proposed design space and search algorithm.\n The authors clearly addressed some potential limitations of the work: 1) Some observations and subsequent design decisions might be hardware and software dependent; 2) The NAS procedure, specifically the latency-driven slimming procedure is less involved and could be a direction for future exploration."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
3
] | [
"wgz-vCdFQCG",
"CpBcoV_LZFZ",
"3L6_A7_V_pf",
"UqYItUwc-OI",
"MWwxIl5vHOG",
"wgz-vCdFQCG",
"wgz-vCdFQCG",
"wgz-vCdFQCG",
"wgz-vCdFQCG",
"CpBcoV_LZFZ",
"CpBcoV_LZFZ",
"MWwxIl5vHOG",
"nips_2022_NXHXoYMLIG",
"nips_2022_NXHXoYMLIG",
"nips_2022_NXHXoYMLIG"
] |
nips_2022_MIhgxhsJMtY | A Near-Optimal Primal-Dual Method for Off-Policy Learning in CMDP | As an important framework for safe Reinforcement Learning, the Constrained Markov Decision Process (CMDP) has been extensively studied in the recent literature. However, despite the rich results under various on-policy learning settings, some essential understanding of the offline CMDP problem is still lacking, in terms of both the algorithm design and the information theoretic sample complexity lower bound. In this paper, we focus on solving the CMDP problems where only offline data are available. By adopting the concept of the single-policy concentrability coefficient $C^*$, we establish an $\Omega\left(\frac{\min\left\{|\mathcal{S}||\mathcal{A}|,|\mathcal{S}|+I\right\} C^*}{(1-\gamma)^3\epsilon^2}\right)$ sample complexity lower bound for the offline CMDP problem, where $I$ stands for the number of constraints. By introducing a simple but novel deviation control mechanism, we propose a near-optimal primal-dual learning algorithm called DPDL. This algorithm provably guarantees zero constraint violation, and its sample complexity matches the above lower bound except for an $\tilde{\mathcal{O}}((1-\gamma)^{-1})$ factor. Comprehensive discussions on how to deal with the unknown constant $C^*$ and the potential asynchronous structure of the offline dataset are also included. | Accept | This paper considers offline reinforcement learning in the constrained MDP framework. It proposes an algorithm that provably obtains a near-optimal policy (under a single-policy concentrability assumption) and proves an upper bound (and a corresponding lower bound) on the resulting sample complexity.
The reviewers found the paper well-motivated and technically sound, and unanimously recommend acceptance. Please incorporate the reviewers' feedback in the final version of the paper. In order to strengthen the final paper, it would be helpful to:
- Incorporate toy experiments and empirically validate some of the paper's claims
- Include a discussion about the tightness of the upper/lower bound.
| train | [
"Scg4nLj0n6k",
"Zbk41ikwWm2",
"FvBYhX5_WN0",
"m3WjAckPtfP",
"W6d4iwZfGAk",
"fbYQjkLJrp1",
"bDJ2zuJgfj3",
"djNyoYt-S2H"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for her/his time and thoughtful feedback. We address the comments in detail as follows.\n\n$\\mathbf{Weakness1.}$ I would question the relevance of the manuscript as the assumptions needed to conclude the sample complexity are heavy. Despite one of them is necessary (Slater), it is not clear if this particular setting is worth investigating.\n\nAnswer: There are only 2 central assumptions in our work, we discuss the consequence of relaxing them as follows. \n\n(1) Slater's condition: We have established the necessity of Slater's condition for ensuring 0 constraint violation. When the Slater's condition is absent, our analysis of DPDL implies that it will output a policy with $\\mathcal{O}(\\epsilon)$ reward suboptimality gap and constraint violation.\n\n(2) Finite single-policy concentrability: On the one hand, we agree that this assumption can be a little bit strong in practice since the reference distribution may not fully cover the support of the optimal occupancy measure. However, we should notice that when this coefficient is infinity, there will definitely be no guarantee for obtaining the optimal policy. Therefore, this is an unavoidable assumption in the offline RL. If we relax this assumption, then our DPDL will output a policy that is near optimal among the policies in $\\Pi(\\psi)$ for any $\\psi>0$ selected by DPDL. (The optimal policy lies in $\\Pi(+\\infty)$.)\n\nOn the other hand, among all the assumptions that can provide optimality guarantee for offline RL, our assumption is indeed very weak. This is because a finite single-policy concentrability coefficient does not require the well covering all state-action pairs; instead, it only assumes that the trajectory of optimal policy to be covered. This is much weaker compared to the uniform coverage [35, 36] or the uniform concentrability assumption [14, 20]. \n\n\n$\\mathbf{Weakness2.}$ I also suggest the authors to highlight the technical contributions, if any.\n\nAnswer: We thank the reviewer for this suggestion. The following comments will be added under our main contribution (Line 51 - 63):\n\n\"Besides the above main contributions, our construction of the worst-case instance in the lower bound derivation can be of independent interest. In order to characterize the influence of the constraints in the lower bound, we need to carefully construct the constraints so that the different actions are properly correlated, which is the key to the hardness of the worst-case instance. We believe this is an important technical contribution that can be further extended to discussing the CMDP complexity lower bound under other settings such as on-policy learning, and so on. Moreover, our analysis to the asynchronous setting is also a technically novel contribution. We believe our techniques to handle the correlated gradient estimators with large variance can also be beneficial to other algorithms under the asynchronous setting.\" ",
" We'd like to thank the reviewer for the insightful comments. The reviewer's concerns are addressed as follows.\n\n$\\mathbf{Weakness 1}$ & $\\mathbf{Q1.}$ The assumption of an offline dataset with finite concentrability coefficient seems too strong... Could the authors discuss the potential of relaxing the assumption of a finite concentrability coefficient?\n\nAnswer: There are several aspects of the assumption of a finite concentrability coefficient in our paper.\n\n(1) A finite single-policy concentrability coefficient does not require the well covering all state-action pairs; instead, it only assumes that the trajectory of optimal policy to be covered. Therefore, our assumption is much weaker compared to the uniform coverage [35, 36] or the uniform concentrability [14, 20]. For example, if the dataset is generated by a near optimal policy (imitation learning with expert data), then it is natural to expect $C^*=\\mathcal{O}(1)$ even though a large fraction of state-action pairs are unvisited. \n\n(2) The guarantee of DPDL can be generalized to the case $\\psi<C^*$ (which covers the possibility that $C^*=\\infty$). In this case, DPDL provably outputs a policy that is comparable to the best policy in $\\Pi(\\psi)$, the class of policy whose deviation is controlled by $\\psi$. We also propose an adaptive deviation control framework that is adapted to unknown $C^*$.\n\n(3) Further relaxation of the finite concentrability is hard in general. Imagine an offline multi-arm bandit problem where one only has access to offline dataset of the arms. In this case, an infinite single-policy concentrability corresponds to the case where the offline dataset never touches the optimal arm. Therefore, no optimality can be guaranteed if the the finite concentrability assumption is relaxed. \n\n\n$\\mathbf{Weakness2.}$ The LP-based approach is very restrictive and not scalable. This further restricts the application of this work to the practical implementations.\n\nAnswer: We have to admit that the scalability of the LP-based approach is indeed a problem. Recently, there are several works on MDP that reveal the LP approach is potentially scalable: [*] considers the LP-based approach with linear features; [18] parameterizes the primal and dual variables by neural networks, and achieves performance comparable to or better than TRPO/PPO/DQN in several testing environments.\n\nOn the other hand, how to make LP-based approach scalable or how to replace it is itself an important problem. This is because, currently, many algorithms for online learning CMDP that have theoretical guarantee directly use LP in the planning step.\n\n[*] Chen, Y., Li, L., and Wang, M. Scalable bilinear $\\pi$ learning using state and action features.\n\n$\\mathbf{Weakness 3}$ & $\\mathbf{Q2.}$ There are some other works in offline CMDP that the authors may need to compare. Could authors discuss the technical challenges and contributions of this paper compared with some key references like [14, 30] and [32]?\n\nAnswer: We thank the reviewer for pointing this out, we summarize the technical challenges and contributions of this paper compared to previous works as follows:\n\n(1) The \"single-policy concentrability'' condition in (tabular) MDP is first formulated in [23]; however, it is previously unclear whether this notion can be further extended. 
In our work, we identify this concept for CMDP, and we discover an essential difference between offline MDP and CMDP: offline MDP can be solved using $\mathcal{O}(|\mathcal{S}| C^* \epsilon^{-2})$ samples, but at least $\Omega(\min(|\mathcal{S}||\mathcal{A}|,|\mathcal{S}|+I) C^* \epsilon^{-2})$ samples are needed for offline CMDP, because we have to fulfill $I$ constraints simultaneously. \n\n(2) In terms of the information theoretic lower bound for offline CMDP, compared to the hard instance of offline MDP, our construction is more intricate because we need to correlate different actions via properly designed constraints. We also establish the necessity of Slater's condition, and hence justify this commonly used assumption.\n\n\n(3) Prior to our work, the only provably efficient model-free algorithms under single-policy concentrability are variants of Q-learning [24, 33] (which do not work for CMDP), while [17, 23, 30, 31, 34] all take the model-based approach. The deviation control mechanism we develop makes the primal-dual approach work efficiently under single-policy concentrability, and it is naturally model-free. \n\n(4) We consider the asynchronous setting where the dataset is a single trajectory generated by a behavior policy. In this setting, our analysis handles the correlated gradient estimators with large variance. We believe our techniques can also benefit the analysis of other algorithms in the asynchronous setting.\n\n",
" We thank the reviewer for her/his time and thoughtful feedback. We address the comments in detail as follows.\n\n$\\mathbf{Q1.}$ A large part of the analysis in this current version relies on an assumption that we know a reference distribution, while we actually don't have an access to such a distribution. Although the paper proposed to use an estimated distribution instead, this current does not provide a principled way to construct such a distribution estimation.\n\nAnswer: First, all of our analysis are done with regard to the estimated distribution $\\hat\\mu$, we do not assume the knowledge of the reference distribution $\\mu$. Second, we do provide a way to construct the $\\hat\\mu$, please see Eq (10) of Algorithm 1. By properly setting the parameters $N_e$ and $\\varsigma$ in (10), we establish the important properties of the estimator $\\hat\\mu$ in Proposition 4.3, which is all we need for $\\hat\\mu$ in the convergence analysis. \n \n$\\mathbf{Q2.}$ About the lack of numerical experiments to demonstrate the practice effectiveness of the proposed DPDL algorithm.\n\nAnswer: The primary goal of our paper is to provide theoretical foundations and insights for offline learning CMDP, by studying the information theoretic lower bounds under single-policy concentrability and the algorithms to achieve almost tight sample complexity upper bound and zero constraint violation. First, through Theorem 4.1 and 5.1, we confirm the minimax optimal dependence on $|\\mathcal{S}|,|\\mathcal{A}|,C^*,\\epsilon$ and $I$ for CMDP problem, and the dependence on $1-\\gamma$ is also almost optimal. Second, with Theorem 5.2, we confirm that the Slater's condition is the necessary condition for achieving zero constraint violation. \n\nWe do agree that numerical experiments are important for demonstrating the practice effectiveness. However, this is not main focus of our paper and we are not able to complete the experiments during the rebuttal period. ",
" We thank the reviewer for her/his time and thoughtful feedback. We address the comments in detail as follows.\n\n$\\mathbf{Q1.}$ Notation $I$ is abused in Section 2.1.\n\nAnswer: We thank the reviewer for pointing out this issue. In order to distinguish the identity matrix and the number of constraints, in the revision, we will use $\\mathbb{I}$ to denote the identity matrix while using $I$ to denote the number of constraints.\n\n$\\mathbf{Q2.}$ Line 103: what is the sparse nature of optimal policy? LP only works for occupancy measures, not policy.\n\nAnswer: Given any state-action occupancy measure $\\nu$, the policy $\\pi$ that generates this occupancy measure equals \n$\\pi(a|s) = \\frac{\\nu(s,a)}{\\sum_{a'}\\nu(s,a')}$, see Eq. (3) of our paper. Therefore, the optimal policy $\\pi^*(a|s)>0$ only when the optimal occupancy measure $\\nu^{\\pi^*}(s,a)>0$, for any $s,a$. That is, $\\pi^*$ and $\\nu^{\\pi^*}$ share the same support. By Proposition 2.1, as long as $|\\mathcal{S}|+I\\ll |\\mathcal{S}||\\mathcal{A}|$, the number of nonzero entries of $\\pi^*$ will be far less than its dimension, indicating the sparsity of the optimal policy.\n\n$\\mathbf{Q3.}$ Can you explain how to determine $C^*$ in Assumption 2.3?\n\nAnswer: The definition of concentrability coefficient $C^*$ (Assumption 2.3) involves the optimal policy of the CMDP, and hence it cannot be known a priori in general. One exception is when the offline data distribution completely covers the whole state-action spaces: $\\mu(s,a)>0, \\forall s,a$. In this case, a pessimistic upper bound for $C^*$ is $\\frac{1}{min_{s,a}\\mu(s,a)}$, which can be very loose. \n\nIn fact, the issue introduced by an unknown $C^*$ appears in many previous works on offline MDP. Suppose that we are given an offline dataset of $N$ samples, then it is guaranteed in [17, 23, 24, 31, 33, 34] that a policy with suboptimality $\\mathcal{O}\\(\\sqrt{MC^*/N})$ can be obtained for some constant $M$. Therefore, as $C^*$ is practically unknown, the suboptimality of the output policy is also unknown for these existing results. \n\nAnd this is exactly the reason why we propose an adaptive deviation control scheme (Section 6) to avoid the knowledge of the $C^*$.\n\n$\\mathbf{Q4.}$ Does saddle points always exist for (6) or (7)? How does constraints (8) work for (7)? This constraint set looks a strong restriction.\n\nAnswer: A saddle point of (6) and (7) always exists as long as the original LP problem has an optimal solution, which is indeed the case. In (8) we further restrict the primal domain and dual domain, by considering the natural relations that a saddle point $(V^*,\\lambda^*,x^*)$ must satisfy. Therefore, the constraints (8) possibly exclude some suboptimal solutions, but it is guaranteed to contain the optimal solution. Such a reduction of domain size is common in the primal-dual approach of MDP/CMDP, for example [28]. The equivalence between (7) and (8) is implied by Lemma E.3.\n\n$\\mathbf{Q5.}$ How to compute $\\mu(s,a)/\\hat{\\mu}(s,a)$ for degenerate $\\hat{\\mu}(s,a)$ when finite concentrability is absent? An issue occurs in (9) also.\n\nAnswer: Note that $\\hat{\\mu}$ is constructed by Eq. (10) in Algorithm 1, which does not depend on a finite concentrability. Because we truncate $\\hat{\\mu}$ at some pre-determined $\\varsigma>0$, we know $\\hat{\\mu}(s,a)>\\varsigma$ for all $s,a$. Therefore, $\\mu(s,a)/\\hat{\\mu}(s,a)$ and (9) are both well-defined. 
\n\n$\\mathbf{Q6.}$ How to compute $\\mathrm{KL}(\\lambda\\|\\lambda^t)$ on $\\Lambda$?\n\nAnswer: Because $\\Lambda$ is not a subset of the probability simplex, the KL divergence on it is actually the generalized KL divergence: $\\mathrm{KL}(Y\\|Y'):= \\sum_i Y_i\\log\\frac{Y_i}{Y_i'} -\\sum_i Y_i + \\sum_i Y_i'$, see Line 168-169 under the Algorithm 1. \n\n\n\n\n",
" The paper studies the offline reinforcement learning in the framework of constrained Markov decision processes. The authors propose an offline primal-dual algorithm, prove near-optimal sample complexity under different assumptions on datasets, and propose an adaptive algorithm without prior knowledge on concentrability coefficient. ## Originality\n\n- The proposed offline primal-dual algorithm is new in offline constrained reinforcement learning. \n\n- The sample complexity matches a lower bound except for the dependence on the discount factor. \n\n## Quality & Clarity\n\n- Main results of the paper has been delivered well, except for a few concepts.\n\n- All claims are supported by proofs, although I didn't check correctness. \n\n## Significance\n\n- The proposed offline primal-dual algorithm takes either independent batch dataset or single trajectory sequence, which are two important settings in offline reinforcement learning.\n\n- The optimality of sample complexity is studied by establishing lower bound. \n\n- An adaptive implementation of proposed algorithm is useful for practice due to the absence of prior knowledge on concentrability coefficient. - Notation $I$ is abused in Section 2.1.\n\n- line 103: what is the sparse nature of optimal policy? LP only works for occupancy measures, not policy. \n\n- Can you explain how to determine $C^*$ in Assumption 2.3?\n\n- Does saddle points always exist for (6) or (7)? How does constraints (8) work for (7)? This constraint set looks a strong restriction.\n\n- How to compute $\\mu(s,a)/\\hat \\mu(s,a)$ for degenerate $\\hat \\mu(s,a)$ when finite concentrability is absent? An issue occurs in (9) also.\n\n- How to compute $KL(\\lambda \\Vert \\lambda^t)$ on $\\Lambda$?\n\n No, for example how to mitigate potential bias in offline data is not discussed. ",
" This manuscript investigates offline Constrained Markov Decision Process (CMDP), where on top of optimizing the cumulative reward r one must maintain (in a hard way) a cumulative constraint u to be non-negative. The offline setting uses only the offline data without a generative model. This manuscript proposes Deviation-controlled Primal-Dual Learning (DPDL), an algorithm that uses the saddle point formulation of MDP and a mirror descent-like update. When the concentrability coefficient of a CMDP is finite, assuming the Slater's condition, the algorithm guarantees the convergence and a proved sample complexity. An information-theoretic lower bound validates the optimality of this sample complexity up to a 1/(1-gamma) factor. This paper improves previous works on CMDP by removing the generative model. Without a generative model, it is necessary to have the Slater's condition to satisfy the constraint. With the Slater's condition, the authors show a close-to-matching pair of upper and lower bounds of sample complexity (need concentrability assumption). The argument seems natural and the improvement is clear.\n\nI would question the relevance of the manuscript as the assumptions needed to conclude the sample complexity are heavy. Despite one of them is necessary (Slater), it is not clear if this particular setting is worth investigating.\n\nI also suggest the authors to highlight the technical contributions, if any. N/A N/A",
" This paper proposes a model-free off-policy reinforcement learning algorithm DPDL that aims to solve the constrained MDP (CMDP) problems given an offline static dataset, and also provides an information theoretic lower bound on the sample complexity in the offline CMDP setting. DPDL adopts the primal-dual approach and addresses the distribution shift challenge in the offline setting via an adaptive deviation control mechanism. \n An innovative aspect of this work is that it considers a combination of the offline reinforcement learning (RL) and safe RL formulated as a constrained MDP. Currently, there are limited existing work on this specific combination, while recent exploration on offline RL and safe RL separately are extensive. The paper could be further improved if the authors would elaborate on the factors that motivate the authors to combine the two settings from both theoretic and practical perspectives. \n\nIn addition, while this paper analyzed the algorithm theoretically, it remains unclear whether it would be effective in solving real-world problems due to a number of issues. Towards this end, experiment results should be conducted to address this issue. \n\n\n\n A large part of the analysis in this current version replies on an assumption that we know a reference distribution, while we actually don't have an access to such a distribution. Although the paper proposed to use an estimated distribution instead, this current does not provide a principled way to construct such a distribution estimation. \n\n As mentioned in the strengths and weaknesses section, some of the concerns include a lack of experiments that can demonstrate the practice effectiveness of the proposed DPDL algorithm as well as principled ways to derive a practical algorithms based on the theoretical analysis. ",
" This paper studies the offline CMDP through the linear programming (LP) formulation and the primal-dual based method. An upper bound is proved under the notion of the concentrability coefficient in offline RL literature. A lower bound is established to demonstrate the near-optimality of the proposed upper bound. Finally, an adaptive algorithm is proposed to address the situation when the optimal concentrability coefficient is unknown. Strengths:\n\n(1) The paper is well-written and the presentation is very clear.\n\n(2) Offline/off-policy CMDP is an important problem and there are not as many offline/off-policy CMDP in the literature as for online CMDP.\n\n(3) The theoretical results are solid. Both the upper bound and lower bound are provided, and the dependencies on the key parameters are very clear. In addition, an adaptive algorithm is proposed to address the situation when the optimal concentrability coefficient is unknown, without the optimality sacrifice.\n\nWeaknesses:\n\n(1) The assumption of having an offline dataset with a finite concentrability coefficient seems to be too strong in practice. Most of time, we would only have an offline dataset that does not well covers all state-action pairs.\n\n(2) The LP-based approach is very restrictive and not scalable. This further restricts the application of this work to the practical implementations. \n\n(3) There are some other published works in offline CMDP that the author may need to compare. For example:\n\nWu, Runzhe, et al. \"Offline Constrained Multi-Objective Reinforcement Learning via Pessimistic Dual Value Iteration.\" Advances in Neural Information Processing Systems 34 (2021): 25439-25451.\n\nXu, Haoran, Xianyuan Zhan, and Xiangyu Zhu. \"Constraints penalized q-learning for safe offline reinforcement learning.\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 36. No. 8. 2022.\n\n (1) Could the authors discuss the potential of relaxing the assumption of a finite concentrability coefficient?\n\n(2) Could authors discuss the technical challenges and contributions of this paper compared with some key references like [14, 30] and [23]?\n N/A"
] | [
-1,
-1,
-1,
-1,
6,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
2,
3,
4,
3
] | [
"fbYQjkLJrp1",
"djNyoYt-S2H",
"bDJ2zuJgfj3",
"W6d4iwZfGAk",
"nips_2022_MIhgxhsJMtY",
"nips_2022_MIhgxhsJMtY",
"nips_2022_MIhgxhsJMtY",
"nips_2022_MIhgxhsJMtY"
] |
nips_2022_4qR780g2Mg | Distributional Reward Estimation for Effective Multi-agent Deep Reinforcement Learning | Multi-agent reinforcement learning has drawn increasing attention in practice, e.g., robotics and automatic driving, as it can explore optimal policies using samples generated by interacting with the environment. However, high reward uncertainty still remains a problem when we want to train a satisfactory model, because obtaining high-quality reward feedback is usually expensive and even infeasible. To handle this issue, previous methods mainly focus on passive reward correction. At the same time, recent active reward estimation methods have proven to be a recipe for reducing the effect of reward uncertainty. In this paper, we propose a novel Distributional Reward Estimation framework for effective Multi-Agent Reinforcement Learning (DRE-MARL). Our main idea is to design the multi-action-branch reward estimation and policy-weighted reward aggregation for stabilized training. Specifically, we design the multi-action-branch reward estimation to model reward distributions on all action branches. Then we utilize reward aggregation to obtain stable updating signals during training. Our intuition is that consideration of all possible consequences of actions could be useful for learning policies. The superiority of the DRE-MARL is demonstrated using benchmark multi-agent scenarios, compared with the SOTA baselines in terms of both effectiveness and robustness. | Accept | The reviewers carefully analyzed this work and agreed that the topics investigated in this paper are important and relevant to the field. They believe that the NeurIPS community could benefit from the ideas and techniques presented in this work. They argued, e.g., that the paper is novel and interesting, technically sound, clearly written, and that the method is clearly motivated and introduced. One reviewer expressed a few technical concerns, to which the authors responded appropriately. The authors have also, post-submission, further compared their model and other baselines from different perspectives. One reviewer pointed out that a limitation of the paper is the lack of discussion and experimental comparison with other work related to Distributional MARL. The authors responded to this, but the reviewer requested further details and a more thorough discussion; the authors then expanded their initial response via two detailed rebuttal messages, which were considered to be satisfactory. Finally, another reviewer (who also expressed positive views on this work) mentioned that the authors could have provided more details on the limitations of their method. Overall, all reviewers were positively impressed with the quality of this work and look forward to an updated version of the paper that addresses the suggestions mentioned in their reviews. | train | [
"7JD0P7J6V0r",
"am62vuVWnT0",
"rS44RXpN_cq",
"4p5iuiMAd3c",
"X6OJ9jg4rZI",
"3P-F1jVXfA",
"C2hK-kKk2h",
"6kxkjUYD9te",
"yim4V_Rvaq9",
"nHT7i7-vkAf",
"T3u684HmJkG",
"HUcbgXFTXv",
"Nc1HN_DDxwB",
"PQMIKacOzAJ",
"F_Ct61PngUK",
"nGRdOL16rc",
"fnoll44TjEG",
"9qwY83DsjpH",
"VIxb9nPV9Uj",
"VsQQMAAwNOm"
] | [
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer JEsw,\n\nWe appreciate the reviewer's positive feedback and worthy suggestions for our paper. Furthermore, the recommendations of ablation studies and the clarification of our framework help us improve the quality of our paper further. As the end of the discussion is approaching, we are wondering if there are any additional potential clarifications or suggestions that you think would help us improve this manuscript. \n\nFollowing your recommendations, we provided ablation studies (as shown in Table 2) about the reward estimation and the regularization term, which were also added to the paper. Besides, we also explained the relationship between reward estimation and reward aggregation in detail to make it clear to you and other readers.\n\nWe thank the reviewer's effort in the review of our paper. We hope all the concerns have been addressed. Please let us know if there are more questions. We are happy to address any further questions or concerns.\n\nThank you again for your careful review and helpful comments!\n\nKind regards,\n\nPaper2649 Authors",
" Dear Reviewer TJjg,\n\nWe appreciate the reviewer's constructive suggestions for our paper. Your comments help us increase the readability and quality of the manuscript, and we also improve our paper according to other reviewers' comments. As the end of the discussion is approaching, we are wondering if there are any additional potential clarifications or suggestions that you think would help us improve this manuscript. \n\nFollowing your suggestions, we provided the clarifications and explanations for the assumption of action branches, the motivation of aggregation weights, and the comparison of computational costs. Additionally, we compared our model and other baselines from the perspectives of parameters, physical training time, and memory consumption. The comparison is shown in Table 1.\n\nWe thank the reviewer's effort in the review of our paper. We hope all the concerns have been addressed. Please let us know if there are more questions. We are happy to address any further questions or concerns.\n\nThank you again for your careful review and helpful comments!\n\nKind regards,\n\nPaper2649 Authors",
" We appreciate the reviewer for the valuable suggestions!\nAlthough we can not discuss more content about the limitations due to space limitations, we will put the discussion about the limitations and future work in the appendix.\n\nThank you again for your careful review and helpful comments!",
" The authors have generally addressed my questions and feedback. I would have preferred a more detailed discussion on limitations and future work, but understand there likely isn't enough space for it given the current content of the paper.",
" ### III. Detailed comparison with each paper\n\n- **Comparison with [1]**\n\n[1] proposes the Mean-Shape Decomposition method and quantile mixture in value decomposition, bridging the gap between distributional RL and value function factorization methods and enhancing the performance. Compared to this work, our focus is different in the following aspects:\n\n**(1)** From the centralized training perspective: [1] adopts the value factorization method, and the centralized critic is the Q network. DRE-MARL adopts the actor-critic method, and the centralized critic is a V network. Besides, [1] needs to satisfy the assumption of distributional individual-global-max due to the property of value function factorization, while we do not need any assumptions.\n\n**(2)** From the modeled distribution perspective: [1] regards the long return Q value as a random variable Z and models the distribution of Z with the implicit quantile network (IQN), which uses state, action, and quantile sample $\\omega$ as input and outputs the corresponding quantile value $Z_{\\omega}$. Our paper considers the per-timestep reward as a random variable and proposes multi-action-branch reward estimation to model the distributions.\n\n**(3)** From the mixture method perspective: [1] adopts the quantile mixture method (details can be found in Section 2.7 and Section 3.3 in [1]), which obtains the total return by weighted summation $Z=\\sum_{k\\in\\mathbb{K}}\\beta_{k}Z_{k}$ with respect to all agents. We adopt policy-weighted reward aggregation for each agent, which obtains the aggregated rewards with respect to all action branches.\n\n- **Comparison with [2]**\n\nBased on [1], [2] extends the implementation of DFAC variants. Specifically, 1) [2] proposes to fit the individual utilities with the thought of C51. 2) [2] proposes to fit the individual utilities with the quantile function and considers combining the shape of these individual utilities with the quantile mixture. \nOur work is different from this work because:\n\n**(1)** [2] mainly follows [1]. The main difference between [2] and [1] is the implementation: [1] implements the DFAC on VDN and QMIX, obtaining two implementations, DDN and DMIX, while [2] implements two DFAC variants based on QMIX by adopting C51 and IQN to approximate the individual utilities. So it is obvious that the primary differences between DRE-MARL and [2] are the same as those between DRE-MARL and [1]. \n\n**(2)** Another difference is that [2] adopts C51 to approximate the individual utilities and 1D convolution to combine all agents’ utilities. While we estimate reward distributions on the multi-action branches and use reward aggregation to combine all predicted reward distributions.\n\n- **Comparison with [3]**\n\n[3] proposes to parameterize the return distributions by a mixture of Dirac Delta functions which are used to calculate Conditional Value at Risk (CVaR). Besides, [3] further introduces dynamic risk level prediction for calculating CVaR and makes risk-sensitive decisions with the calculated CVaRs. \nThe differences between [3] and our method are as follows.\n\n**(1)** [3] approximates the long-term return distributions by a mixture of parameterized Dirac Delta functions. We directly model the per-timestep reward distributions on all action branches with the Gaussian reward model.\n\n**(2)** [3] adopts the mix network to consider all agents’ CVaRs while we adopt reward aggregation to consider all action branches’ rewards. 
\n\n**(3)** [3] mainly considers the risk of actions by computing CVaR based on the value distributions. The goal of [3] is to learn a risk-sensitive policy, but we aim to maximize the cumulative reward by modeling the reward uncertainty.\n\n- **Different ways of parameterization.**\n\n[1] adopts the implicit quantile network to model the return distributions (see Section 2.6 in [1]), while we adopt the reward network to model the per-timestep reward distributions. Besides, [2] adopts C51 to model the probability mass functions of individual utilities and utilizes IQN to approximate the individual utilities, which is also different from us. [3] parameterizes the return distributions with a mixture of Dirac Delta functions, which is different from [1], [2], and our method.\n\n\n\nReferences:\n\n[1] Sun W F, Lee C K, Lee C Y. DFAC framework: Factorizing the value function via quantile mixture for multi-agent distributional q-learning[C]//ICML. PMLR, 2021: 9945-9954.\n\n[2] Sun W F, Lee C K, Lee C Y. A Distributional Perspective on Value Function Factorization Methods for Multi-Agent Reinforcement Learning[C]//AAMAS. 2021: 1671-1673.\n\n[3] Qiu W, Wang X, Yu R, et al. RMIX: Learning risk-sensitive policies for cooperative reinforcement learning agents[J]. NeurIPS, 2021, 34: 23049-23062.\n\n[4] Rashid T, Samvelyan M, Schroeder C, et al. QMIX: Monotonic value function factorisation for deep multi-agent reinforcement learning[C]//ICML. PMLR, 2018: 4295-4304.",
" We thank Reviewer c9dW for the prompt reply and questions. Below we further analyze the differences between our method and the mentioned distributional MARL methods [1,2,3] and the possible reasons that the performance of our method is better than DFAC variants in our preliminary experiments.\n\nIn part I, we first analyze the essential differences between our method and the currently available distributional RL in depth. Next, we explain the reasons behind the experimental phenomenon in Part II. Finally, we put a detailed comparison to [1], [2], and [3] in Part III.\n\n\n### I. Major differences between our method and distributional MARL methods\n**1. Q distribution v.s. reward distribution.**\nThe mentioned distributional MARL approaches [1,2,3] follow the idea of distributional RL in single-agent problems and learn the distribution of Q values. However, different from a single-agent environment, a MARL environment can be highly stochastic and non-stationary, where uncertainty may exist in the per-timestep rewards of every agent. In this case, although the distribution of Q values can capture the uncertainty of the total future return, it may **(1)** lose some information of per-step reward uncertainty since all future stepwise rewards are blended, and **(2)** require more samples to be accurately estimated due to the high uncertainty. In contrast, our proposed DRE-MARL models the distribution of the per-step reward, which can better capture the per-step uncertainty and is easier to estimate from samples. Therefore, the distributional MARL methods [1,2,3] are more suitable for environments with relatively sparse reward where long-term consideration is more crucial; our DRE-MARL is more suitable for environments with denser reward and higher per-step stochasticity.\n\n\n**2. Quantile mixture v.s. reward aggregation.**\n[1] adopts the quantile mixture method to consider all agents' return distributions (See Theorem 3 in [1]). In addition, [2] not only adopts the quantile mixture method but also 1D convolution to consider all agents' return distributions, which realize two variants of DFAC. In contrast, we adopt reward aggregation to consider all reward distributions on all action branches at every step. [1] and [2] use the return distributions to make the decision where inaccurate estimation may directly affect the quality of action sampling. In our method, the aggregated reward is not used to make the decision directly because we have a specific actor network, but is used to update the critic. Then we use the updating signals produced by the critic to perform policy training, where the influence of estimation error is alleviated by policy-weighted reward aggregation. [3] considers all agents' return distributions with dynamic risk-level-masked CVaRs and the mix network, which is different from [1], [2], and our method.\n\n\n\n\n### II. Empirical comparison and analysis of performance \n\n**(1)** The results in Table 3 show that *our DRE-MARL is much better than DFAC-diql(128), DFAC-diql(256), and DFAC-dmix(128) [1] in the MPE environment CN-3*. The potential results are: \n- MPE environments have relatively dense rewards, and the per-timestep reward tends to be greatly influenced by the other agents' behaviors (e.g., collision). In this case, as we explained in I, it would be hard to model the long-term Q distribution, and the Q distribution may lose some information of uncertainty about the per-timestep reward. 
\n- QMIX is based on value decomposition, which has been shown effective in SC II in the literature [4]. However, in the MPE environments we consider, value decomposition does not perform very well. The potential cause is that the reward of SC II is more structured than that of MPE. For example, the total damage to enemies is the summation of each individual agent's damage.\n\n**(2)** *We are running more experiments on MPE baselines* in other scenarios with tuned hyperparameters. Our method still outperforms these baselines, and we will update the table once the results are out.\n\n**(3)** We are not sure what the reviewer was referring to by \"more experiments\". We would appreciate it if the reviewer could provide some examples. We are happy to conduct new experiments as suggested.",
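To make the reward-distribution side of this contrast tangible, here is a minimal sketch of a per-timestep Gaussian reward model over K action branches (the network shape and the NLL objective below are our assumptions for illustration; the paper's exact reward network is not reproduced here):

```python
import torch
import torch.nn as nn

class BranchRewardModel(nn.Module):
    """Maps a state to (mean, log_std) of the reward on each of K action branches."""
    def __init__(self, state_dim: int, num_branches: int, hidden: int = 64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.mean = nn.Linear(hidden, num_branches)
        self.log_std = nn.Linear(hidden, num_branches)

    def forward(self, state):
        h = self.body(state)
        return self.mean(h), self.log_std(h)

def gaussian_nll(mean, log_std, action, reward):
    # Only the branch of the action actually taken has a reward label in the data;
    # `action` is a LongTensor of taken action indices, `reward` the observed rewards.
    m = mean.gather(1, action.unsqueeze(1)).squeeze(1)
    s = log_std.gather(1, action.unsqueeze(1)).squeeze(1)
    return (s + 0.5 * ((reward - m) / s.exp()) ** 2).mean()
```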
" Dear Authors.\n\nThe discussion of the differences with the currently available methods of Distributional MARL does not look sufficient, please authors to discuss further in depth the differences with them.\n\nAs the authors further reported the results, I am curious about the difference in performance between Distributional MARL and DRE-MARL, and whether the authors need to conduct more experiments in the future to support the benefits of DRE-MARL?\n",
" We are particularly encouraged that the reviewer finds our method novel and effective. We appreciate the valuable feedback of Reviewer ugkM and respond to the questions and limitations below.\n\n\n### [I]. Explanation of questions\n\n> **[1/2] Q1:** I suggest the authors put the related work in section 1 introduction to section 2.\n\nFor Q1, following your suggestions and at the same time preserving logical integrity, we revise the related work of Section 1 Introduction and Section 2 Related Work.\n\n\n> **[2/2] Q2:** I think the data volume for reward estimator should be very large, so I suggest the authors conducting some experiments to test how much data should be used.\n\n\nWe appreciate your comments on the data volume problem. **We clarify that the data volume used to train reward estimators is not very large**. We do not need to collect extra data from the environment. The total data volume is identical to the baselines. In each training epoch, we train each agent's reward network the same as actor and critic, and the quantities of samples are also the same. We update our model every 100 timesteps with the batch size 1024. Additionally, we report the comparison of the training time (min) in Table 1, which illustrates the training cost is not expensive.\n\n**Table 1:** The comparison of the computational cost of different models based on the CN-3 scenario. The items of the comparison contain three aspects: parameters, physical training time (min), and max memory consumption (MB).\n| comparison items | DRE-MARL (ours) | MAPPO | MAAC | QMIX | MADDPG | IQL\n| -------- | -------- | -------- |-------- | -------- |-------- | -------- |\n| total parameters | 198440 | 72854 | 450364| 307330|172884 |564642 |\n| trainable parameters | 54126 | 72854 | 450364 | 123488 | 86436 | 141147 |\n| untrainable parameters | 144314 | 0 | 0 | 183842 | 86448 | 423495 |\n| physical training time (min) | 256.0±1.859 | 336.0±9.623 | 915.9±6.270 | 867.1±16.54 | 166.9±1.976 | 217.4±2.311 |\n| max memory consumption (MB) | 1354.±60.18 | 310.6±9.623 | 229590.1±65.69 | 2146.5±22.95 | 3754.±227.3 | 668.4±20.18 |\n\n\n\n\n\n### [II]. Explanation of limitations\n\n> **[1/1] L1:** The reward distribution in reward estimation is hard to choose\n\n\nFor limitations, we have discussed them in Section 7. Gaussian distribution is one of the most widely used distributions because of its powerful fitting ability, so we choose it as the form of reward distribution. We admit that choosing the reward distribution is indeed a bit hard in practice. However, just like tanglesome signals can be factorized into the superposition of simple but elementary sinusoidal signals, we may consider using a cluster of basic distributions as reward distribution in future work.\n\n---\n\nWe again thank Reviewer ugkM for reviewing our paper and giving suggestions. We hope our answers have addressed all the concerns the Reviewer has. If so, we would greatly appreciate it if Reviewer ugkM could consider raising their score. Please let us know if there are more questions.\n\nPaper2649 Authors",
" ### [III]. Explanation of limitations\n\n> **[1/1] L1:** I think that the paper could be improved by further discussion of the limitations of DRE-MARL. As far as I can tell only one limitation is briefly mentioned on Page 9, which is that DRE-MARL and reward aggregation are only used in discrete action spaces. It would be good to get further clarity on the following from the authors:\n> - How could DRE-MARL be applied to continuous action spaces?\n> - What limitations does DRE-MARL have that can inspire future work?\n\n\nIn our method, the number of action branches matches the number of available discrete actions, so we can set the value of K to be equal to the number of available discrete actions such as “move forward”, “move backward”, “move left”, “move right”, and “motionless” in MPE. But in continuous action space, for example, we want to manipulate a robot arm. The “grasping” action needs us to assign a continuous value such as rotation angle to the robot arm. Right now, the available action is a range, so we can not define the number of K. \n- One possible solution may be the discretization of the range of the available action value. More sophisticated discretization will bring better manipulation, but it will consume more computational resources at the same time. Although coarse discretization reduces the consumption of physical time, it may hurt the performance.\n- Another possible solution is learning a network with actions as inputs, just like how we convert discrete-action DQN into continuous-action Q network in DDPG.\n\nIn the revised version, we add the above discussion into Section 7 of our paper.\nWe hope the above analysis could provide some inspiration and encourage future works to develop novel reward estimation methods.\n\n---\n\nWe again thank Reviewer oHz3 for reviewing our paper and giving suggestions. We hope our answers have addressed all the concerns the Reviewer has. If so, we would greatly appreciate it if Reviewer oHz3 could consider raising their score. Please let us know if there are more questions.\n\nPaper2649 Authors",
" We are particularly encouraged that the reviewer finds our method novel and effective. We appreciate the valuable feedback of Reviewer oHz3 and respond to the weaknesses, questions, and limitations below.\n\n### [I]. Explanation of strengths and weaknesses \n\n> **[1/3] W1:** The authors could have made better use of figure captions (in Figure 2 and Figure 3 specifically) to make it easier for the reader to understand the relevant messages conveyed by the figures.\n\n\nWe really appreciate your comments, and we will add more explanations in the captions of Figure 2 and Figure 3 to make it easy for the readers. For more details, please refer to Figure 2 and Figure 3 in the main context.\n\n\n> **[2/3] W2:** The authors could have provided more detail on the limitations of their method (discussed below as well)\n\n\nWe merge the responses to **W2**, **W3**, and limitations, and we put them in part of **III.L1**.\n\n\n> **[3/3] W3:** The authors could have provided more detail on the limitations of their method and put it into the broader context of MARL methods (discussed below as well).\n\n\nWe merge the responses to **W2**, **W3**, and limitations, and we put them in part of **III.L1**.\n\n\n\n### [II]. Explanation of questions\n\n> **[1/5] Q1:** \n> - A is not capitalized in title \"Multi-agent\"\n> - first page says 35th Neurips, which is 2021\n> - Line 173-175 (page 5) have some spelling errors (\"interplay grows exponentially with an increase of the agent number\")\n\n\nAccording to your suggestions, all the typos are revised. Thank you very much for pointing them out.\n\n\n> **[2/5] Q2:** Could you clarify why you choose a centralized critic for your method? Is this related to the environment setting (i.e. the environment only provides a single reward for all agents)? Can you see your method working with a de-centralized critic? what would change if anything?\n\n \nFor Q2, centralized critic is commonly used as in MADDPG and MAAC. Our algorithm can be combined with decentralized critic. For example, it is feasible to design decentralized critic if there are only two agents in certain environments. However, when the number of agents is relatively large, equipping every agent with a critic brings a lot more computational cost and possible instability in training.\n\n\n> **[3/5] Q3:** Could you say more about how your method might perform in competitive MARL settings? Right now you have looked at cooperative settings and it would be interesting to contrast that with competitive settings. (No new experiments needed, mainly looking for additional detail)\n\n\nFor Q3, although we only test the performance in cooperative settings, our method can also be used in competitive environments. When we put this method in competitive scenarios, we should figure out that the reward uncertainty comes not only from the teammate-agent group's mutual interaction and natural disturbance but also from the opposite-agent group. This problem brings more challenges to reward estimation, which may be resolved by separating the reward estimation into teammate-agent group reward estimation and opposite-agent group reward estimation, followed by sophisticated reward aggregation. The overall process flow under competitive settings is the same as in cooperative settings (i.e., reward estimation followed by reward aggregation.).\n\n\n> **[4/5] Q4:** It seems like p2p-MARL is the most competitive method to DRE-MARL and also uses reward estimation. 
Could you clarify the differences between p2p-MARL and DRE-MARL?\n\n\nFor Q4, the differences between DRE-MARL and p2p-MARL are as follows: 1) DRE-MARL estimates reward distributions on all action branches, while p2p-MARL estimates the reward value on only one action branch, i.e., the branch of the action actually taken during the training process. 2) DRE-MARL performs reward aggregation after multi-action-branch reward estimation, while p2p-MARL has no reward aggregation.\n\n\n> **[5/5] Q5:** How did you choose k for the distributional estimation? Was there a significant difference between different values of k?\n\n\nIn environments with a discrete action space, K is equal to the number of available discrete actions, such as “move forward”, “move backward”, “move left”, “move right”, and “motionless” in MPE. When the environment is fixed, the value of K is determined at the same time.",
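For readers skimming this thread, one plausible reading of the policy-weighted aggregation over the K branches discussed above is the following one-liner (a sketch under our assumptions; Equations 2 and 5 in the paper define the actual aggregation):

```python
import numpy as np

K = 5                                         # e.g., the five discrete MPE actions
pi = np.array([0.4, 0.3, 0.1, 0.1, 0.1])      # current policy pi(a | s)
r_hat = np.array([1.0, -0.5, 0.2, 0.0, 0.3])  # rewards sampled from the K estimated branches

aggregated = float(pi @ r_hat)  # expected estimated reward under the policy
```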
" We are particularly encouraged that the reviewer finds our method novel and effective. We appreciate the valuable feedback of Reviewer c9dW and respond to the weaknesses and questions below.\n\n### [I]. Explanation of strengths and weaknesses \n\n> **[1/1] W1:** Lack of discussion and experimental comparison with work related to Distributional MARL\n\n\nWe merge the responses to weaknesses and questions and put the discussion of differences and similarities in the following part.\n\n\n\n### [II]. Explanation of questions\n\n> **[1/1] Q1:** Shouldn't this work be discussed and compared with the related work of Distributional MARL? For example.\n> - DFAC Framework: Factorizing the Value Function via Quantile Mixture for Multi-Agent Distributional Q-Learning\n> - A Distributional Perspective on Value Function Factorization Methods for Multi-Agent Reinforcement Learning\n> - RMIX: Learning Risk-Sensitive Policies for Cooperative Reinforcement Learning Agents\n\n\nThe above three articles you recommended are good papers. They all focus on solving the MARL problem from the distributional perspective and all make their efforts to push a step forward.\nThe similarity between the above papers and our paper is that we all propose solving the uncertainty problem from the distributional perspective.\nThe differences between the above models and our models are shown in the following aspects: \n- They focus on the Q-value uncertainty of the environment and model the distribution on state-action value Q. In contrast, we focus on the reward uncertainty of the environment and model the distribution of reward R. \n- They consider the distributions of all agents by unique mixture methods such as quantile mixture in DFAC. In contrast, we consider the distributions of all action branches by reward aggregation.\n\nIn the revised version, we add the above discussion into Section 2 of our paper.\n\nDue to the time limitations, we just run the DFAC at the CN-3 scenario. The results are shown in Table 3. We conduct the experiment in CN-3 with DFAC variants which is based on SC II originally, and adopt the original hyperparameters of DFAC variants. The performance of DFAC variants is not good because we do not adjust the hyperparameters carefully due to time limitations. The DFAC-diql(128) denotes the number of hidden neural is 128.\n\n\n\n\n**Table 3:** Performance comparison of DRE-MARL variants and DFAC variants.\n| model | performance |\n| -------- | -------- |\n| DRE-MARL with $l_{SS}+g_{SS}$ | -272.7±25.22 |\n| DRE-MARL with $l_{SMO}+g_{MO}$ | -235.1±16.38 |\n| DRE-MARL with $l_{MO}+g_{MO}$ | -242.1±16.15 |\n| DRE-MARL with $l_{MO}+g_{SS}$ | -252.6±17.49 |\n| DFAC-diql(128) | -786.9±180.2 |\n| DFAC-diql(256) | -1117.9±21.38 |\n| DFAC-dmix(128) | -1121.6±21.51 |\n\n\n---\n\nWe again thank Reviewer c9dW for reviewing our paper and giving suggestions. We hope our answers have addressed all the concerns the Reviewer has. If so, we would greatly appreciate it if Reviewer c9dW could consider raising their score. Please let us know if there are more questions.\n\nPaper2649 Authors",
" **Table 2:** Performance comparison of DRE-MARL with and without regularization term $L_R$ while training with the team rewards and evaluating without the r_ac−dist setting.\nThe values represent mean episodic rewards.\n| comparison items | model |\n| -------- | -------- |\n| $l_{SS}+g_{SS}$ with $L_R$ | **-272.7±25.22** |\n| $l_{SS}+g_{SS}$ without $L_R$ | -274.6±19.13 |\n| $l_{SMO}+g_{MO}$ with $L_R$ | **-235.1±16.38** |\n| $l_{SMO}+g_{MO}$ without $L_R$ | -365.0±25.19 |\n| $l_{MO}+g_{MO}$ with $L_R$ | **-242.1±16.15** |\n| $l_{MO}+g_{MO}$ without $L_R$ | -258.9±17.36 |\n| $l_{MO}+g_{SS}$ with $L_R$ | **-252.6±17.49** |\n| $l_{MO}+g_{SS}$ without $L_R$ | -354.2±23.47 |\n| no reward estimation | -258.2±20.75 |\n\n\n\n\n### [III]. Explanation of limitations\n\n> **[1/1] L1:** The authors have discussed the possible limitations of their work in the last paragraph of the submission.\n\n\nWe have discussed the limitations of our method in Section 7, and these problems will be investigated in future works.\n\n---\n\nWe again thank Reviewer JEsw for reviewing our paper and giving suggestions. We hope our answers have addressed all the concerns the Reviewer has, especially the ablation of the regularization term $L_R$ and no reward estimation. If so, we would greatly appreciate it if Reviewer JEsw could consider raising their score. Please let us know if there are more questions.\n\nPaper2649 Authors",
" We are particularly encouraged that the reviewer finds our method novel and effective. We appreciate the valuable feedback of Reviewer JEsw and respond to the weaknesses, questions, and limitations below.\n\n### [I]. Explanation of strengths and weaknesses \n\n> **[1/1] W1:** To my best knowledge, the idea of estimating the reward distribution and using the aggregated rewards to optimize the policy is novel, but I get no sense how significant reward estimation is in the multi-agent reinforcement learning domain. Ablation studies on removing the reward estimation module could answer this doubt.\nThe writing of this paper is mostly clear. In addition, this paper has conducted thorough comparative experiments to compare with state-of-the-art multi-agent algorithms in the particle experiment domain. A more challenging multi-agent benchmark is Starcraft 2. The authors are encouraged to do more experiments in this domain.\n\n\n\nStarCraft II (SCII) is indeed a more challenging multi-agent benchmark. We are happy to evaluate our method in SCII, but it is hard to get results within the short rebuttal period. We will conduct the investigation of more challenging tasks such as SCII and put the experiments of SCII in future works.\n\n\n\n### [II]. Explanation of questions\n\n> **[1/3] Q1:** Section 6.3 has conducted ablation studies on different reward estimation methods. I wonder what will happen when no reward estimation is conducted (just using the external rewards to update actor and critic networks). This experiment could better demonstrate the utility of reward estimation in multi-agent reinforcement learning.\n\n\nFor Q1, due to time limitations, we evaluate the performance of our model with just external reward signals on CN-3. **The experimental results are shown in the last line of Table 2** (in the answer to Q3), which shows that **only using external rewards indeed will hurt the performance**.\n\n\n\n> **[2/3] Q2:** This paper refers to the proposed framework as “two-stage” learning (Line 162). Is the reward estimation conducted before the reward estimation? If so, where does the data for estimating the reward distribution come from? Or those two stages are conducted concurrently? This point should be clarified.\n\n\nFor Q2, we do the reward estimation and the reward aggregation **sequentially in every training epoch** rather than doing all the reward estimation at once before all the reward aggregation. The data set used to estimate reward distributions comes from the replay buffer. As illustrated in Algorithm 1, a more intuitionistic but detailed process of one training epoch is as follows: 1) We interact with the environment and deposit the transitions in the replay buffer. 2) We sample a batch of transitions B from the replay buffer. 3) These transitions will be used to train reward estimators first. Then we again use trained reward estimators to predict reward distributions $\\hat{R}$ of all action branches with states that are stored in transitions B. 4) We sample rewards $\\hat{r}$ from $\\hat{R}$ and deposit $\\hat{r}$ and environmental rewards that are stored in transitions B together. 5) Perform reward aggregation with Equation 2 and Equation 5. 6) Update the critic and the actors.\n\n\n\n> **[3/3] Q3:** Why is the regularization term $L_R$ needed in the reward estimation objective? What if this term is removed?\n\n\nFor Q3, we add the regularization term $L_R$ out of consideration for training stability. If this term is removed, the policy learning process may be influenced. 
For example, a larger variance of $\\pmb{\\mu}$ will make the aggregated rewards more fiercely, affecting the critic's updating. A more significant $\\sigma$ value will lead to a smaller gradient of Equation 1, which will affect the updating of the reward estimators. To illustrate the effect of $L_R$, we evaluate our model without $L_R$ on the CN3 scenario, and the results are shown in Table 2. From Table 2, we can see that **removing the regularization term $L_R$ will hurt the performance compared with the corresponding \"with $L_R$\" models**.",
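The six-step training epoch described in the answer to Q2 above can be summarised in code. The following Python sketch is purely illustrative: every name (the `rollout` and `aggregate` callables, the replay buffer, estimator, critic, and agent interfaces) is a hypothetical placeholder, not the authors' actual implementation.

```python
import numpy as np

def train_epoch(env, agents, reward_estimators, critic, replay_buffer,
                rollout, aggregate, batch_size=64):
    """Hypothetical sketch of one DRE-MARL-style training epoch."""
    # 1) Interact with the environment and store the transitions.
    replay_buffer.extend(rollout(env, agents))
    # 2) Sample a batch of transitions B from the replay buffer.
    batch = replay_buffer.sample(batch_size)
    # 3) Train the reward estimators on B, then predict the reward
    #    distributions R_hat of all action branches for the batch states.
    for estimator in reward_estimators:
        estimator.fit(batch.states, batch.actions, batch.rewards)
    r_dists = [est.predict(batch.states) for est in reward_estimators]
    # 4) Sample rewards r_hat from R_hat, keeping the environmental rewards.
    r_sampled = [dist.sample() for dist in r_dists]
    # 5) Policy-weighted reward aggregation (Equations 2 and 5 of the paper).
    policy_probs = np.stack([agent.policy(batch.states) for agent in agents])
    r_agg = aggregate(r_sampled, batch.rewards, weights=policy_probs)
    # 6) Update the centralized critic and the decentralized actors.
    critic.update(batch, r_agg)
    for agent in agents:
        agent.update(batch, critic)
```

Passing `rollout` and `aggregate` in as arguments keeps the sketch self-contained while leaving the environment interaction and the specific aggregation rule (e.g. $l_{SS}+g_{SS}$ vs. $l_{MO}+g_{MO}$) unspecified.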
" ### [II]. Explanation of limitations\n\n> **[1/1] L1:** The authors have discussed the limitations and broader impacts of their work in section 7.\nDRE-MARL requires a prior assumption about the form of reward distribution. Although the distribution usually can be chosen arbitrarily, it may hurt performance when we choose a very complex distribution.\nBesides, reward aggregation is only used in discrete action space. If would be better if the method can be extented to more general continuous action space.\n\n\nFor limitations, we have discussed them in Section 7 of this paper. We admit these limitations indeed hinder our model from being used in some other scenes, such as continuous action space. However, these problems may be resolved in the future. For example, one can discretize the range of action value in a continuous action space. The assumption of reward distribution limitation may be solved by using a cluster of basic distributions as reward distribution.\n\n---\n\nWe again thank Reviewer TJjg for reviewing our paper and giving suggestions. We hope our answers have addressed all the concerns the Reviewer has. If so, we would greatly appreciate it if Reviewer TJjg could consider raising their score. Please let us know if there are more questions.\n\nPaper2649 Authors",
" We are particularly encouraged that the reviewer finds our method novel and effective. We appreciate the valuable feedback of Reviewer TJjg and respond to the questions and limitations below.\n\n### [I]. Explanation of questions \n\n> **[1/3] Q1:** Line 55, \"If we can obtain the potential rewards on other action branches, we will perform more stable critic updating and thus achieve better performance.\" Is there any assumption of requirement on the design of action branches to make this sentence correct? What if the other actions are opposite the the current action of a specifict agent?\n\n\nWe express our gratitude for your comments, and we will revise this sentence to avoid potential ambiguity. We want to clarify that we actually do not need extra assumptions. Although inaccurate reward estimation may exist in certain action branches during training, our model possesses relative robustness due to the policy-weighted reward aggregation. Because the strategy of policy-weighted reward aggregation puts larger weights on some action branches where the corresponding actions will also be performed more frequently during the interaction, so we can obtain more samples and be more confident that the reward estimation is relatively accurate in practice. As for the second doubt of Q1, “What if the other actions are opposite the current action of a specific agent?”, we do not see any specific scene that matches the foregoing situation. Despite the fact that the opposite actions may appear, we can still obtain the corresponding rewards on that opposite action branch as long as the policy endows them with a certain probability. We have revised this sentence in the revised edition.\n\n\n> **[2/3] Q2:** What is the motivation for the design of different aggregation weights?\n\n\nThe reason for adopting policy-weighted reward aggregation is that the policy-weighted reward aggregation strategy can better reflect the current policy's expected return, enabling the agent to evaluate its historical experiences thoughtfully. Additionally, in Equation 2 and Equation 5, we adopt various aggregation methods to evaluate our model in different and frequently-used aggregation settings, which enables us to assess our method comprehensively in the experiments.\n\n\n> **[3/3] Q3:** DRE-MARL requires individual reward estimator for each agent. How much is the computational cost gain of DRE-MARL compared with other SOTA-MARL methods?\n\n\nIn order to evaluate the computational cost of our model, we compare it with other models from three aspects: 1) parameters, 2) physical training time (min), and 3) max memory consumption (MB), where we subdivide parameters into total parameters, trainable parameters, and untrainable parameters. The testing is conducted on the CN-3 scenario.\nThe physical training time (min) is tested based on CPUs without GPUs, and we run the codes with one thread. The type of CPU is Intel(R) Xeon(R) Gold 6230 CPU @ 2.10 GHz. \nThe results are shown in Table 1, which shows that the computational cost of our model is competitive or even more efficient in practice, such as the number of trainable parameters.\n\n\n**Table 1:** The comparison of the computational cost of different models based on the CN-3 scenario. 
The items of the comparison contain three aspects: parameters, physical training time (min), and max memory consumption (MB).\n| comparison items | DRE-MARL (ours) | MAPPO | MAAC | QMIX | MADDPG | IQL\n| -------- | -------- | -------- |-------- | -------- |-------- | -------- |\n| total parameters | 198440 | 72854 | 450364| 307330|172884 |564642 |\n| trainable parameters | 54126 | 72854 | 450364 | 123488 | 86436 | 141147 |\n| untrainable parameters | 144314 | 0 | 0 | 183842 | 86448 | 423495 |\n| physical training time (min) | 256.0±1.859 | 336.0±9.623 | 915.9±6.270 | 867.1±16.54 | 166.9±1.976 | 217.4±2.311 |\n| max memory consumption (MB) | 1354.±60.18 | 310.6±9.623 | 229590.1±65.69 | 2146.5±22.95 | 3754.±227.3 | 668.4±20.18 |\n",
" This paper focus on addressing the problem of high reward uncertainty in multi-agent reinforcement learning (MARL). To address this problem, this paper proposes a new Distributional Reward Estimation (DRE) framework, which is composed of multi-action-branch reward estimation and policy-weighted reward aggregation for stable training. The full method, DRE-MARL is built on top with the architecture of centralized training and decentralized evaluation (CTDE), which consists of N decentralized actors and a centralized critic. Empirically, the proposed method is evaluated in several MARL environments, i.e. cooperative navigation, reference, and treasure collection with different number of agents. Compared with other SOTA MARL methods, DRE-MARL achives the best performance in most of the environments using three types of reward settings. ==Originality==\n\nThe proposed DRE-MARL is novel and interesting. The main idea is to develop distributional reward estimation followed by policy-weighted reward aggregation for MARL. This idea is intuitively similar to human's decision making process that considers all possible consequences of all action branches. The connections and differences of this work and previous work are well discussed and the related work are cited in the paper. \n\n\n==Quality==\n\nThe proposed method is technically sound. The experiments are conducted in diverse settings with ablations, and the results well supports the claims. Although there are some limitations, this work is a complete piece of work in addressing the reward uncertainty problem in MARL. \n\n==Clarity==\n\nThis paper is clearly written and well organized. The problem is well formulated with a clear intruduction of the cause of reward uncertainty. The method is clearly motivated and introduced. The limitations and potential impacts are also discussed. \n\n==Significance==\n\nThis paper provides a novel distributional reward estimation method to address the high reward uncertainty in multi-agent reinforcement learning. The proposed method is novel and can be applied in other MARL methods. \n 1. Line 55, \"If we can obtain the potential rewards on other action branches, we will perform more stable critic updating and thus achieve better performance.\" Is there any assumption of requirement on the design of action branches to make this sentence correct? What if the other actions are opposite the the current action of a specifict agent? \n\n2. What is the motivation for the design of different aggregation weights?\n\n3. DRE-MARL requires individual reward estimator for each agent. How muc is the computational cost gain of DRE-MARL compared with other SOTA-MARL methods? The authors have discussed the limitations and broader impacts of their work in section 7. \n\n==Limitations==\n1. DRE-MARL requires a prior assumption about the form of reward distribution. Although the distribution usually can be chosen arbitrarily, it may hurt performance when we choose a very complex distribution.\n2. Besides, reward aggregation is only used in discrete action space. If would be better if the method can be extented to more general continuous action space. \n\n==Broader impacts==\nThe authors does not see any negative societal impacts of this work while using the proposed method in practice.\n",
" Reward uncertainty is a longstanding problem in multi-agent reinforcement learning, which stems from two aspects: the natural uncertainty in the MDP environment, and the actions of other agents. To deal with the reward uncertainty problem, this paper proposes a new framework for estimating the reward distribution and aggregating the estimated reward distribution. Then the proposed method uses the aggregated rewards to update the centralized critic network and the decentralized actor networks. Experiments in the particle domains demonstrate the effectiveness of the proposed method. To my best knowledge, the idea of estimating the reward distribution and using the aggregated rewards to optimize the policy is novel, but I get no sense how significant reward estimation is in the multi-agent reinforcement learning domain. Ablation studies on removing the reward estimation module could answer this doubt.\n\nThe writing of this paper is mostly clear. In addition, this paper has conducted thorough comparative experiments to compare with state-of-the-art multi-agent algorithms in the particle experiment domain. A more challenging multi-agent benchmark is Starcraft 2. The authors are encouraged to do more experiments in this domain.\n Section 6.3 has conducted ablation studies on different reward estimation methods. I wonder what will happen when no reward estimation is conducted (just using the external rewards to update actor and critic networks). This experiment could better demonstrate the utility of reward estimation in multi-agent reinforcement learning.\n\nThis paper refers to the proposed framework as “two-stage” learning (Line 162). Is the reward estimation conducted before the reward estimation? If so, where does the data for estimating the reward distribution come from? Or those two stages are conducted concurrently? This point should be clarified. \n\nWhy is the regularization term $L_R$ needed in the reward estimation objective? What if this term is removed? \n The authors have discussed the possible limitations of their work in the last paragraph of the submission. ",
" Considering that high reward uncertainty remains a problem, the authors propose a novel distributed reward estimation framework to enhance multi-gent reinforcement learning. The training process is stabilized by designing multi-action-branch reward estimation and policy-weighted reward aggregation. Multi-action branch reward estimation is first employed to model the reward distribution of all action branches, and then reward aggregation is used to obtain a stable update signal during training. And its effectiveness is demonstrated experimentally. Strengths.\n* paper writing is well understood\n* The pictures drawn can help understanding\n* The motivation is clear and seems to make sense\n* table-1 looks like a lot of experiments were done\n* Ablation experiments also look interesting\n\n\nWeaknesses.\n* Lack of discussion and experimental comparison with work related to Distributional MARL\n * Shouldn't this work be discussed and compared with the related work of Distributional MARL? For example. \n * DFAC Framework: Factorizing the Value Function via Quantile Mixture for Multi-Agent Distributional Q-Learning\n * A Distributional Perspective on Value Function Factorization Methods for Multi-Agent Reinforcement Learning\n * RMIX: Learning Risk-Sensitive Policies for Cooperative Reinforcement Learning Agents\n None",
" The paper outlines a new method for multi-agent reinforcement learning settings (DRE-MARL) which primarily focuses on reward estimation for multi-agent reinforcement learning settings. The authors first describe the problem settings and motivate why reward estimation is important and challenging in multi-agent reinforcement learning and how DRE-MARL differs from prior approaches, including reward uncertainty method and reward estimation methods in single-agent reinforcement learning. According to the authors' summary, many reward estimation methods struggle in multi-agent setting due to the additional complexity and greater sources of uncertainty. The authors identify mutual interaction between agents and natural disturbance of the environment as two sources of uncertainty in MARL settings and propose a method simulate reward uncertainty by applying stochastic processes.\n\nSubsequently the authors outline their method (DRE-MARL) for reward estimation based on a distribution of potential action a single agent can take in a MARL setting and also describe different reward aggregation methods that are tested in the experiments. The authors then compare their method against different MARL baselines across a set of collaborative MARL environments showing mostly outperformance and conduct an ablation study for various reward aggregation schemes. **Originality:**\n- Strengths: The paper proposes a new method for reward estimation in MARL settings that differs from prior approaches and shows good performance compared to baseline methods. Relevant work is cited and compared against in the beginning of the paper.\n- Weaknesses: The authors could have provided more detail on the limitations of their method (discussed below as well)\n\n**Quality:**\n- Strengths: The methods and contributions are generally well supported in the experiments and discussed in detail in the paper.\n\n**Clarity:**\n- Strengths: The paper is generally well written and well organized with relevant equations, diagrams and descriptions.\n- Weaknesses: The authors could have made better use of figure captions (in Figure 2 and Figure 3 specifically) to make it easier for the reader to understand the relevant messages conveyed by the figures.\n\n**Significance:**\n- Strengths: The paper proposes a new method in a relevant subject area of MARL.\n- Weaknesses: The authors could have provided more detail on the limitations of their method and put it into the broader context of MARL methods (discussed below as well).\n\n\n **Nits**\n- A is not capitalized in title \"Multi-agent\"\n- first page says 35th Neurips, which is 2021\n- Line 173-175 (page 5) have some spelling errors (\"interplay grows exponentially with an increase of the agent number\")\n\n**General Questions:**\n- Could you clarify why you choose a centralized critic for your method? Is this related to the environment setting (i.e. the environment only provides a single reward for all agents)? Can you see your method working with a de-centralized critic? what would change if anything?\n- Could you say more about how your method might perform in competitive MARL settings? Right now you have looked at cooperative settings and it would be interesting to contrast that with competitive settings. (No new experiments needed, mainly looking for additional detail)\n- It seems like p2p-MARL is the most competitive method to DRE-MARL and also uses reward estimation. 
Could you clarify the differences between p2p-MARL and DRE-MARL?\n- How did you choose k for the distributional estimation? Was there a significant difference between different values of k? I think that the paper could be improved by further discussion of the limitations of DRE-MARL. As far as I can tell only one limitation is briefly mentioned on Page 9, which is that DRE-MARL and reward aggregation are only used in discrete action spaces. It would be good to get further clarity on the following from the authors:\n- How could DRE-MARL be applied to continuous action spaces?\n- What limitations does DRE-MARL have that can inspire future work?",
" this paper propose distributional reward estimation for multi-agent reinforcement learning (DRE-MARL). this paper focuses on the problem of reward uncertainty in MARL. The main idea of this paper is to design the multi-action-branch reward estimation and policy-weighted reward aggregation for stabilized training. This former part is simply function approximation with historical data, while the latter part is weighted reward aggregation. Experiments show that DRE-MARL outperforms other SoTA algorithms comprehensively. Strengths:\n\nit is novel to solve the reward uncertainty problem in MARL\n\npolicy-weighted reward aggregation enables stable training of the critic and actors and it is quite robust. \n\n I suggest the authors put the related work in section 1 introduction to section 2. \n\nI think the data volume for reward estimator should be very large, so I suggest the authors conducting some experiments to test how much data should be used. the reward distribution in reward estimation is hard to choose"
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
6,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
3,
4,
2
] | [
"fnoll44TjEG",
"nGRdOL16rc",
"4p5iuiMAd3c",
"yim4V_Rvaq9",
"3P-F1jVXfA",
"C2hK-kKk2h",
"T3u684HmJkG",
"VsQQMAAwNOm",
"nHT7i7-vkAf",
"VIxb9nPV9Uj",
"9qwY83DsjpH",
"Nc1HN_DDxwB",
"fnoll44TjEG",
"F_Ct61PngUK",
"nGRdOL16rc",
"nips_2022_4qR780g2Mg",
"nips_2022_4qR780g2Mg",
"nips_2022_4qR780g2Mg",
"nips_2022_4qR780g2Mg",
"nips_2022_4qR780g2Mg"
] |
nips_2022_9-SZkJLkCcB | KSD Aggregated Goodness-of-fit Test | We investigate properties of goodness-of-fit tests based on the Kernel Stein Discrepancy (KSD). We introduce a strategy to construct a test, called KSDAgg, which aggregates multiple tests with different kernels. KSDAgg avoids splitting the data to perform kernel selection (which leads to a loss in test power), and rather maximises the test power over a collection of kernels. We provide theoretical guarantees on the power of KSDAgg: we show it achieves the smallest uniform separation rate of the collection, up to a logarithmic term. For compactly supported densities with bounded score function for the model, we derive the rate for KSDAgg over restricted Sobolev balls; this rate corresponds to the minimax optimal rate over unrestricted Sobolev balls, up to an iterated logarithmic term. KSDAgg can be computed exactly in practice as it relies either on a parametric bootstrap or on a wild bootstrap to estimate the quantiles and the level corrections. In particular, for the crucial choice of bandwidth of a fixed kernel, it avoids resorting to arbitrary heuristics (such as median or standard deviation) or to data splitting. We find on both synthetic and real-world data that KSDAgg outperforms other state-of-the-art quadratic-time adaptive KSD-based goodness-of-fit testing procedures. | Accept | The paper proposes a novel method of statistical tests with Kernel Stein Discrepancy, aggregating multiple tests with different kernels. The method can avoid data splitting, which is commonly used to choose a kernel aiming at better power but may not be effective with a smaller sample size. The paper gives theoretical analysis, and also experimental results outperforming other relevant methods. The work gives solid theoretical and methodological advances in the field of kernel-based tests. We think the work is worth being presented in NeurIPS. | train | [
"9vq-EgGFg6L",
"uW15Vgw2Cd",
"04heoo8p-MS",
"5Xq5-8ofPaC",
"nSU-N0P4-6",
"o_Voi5JcJnQ",
"Z6FrpioBpzZ",
"7c56NiCrdzg",
"Oskp49RIfXY",
"UEDkRUpzPP",
"X7MURglWkUa",
"fpUlJ3r_RYB",
"LXuucxdJHyO",
"R1y1pUORQ3a"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank reviewer ZqFj for their reply, and for increasing their score! \n\nWe will follow their suggestion and include a discussion of the advantages of the multiple testing strategy used against the classical Bonferroni correction.\n\nYes, KSDAgg selects the bandwidth 0.002 and split extra selects 2437. Split extra selects the bandwidth which maximizes \n$$\n\\widehat{\\textrm{KSD}}^{p,\\lambda} / \\widehat\\sigma_{\\lambda}\n$$\nwhere $\\widehat\\sigma_{\\lambda}^2$ is a regularised positive estimator of the asymptotic variance of $\\widehat{\\textrm{KSD}}_{p,\\lambda}$ under the alternative, as explained lines 195/196. Maximizing this criterion is equivalent to maximizing asymptotic power, as was shown by D. J. Sutherland et al., 2017 (Generative Models and Model Criticism via Optimized Maximum Mean Discrepancy) for the MMD; the same result holds straightforwardly for the KSD due to similar asymptotic properties. However, this criterion only maximizes **asymptotic** power and has no guarantee when using limited data. In our high-dimensional setting ($d=784$) with small sample size $N\\leq 500$, the asymptotic regime is clearly not reached, and the criterion used for bandwidth selection does not maximize power in this **non-asymptotic** setting. So, even though split extra has access to some extra data, it does not have an accurate criterion to select the bandwidth and ends up selecting the largest bandwidth, which is not well-adapted to the problem. This explains why such a big difference in bandwidths is observed. We will clarify this finding in the main paper.\n\nWe also point the reviewer to our general comment **For all reviewers: Updated version of the paper (Appendix D)**, we have now updated the paper showing minimax optimality and adaptivity of KSDAgg in the setting in which the densities are compactly supported.\n",
" We thank reviewer CDhe for their response and for increasing their score! \n\nRegarding Q1, we have now provided an updated version of the paper where we consider in Appendix D the setting in which the densities have compact support, this includes for example d-dimensional isotropic Gaussians truncated to some compact subset of $\\mathbb{R}^d$. Please see **For all reviewers: Updated version of the paper (Appendix D)** and to the updated paper for details.\n\nQ1 asked about the behaviour of the two terms in the condition of Theorem 3.1 with respect to dimension.\nRecall that the first term $(1)=\\|\\psi-T_{h_{p,k}} \\psi\\|_2^2$ indicates the size of the effect of the Stein operator on the difference in densities $\\psi = p-q$, and is a measure of distance from the null (where this quantity is zero), and that the second term $(2)=\\log(1/\\alpha) \\frac{\\sqrt{C_k}}{\\beta N}$ is obtained from upper bounding the variance of the KSD $U$-statistic and the quantile as explained in the proof of Theorem 3.1. \n\nOur analysis in Appendix D shows the dependence of the two terms on the dimension with respect to the bandwidth. As shown in Equation (16) with bandwidth $\\lambda\\leq 1$, the first term $(1)$ gives $\\lambda^{2s}$ where $s$ is the smoothness parameter of the Sobolev ball while term $(2)$ gives $\\lambda^{-d/2}$. For the first one, the bandwidth component has no dependence on dimension, due to the Sobolev assumption. The second term increases with dimension as $\\lambda\\leq 1$. This reasoning holds for bandwidths independent of the dimension. In order to obtain the minimax rate we need to set the bandwidth depending on the dimension, (i.e. $\\lambda=N^{-2/(4s+d)}$). \n",
" We thank reviewer ys6t for suggestion 2, we will clarify this in the final version of the paper.\n\nRegarding comment 3, we point the reviewer to our comment **For all reviewers: Updated version of the paper (Appendix D)** and to the updated paper which now mathematically presents concrete examples for which the conditions in Theorem 3.1 / 3.3 are satisfied, and derives minimax optimal rates over Sobolev balls. Our results hold for any strictly positive compactly supported densities (with continuous score function for the model): this includes for example d-dimensional isotropic Gaussians truncated to some compact subset of $\\mathbb{R}^d$. This makes precise the comment we had provided in lines 175--176.\n",
" We thank all reviewers for the engaging discussions. Reviewers asked for specific cases satisfying the conditions of Theorem 3.1 and 3.3. In response, we have achieved significantly stronger results than originally promised in the rebuttal text: we have proved minimax optimality and adaptivity results when assuming only compactness of the support of the densities and continuity of the model score function, which we believe is a strong addition to our theoretical results.\n\nThese new results are presented in Appendix D of the updated submission (this will be included in the main text for the final version of the paper). Specifically: under the abovementioned assumption that the densities are compactly supported, we have been able to derive the uniform separation rate over Sobolev balls for the KSD and KSDAgg tests using a Gaussian kernel. The rate is minimax optimal for the KSD test using a bandwidth depending on the unknown smoothness parameter of the Sobolev ball. KSDAgg is adaptive to this unobserved smoothness parameter and achieves the minimax rate up to an iterated logarithmic factor. The proof is based on noting that, in the compactly supported setting, the Stein kernel can be upper bounded by a Gaussian kernel which is translation-invariant. For a translation-invariant kernel, the integral transform is a convolution, which allows working in the Fourier domain through Plancherel’s Theorem, hence our choice of Sobolev balls in characterising the separation rate. We remark that the proof does not follow directly from the earlier result 'Schrab et al., 2021, MMD Aggregated Two-Sample Test': an additional term (L2-norm of the integral transform (convolution) of the difference in densities) needs to be dealt with.\n",
" Thank you for your detailed replies and clarifications. I'm happy with the clarifications you have made, particularly with regards the Bonferroni query. Based on your replies to me and to other authors I will increase my score.",
" Thank you for your well-presented response. A couple of points:\n\n2. It will be good to state that the bounds for sample split are unknown, rather than claiming that they are guaranteed to be worse (at least that's the impression I currently get from your paper).\n\n3. I am not sure I fully follow your comment---will a concrete example be mathematically presented for when the conditions in Theorem 3.1 / 3.3 are satisfied, e.g., given a certain mean separation in d-dimensions isotropic Gaussians? Or would just a numerical experiment be conducted? I did read your response to CDhe, and it seems that you are planning to only do experiments. If that is the case, it would be please clarify that it is non-trivial to derive any explicit theoretical result/conditions for your Theorems.\n\n\nFor all other comments, I am assuming you would add clarifications, and make statements more precise (e.g., 5) in your revision.\n",
" I really appreciate the author's detailed response, which has addressed most of my concerns. The explanation of Q3 is quite useful to understand this method. Maybe consider adding this discussion to the main paper?\n\nIf I understood correctly, for Q5, you said the KSDAgg select the bandwidth 0.002 but split extra select 2437? Why there is such a big difference even with limited data?\n\nI have increased my score after reading the author's response. ",
" We thank reviewer ZqFj for their questions and suggestions. We hope the discussion in the response to all reviewers (see box \"Comments to all reviewers\" above) have addressed the concerns expressed by the reviewer. We welcome the suggestion of adding a more detailed background section in the appendix, which will be reflected in the final version.\n\n**Q1:** See response to all reviewers above.\n\n**Q2**: The time complexity of KSDAgg is provided in Algorithm 1. Indeed it grows linearly with the number of kernels, quadratically with the sample size, and linearly with the number of bootstrap samples.\n\nMNIST Normalizing Flow (average of 10 run times with wild bootstrap)\n\n$n=100$, KSDAgg: 0.0372s, Median: 0.0046s, Split: 0.0217s, Split Extra: 0.0233s\n\n$n=200$, KSDAgg: 0.0841s, Median: 0.0099s, Split: 0.0637s, Split Extra: 0.0698s, \n\n$n=300$, KSDAgg: 0.1622s, Median: 0.0198s, Split: 0.1320s, Split Extra: 0.1452s, \n\n$n=400$, KSDAgg: 0.2759s, Median: 0.0336s, Split: 0.2298s, Split Extra: 0.2534s\n\n$n=500$, KSDAgg: 0.4212s, Median: 0.0510s, Split: 0.3585s, Split Extra: 0.3947s\n\nThe time complexity of KSDAgg is $\\mathcal{O}(\\mid \\Lambda\\mid (B_1+B_2) N^2)$ and the one for median KSD is $\\mathcal{O}(B_1 N^2)$ where $\\mid \\Lambda\\mid = 21$ and $B_1=B_2=500$. While KSDAgg takes roughly 10 times longer to run than KSD median, we could have expected a larger difference looking at the time complexities. This can be explained by the fact that there are two major time-consuming steps: (i) computing the kernel matrices and (ii) computing the wild bootstrap samples. While (i) has complexity $\\mathcal{O}(N^2)$ and (ii) complexity $\\mathcal{O}(BN^2 + NB^2)$, the constant for step (i) is much larger than the one for step (ii) (which is some matrix multiplication). Note that for KSDAgg to compute the $\\mid \\Lambda\\mid$ kernel matrix, we need to compute the matrix of pairwise distances only once.\n\nWhen splitting the data, the computationally expensive step is to select the bandwidth. All the $\\mid \\Lambda\\mid$ kernel matrices need to be computed as for KSDAgg, which is the expensive step (i). The split test runs only slightly faster than the split extra test, it runs faster than KSDAgg but their run times are of the same order of magnitude.\n\n**Q3:** Given some fixed weights and level there does not exist a single $u_\\alpha$ associated to them. This is the strength of this multiple testing correction. First, note that it can be shown that $u_\\alpha \\geq \\alpha$, this means that this multiple testing strategy is always as powerful as using a Bonferroni correction. Essentially, the Bonferroni correction comes from a union bound, the multiple testing strategy aims to tighten the bound. Here are two extreme examples to provide intuition about the multiple testing strategy. First, assume that $\\ell$ events are all disjoints, then the union bound is tight and both Bonferroni and the method we use yield adjusted levels $\\alpha/\\ell$. Second, assume that all events are the same (or almost the same), then the Bonferroni correction still yields adjusted levels $\\alpha/\\ell$, while multiple testing strategy will give 'adjusted' levels $\\alpha$. \n\nWhen KSDAgg rejects the null, we can check which specific kernels rejected the adjusted tests: this provides the kernels/bandwidths which are well-adapted to the problem, i.e. the \"best\" selection of bandwidths is naturally returned as a side-effect of the test (without requiring data splitting). 
This also contributes to interpretability of the resulting test, for instance if different kernels prioritise different features.\n\n**Q4:** We appreciate the reviewer's suggestions to improve the clarity of the paper. We will follow those suggestions for the final version of this paper.\n\n**Q5:** We assume the reviewer is asking about the MNIST Normalizing Flow experiment where KSDAgg obtains high power while KSD split extra does not. In this setting, the median bandwidth is on average 2437. The collection consists of the median bandwidth scaled by $2^i$ for $i=-20,\\dots,0$. When KSDAgg rejects the null hypothesis, the smallest bandwidth (among others) rejects the single test with adjusted level, note that this bandwidth is $2^{-20}\\cdot 2437 \\approx 0.002$. The bandwidth selected by the by split extra is the largest bandwidth of the collection, that is the median bandwidth (roughly 2437). The proxy used for the bandwidth selection is to maximize asymptotic power! In this high-dimensional setting ($d=784$) with sample sizes smaller than 500, we are clearly not in the asymptotic regime, which explains the low power obtained by KSD split extra.\n\n",
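To make the correction $u_\alpha$ discussed in Q3 concrete, here is a minimal Python sketch of the bisection step, assuming wild bootstrap samples of each statistic under the null have been precomputed. Note one simplification: the actual procedure uses two independent bootstrap sets ($B_1$ for the quantiles, $B_2$ for the level estimate), which this sketch conflates into one array for brevity; all variable names are ours.

```python
import numpy as np

def level_correction(boot_stats, weights, alpha, iters=30):
    """Bisection sketch for the aggregated-test correction u_alpha.

    boot_stats: (L, B) array of bootstrap statistics under the null,
                one row per kernel/bandwidth.
    weights:    (L,) array of positive weights with sum <= 1.
    """
    L, B = boot_stats.shape
    lo, hi = 0.0, 1.0 / weights.max()  # need u * w_l <= 1 for all l

    def estimated_level(u):
        # (1 - u * w_l)-empirical quantile for each kernel l.
        q = np.array([np.quantile(boot_stats[l], 1.0 - u * weights[l])
                      for l in range(L)])
        # Proportion of bootstrap rounds where at least one test rejects.
        return np.mean((boot_stats > q[:, None]).any(axis=0))

    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if estimated_level(mid) <= alpha:
            lo = mid  # the correction can be made less conservative
        else:
            hi = mid
    # By the discussion above, u_alpha >= alpha, so this recovers at
    # least the power of the Bonferroni correction.
    return lo
```

The estimated level is nondecreasing in $u$ (larger $u$ means smaller quantiles), which is what makes the bisection valid.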
" We warmly thank reviewer CDhe for praising the soundness, clarity and significance of our work!\n\nPlease see also the response to all reviewers above.\n\n**Comment**: *When the weights are equal to $1/N$ (where $N$ is the number of tests), this is essentially a Bonferroni-type correction for aggregating finitely many independent tests.* Actually, this is always at least as powerful as a Bonferroni-type correction. It can be shown that when the weights are $1/N$, then the correction $u \\geq \\alpha$, which means that the test will always reject the null when Bonferroni correction would reject it. Essentially, Bonferroni correction uses a loose union bound argument and the method used is trying the tighten this loose upper bound. For an extreme example illustrating the difference, imagine that all the kernels in the collection are the same, then a Bonferroni correction would be $\\alpha/N$ while the method used would give level $\\alpha$ as there is nothing to correct for since all the kernels are the same.\n\n**Q1**: The behaviour of the statistic as a function of dimension is subtle: in particular, increasing dimension might make the problem harder *or* easier. To illustrate with earlier work in the two-sample setting: when two multivariate Gaussians differ along a single dimension by a fixed amount, then increasing the number of dimensions will make the problem harder, and test power will decrease. When two multivariate Gaussians with the same mean differ in variance across all their dimensions, then test power will increase as evidence accumulates with increasing dimension. See Gretton et al (2012a) Figure 5 for both cases. We propose to add a study when p and q are both multivariate Gaussian, covering both scenarios, and to investigate test power with increasing dimension, for the final version, to verify similar behaviour occurs for KSD.\n\n\n\n**Q2:** Without any prior knowledge (which is often the case in practice), we recommend using uniform weights since we do not expect particular bandwidths to be better-suited than others. If the user has some prior knowledge of which bandwidths would be better for the task considered, then higher weights on those bandwidths can be used. Allowing for weights whose sum is strictly smaller than 1 is only for convenience of being able to add a new bandwidth with a new weight without changing the previous weights (for examples with weights $\\frac{6}{\\pi^2\\ell^2}$ for $\\ell\\in\\mathbb{N}\\setminus\\{0\\}$). Multiplying all the weights by a constant simply results in dividing the correction $u_\\alpha$ defined in Equation (5) by the same constant. This means that the product $u_\\alpha w_\\lambda$ remains the same, and hence the definition of the aggregated test is not affected by this sample. For simplicity, in practice, we use weights whose sum is equal to 1. We will add a discussion of the choice of weights in the final version.\n\n**Q3:** Extension to the case of a continuous collection of kernels (indexed by the bandwidth parameter on the positive real line) is a direction for future work, however, this extension is far from trivial. To the best of our knowledge, this has currently never been done without data splitting. Our tests retain high power even with large collections of kernels, and our method also allows to aggregate multiple kernels (Gaussian, Laplace, IMQ, Matérn, etc.) with different parameters, but the extension to a continuous parametrization remains a challenge.\n",
" We thank reviewer ys6t for summarising the strengths of the paper, and for their kind words on the clarity and quality of writing.\n\nPlease also see response to all reviewers above.\n\n**Q1:** The constant $C$ in Theorem 3.1 depends only on $M$ and on $d$. We will add this to the statements of the theorems.\n\n**Q2:** In Theorem 3.1, the dependence is $\\log(1/\\alpha)$. This gives rise in Corollary 3.4 to a dependence $\\log(1/\\alpha w_\\lambda)$. Since we require $\\sum_\\lambda w_\\lambda \\leq 1$, the weights for a collection $\\{\\lambda_1,\\dots,\\lambda_L\\}$ are often defined as $w_\\ell \\coloneqq \\frac{6}{\\pi^2 \\ell^2}$ for $\\ell=1,\\dots,L$ so that the series converge. In this case, we would get $\\log(1/\\alpha) \\leq C \\log(\\ell) \\leq C \\log(L)$. It is not directly clear what the rate would be for sample split since this would depend on how the kernel/bandwidth is selected. \n\n**Q3:** Thank you for the suggestion - we will demonstrate this principle for the Gaussian case, as suggested (in this case, the Stein operator and relevant operations are computable in closed form). The alternatives we will demonstrate are \"same variance different mean\" and \"same mean different variance,\" to illustrate local departures from the null. We will consider in particular the behaviour of the statistic as a function of dimension. See also the reply to reviewer CDhe question 1.\n\n\n**Q4:** The probability of type I error is always controlled by $\\alpha$. Intuitively, if $B_1$, $B_2$ and $B_3$ are 'too small' then the probability of type I error will strictly smaller than $\\alpha$, and if $B_1$, $B_2$ and $B_3$ are 'large enough' then it will be close to $\\alpha$ and still smaller of equal to it.\n\n**Q5:** The proof of asymptotic level of the aggregated test with wild bootstrap relies on the asymptotic level of the single tests. This is proved by Chwialkowski et al., 2016, Proposition 3.2 in a mathematically precise way for the wild bootstrap: they prove that the difference between true quantiles and the wild bootstrap quantiles converges to zero in probability under the null hypothesis with the following dependence on $N$:\n$$\n\\text{sup}_x \\mid P(N B_N > x \\mid Z_1, \\dots, Z_N) - P(N K_N > x \\mid Z_1, \\dots, Z_N) \\mid\n$$\nconverges to 0 in probability, where $K_N$ is the KSD estimator of Equation (1) in our paper, and $B_N$ is the wild bootstrap KSD of Equation (4).\n\n**Q6:** In settings where the median bandwidth is the 'best' bandwidth, KSD median would be more competitive than KSDAgg since by considering a large collection of bandwidths we are not only considering the 'best' median bandwidth but also 'worse' bandwidths. However, in practice, we cannot know in advance which bandwidth would perform well, and KSDAgg retains power even for large collections of bandwidths (21 bandwidths considered in MNIST Normalizing Flow experiment). We could imagine a setting where the best bandwidth lies in between two bandwidths of our collection and those two are 'bad' bandwidths, in which case a test which uses data splitting to select an 'optimal' bandwidth would be able to select it, however one must bear in in mind the loss of power due to data splitting. 
In our experiements, the aggregation approach outperformed competing approaches.\n\n**Q7:** As presented in the inputs of Algorithm 1, it is sufficient for KSDAgg to have access to the score function (gradient of log density), this allows to work with unnormalized densities, as it is the case with Gaussian-Bernoulli Restricted Boltzmann Machine (Section 4.4) for example. We will emphasize this fact in the introduction. There are no extra assumptions for the wild bootstrap, while for the parametric bootstrap we need to have access to a sampler of the density as well, to the best of our knowledge it is not enough for this sampler to be approximate. \n\n**Q8:** When the squared $L^2$-distance of the difference in densities is smaller than the lower bound provided for all $\\beta\\in(0,1)$, this means that $p$ and $q$ are very close to each other and cannot be distinguished from each other at this fixed sample size. In this setting, we cannot provide any power guarantees (even deteriorated).\n\n**proof Theorem 3.3**: The proof Theorem 3.3 relies on (i) upper bounding a probability of intersections of events by the minimum of the probabilities of each event, (ii) checking that the assumptions of Theorem 3.1 are satisfied for the adjusted levels, (iii) applying Theorem 3.1. This proof sketch will be included in the final version of the paper.\n\n",
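As an illustration of the wild bootstrap referenced in Q5, here is a short Python sketch for estimating a test threshold from a precomputed Stein kernel matrix. The variable names are ours and the snippet is a simplified sketch, not the authors' released code.

```python
import numpy as np

def wild_bootstrap_threshold(H, alpha=0.05, B=500, seed=0):
    """Wild bootstrap threshold for a degenerate U-statistic.

    H: (N, N) symmetric kernel matrix, e.g. H[i, j] = h_p(Z_i, Z_j)
       for the Stein kernel evaluated on the sample.
    """
    rng = np.random.default_rng(seed)
    N = H.shape[0]
    H_offdiag = H - np.diag(np.diag(H))  # keep only the i != j terms
    stats = np.empty(B)
    for b in range(B):
        eps = rng.choice([-1.0, 1.0], size=N)  # Rademacher signs
        # (1 / (N(N-1))) * sum_{i != j} eps_i eps_j H[i, j]
        stats[b] = eps @ H_offdiag @ eps / (N * (N - 1))
    return np.quantile(stats, 1.0 - alpha)
```

Since the signs only multiply precomputed kernel values, no extra kernel evaluations are needed across the B rounds, which is the computational advantage over permutations discussed later in this thread.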
" We warmly thank all reviewers for their careful reading of our paper and their invaluable insights.\nWe hope this rebuttal addresses the concerns expressed by reviewers, and if so, that they would kindly consider upgrading their evaluation.\nWe provide some general comments here about novelty of our paper and differences with prior work, and we individually answer the questions of the reviewers below.\n\nWe consider the novelty aspect of our work to be the proposal of a solution to the kernel selection problem for the KSD goodness-of-fit setting. KSD tests are widely used and cited; despite their appearance back in 2016, kernel selection had still never been done for these tests. Our contribution is in showing, both theoretically and experimentally, that the aggregation procedure works in this novel setting. Contrary to previous works on two-sample and independence testing, we have presented our theoretical results in the more general framework of kernel selection rather than bandwidth selection.\n\nApplying the aggregation procedure to the goodness-of-fit setting is not trivial, we highlight some of the main differences with the two other testing frameworks. In our case, using a wild bootstrap does not result in a test with well-calibrated non-asymptotic level: the reasoning used for the MMD and HSIC breaks down because of the asymmetry of the KSD with respect to the two densities. In order to guarantee correct non-asymptotic level for our aggregated test, we propose to use a parametric bootstrap, a procedure unique to the goodness-of-fit framework. The lack of translation invariance of the Stein kernel introduces new challenging problems. For the expectation of the Stein kernel squared, i.e. \n$$\nE_q[h_{p,\\lambda}(X,Y)^2],\n$$\nit is not possible to extract the bandwidth parameter $\\lambda$ outside of the expectation as it is the case for the usual kernel $k_\\lambda$ when using either the MMD or HSIC. Moreover, working with the integral transform $$(h_{p,k}\\diamond\\psi)(y) = \\int_{\\mathbb{R}^d} h_{p,k}(x,y) \\psi(x) \\,\\mathrm{d} x$$ is more complicated since the operation $\\diamond$ does not correspond to a simple convolution, as would be the case when working directly with a translation-invariant kernel.\n\nWe also emphasise that we have experimentally validated our proposed approach on benchmark problems, not only on synthetic datasets classically used in the literature but also on original data obtained using state-of-the-art generative models (i.e. Normalizing Flows). We provide publicly available code to allow practitioners to employ our method (please see the link in the paper, line 69).",
" This work provides guarantees for goodness-of-fit tests (whether a given set of samples comes from a given null distribution) based on kernel stein discrepancies (KSD). They construct a test that allows for kernel selection to maximize power by using all data rather than using data split while maintaining the type I error level. In particular, the work extends the idea of aggregated tests already used for the two-sample problems via MMDs, and the independence problem via HSIC, to the goodness-of-fit setting via KSD. To estimate the test thresholds, they make use of a Monte Carlo strategy based on (a) parametric bootstrap (which requires samples from the null distribution p) and provides a non-asymptotic level, and power; and (b) wild bootstrap that provides the same guarantees asymptotically. + Their Theorem 3.1 characterizes the uniform separation rate between the null and the alternative densities so as to assert a certain power for a given test, and Theorem 3.3 extends it to multiple aggregated tests. The authors provide several numerical experiments that showcase that KSDAGG performs better than some other benchmarks.\n\n+ Writing of the paper is very good, and the authors cover a fair amount of related work! \n\n- This work applies only to those settings where model density is known. \n\nFor more comments, see questions. 1. Can the authors clarify what the constant C depends on in Theorem 3.1?\n\n2. Corollary 3.4 has a logarithmic inflation factor in front of the sample size N (in the second term). If one uses an equal sample split to do kernel selection first and then test, isn't that factor just 2? If yes, then is KSDAGG better than sample split only if log(#kernels) <=2? If not, it would help to clarify what the rate for sample split is known to be.\n\n3. Given the non-transparent nature of the Stein operator, can the authors provide some examples of q's that satisfy theorem 3.1 / 3.3 for a given simple distribution p, say Gaussian distribution? The authors make a comment in l 175--176, but it would be helpful to provide concrete examples (also relevant for comment 2).\n\n4. In Proposition 3.2, is there no requirement on B1, B2, and B3?\n\n5. Can the authors state a mathematically precise asymptotic result for wild bootstrap? (Like how is the limit for N to infinity taken?) \n\n6. Is there no setting, where the KSD median would be competitive? It would be helpful to comment on the limitations of KSDAGG, namely when does KSDAGG either perform poorly or not better than the other sensible benchmarks. \n\n7. Do we really need to know the model density? Or just the gradient of log density, and a sampler of p for parametric bootstrap? (OPTIONAL: What if the sampler is approximate?)\n\n8. Given the unknown nature of constants in the uniform separation rate, can the authors comment on how the power degrades when the separation deteriorates from the mentioned lower bound?\n\nMinor comments: \n\n- To my understanding from the proof, Theorem 3.3 is effectively a corollary of Theorem 3.1 and a union bound; it would be helpful to provide that comment if true?\n See questions.",
" In this work, the authors propose a natural extension to the existing KSD goodness of fit tests, but allowing aggregate testing over multiple kernels (in particular, multiple kernel hyper-parameters) without requiring strategies such as data-splitting etc. They achieve this by rescaling the critical value of each individual test by a positive weight, summing up to <= 1. When the weights are equal to 1/N (where N is the number of tests), this is essentially a Bonferroni-type correction for aggregating finitely many independent tests. \n\nThe authors provide an algorithm for performing this multiple testing protocol, based on wild-bootstrap or parametric bootstrap. They then establish control over type II error through a uniform separation rate (USR) argument. \n\nMany of the arguments are similar to what is proposed in the preprint [Schrab, A., Kim, I., Albert, M., Laurent, B., Guedj, B., and Gretton, A. (2021). MMD aggregated two-sample test], which considers the similar case of MMD testing. But in that situation things are more straightforward since terms within the USR argument can be bounded explicitly in terms of best approximation rates for densities with a given regularity, yielding minimax optimal rates, etc. In this setting it is far less straightforward to obtain similar bounds. Strengths:\n* (Quality + Clarity) It is very well written. The arguments are presented cleanly which made following the proofs quite easy. \n* (Significance) It tackles an important challenge which provides a principled alternative to the median heuristic or data-splitting approaches to addressing this problem.\n\nWeaknesses:\n * (Originality) It is a bit incremental, and builds very obviously on previous work which tackled aggregate tests for MMD in the same light.\n * (Significance) While the uniform separation rates established in this paper are of theoretical interest, they aren't actionable in any manner due to the complexity of the Stein kernel. 1. Do the authors have any intuition on the behaviour of the terms in the uniform separate rates established for the aggregate test in terms of dimension? Even just heuristics or a case study on a Gaussian Stein kernel would be of interest. \n2. Do the authors have any recommendations on the choice of the weights? This seems arbitrary and not really addressed anywhere in this work? Is there any scenario where taking the sum to be < 1 make sense? \n3. One often wishes to select a kernel over an infinite continuum family of parameters, and it's sometimes less obvious how to discretise this. Are the authors aware of any strategies which would enable them to generalise to this setting. I am satisfied with how the authors addressed the limitations of this work.",
" This paper proposed a new goodness-of-fit (GOF) testing method based on KSD, which sidesteps the challenge of selecting a single kernel for the test. Instead, the proposed method, KSDAGG, can aggregate tests with a collection of kernels so that it maximises the test power over these kernels. This is achieved by performing a GOF test with each kernel in the collection and rejecting the null hypothesis if any one of the tests rejects it. To ensure that the KSDAGG can still control the user-defined type-I error, the author proposed a bisection algorithm to select the proper test interval for each kernel in the collection. Theoretically, the author showed a condition for a uniform separation rate so that the KSDAGG can also control the type-II error. This condition shows that KSDAGG achieves the smallest uniform separation rate of the collection.\n\nEmpirically, the author applied KSDAGG to the kernel bandwidth selection problem and compared it with kernel selection baselines including data splitting, median heuristic and kernel selection with extra data. It shows that it achieves similar test power as the kernel selection with extra data on synthetic data and the best performance for MNIST normalising flow. **Strength**:\nThis paper takes an alternative point of view on the kernel selection problem, where it extends the recently proposed two-sample test aggregation method to goodness-of-fit with Stein discrepancy. \nThe presentation clarity is reasonable but can be much improved. In terms of significance, I think the targeted community is generally limited but it is a nice addition to it. \n\n**Weakness**:\nOne of my confusions is its contribution. The author mentioned that the aggregation trick is not novel, which has already been proposed in the two-sample test. Thus, I fail to see the main contributions of KSDAGG. I saw the contribution section but I suggest the author summarise what are the key points and why they are novel. For example, why the extension to the GOF test is non-trivial (expand a bit on the challenge mentioned in related work?), what method do I use to solve this non-trivial extension, etc. So that the contribution and novelty are clearer. \n\nIn terms of clarity, the background material to KSD is a bit dense. Since I have worked with the Stein method before, it is clear to me, but I can see that for a more general audience, it can be a bit difficult to understand. I suggest considering adding a more detailed background section in the appendix? 1. Like I have mentioned, the author should consider elaborating more on their novel contributions: why the extension to GOF is not trivial? Is it because of the derivation of the bound for uniform separation rate?\n\n2. Another concern is the computational cost. It seems that with the aggregation trick, it performs a single test for each kernel in the collection. This means with a large collection (or the number of bootstrap samples), the computational cost can be much larger than a single test (with kernel selection). Can the author provide the performances with wall clock time? I am curious to see the trade-off. \n\n3. For a fixed set of weight $w_k$ and level $\\alpha$, do we have a \"fixed\" choice of $u_\\alpha$? Clearly, from the bisection algorithm, we won't have exactly the same $u_\\alpha$, but they are closed to each other. So here \"fixed\" means a very distinct set of $u_\\alpha$. Another question is can we select the best kernel using the KSDAGG. 
For example, if one kernel in the collection rejects the null hypothesis, does it mean this is the best kernel?\n\n4. The introduction of KSD can be improved. For example, instead of directly introducing KSD and the complicated Stein kernel and Stein identity. Maybe consider adding a bit of intuition on Stein discrepancy. In line 87, consider adding one sentence to explain what is the consistency of the Stein test? Also for corollary 3.4, it seems that this is identical to theorem 3.3, which doesn't need to be re-introduced. Saving this space allows the author to expand a bit more on the background. \n\n5. For the synthetic experiment, why KSDAGG is better than KSD split extra data in RBM? It will be interesting to see the optimised bandwidth and the bandwidth of one that rejects the null in the collection. This paper does not have any negative societal impact. But this paper does not seem to discuss its limitations enough. For example, the trade-off between high computational cost and test power. It seems that this method cannot be applied to kernels that require training like the deep kernel. "
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"Z6FrpioBpzZ",
"nSU-N0P4-6",
"o_Voi5JcJnQ",
"nips_2022_9-SZkJLkCcB",
"Oskp49RIfXY",
"UEDkRUpzPP",
"7c56NiCrdzg",
"R1y1pUORQ3a",
"LXuucxdJHyO",
"fpUlJ3r_RYB",
"nips_2022_9-SZkJLkCcB",
"nips_2022_9-SZkJLkCcB",
"nips_2022_9-SZkJLkCcB",
"nips_2022_9-SZkJLkCcB"
] |
nips_2022_pkzwYftNcqY | Efficient Aggregated Kernel Tests using Incomplete $U$-statistics | We propose a series of computationally efficient, nonparametric tests for the two-sample, independence and goodness-of-fit problems, using the Maximum Mean Discrepancy (MMD), Hilbert Schmidt Independence Criterion (HSIC), and Kernel Stein Discrepancy (KSD), respectively. Our test statistics are incomplete $U$-statistics, with a computational cost that interpolates between linear time in the number of samples, and quadratic time, as associated with classical $U$-statistic tests. The three proposed tests aggregate over several kernel bandwidths to detect departures from the null on various scales: we call the resulting tests MMDAggInc, HSICAggInc and KSDAggInc. This procedure provides a solution to the fundamental kernel selection problem as we can aggregate a large number of kernels with several bandwidths without incurring a significant loss of test power. For the test thresholds, we derive a quantile bound for wild bootstrapped incomplete $U$-statistics, which is of independent interest. We derive non-asymptotic uniform separation rates for MMDAggInc and HSICAggInc, and quantify exactly the trade-off between computational efficiency and the attainable rates: this result is novel for tests based on incomplete $U$-statistics, to our knowledge. We further show that in the quadratic-time case, the wild bootstrap incurs no penalty to test power over more widespread permutation-based approaches, since both attain the same minimax optimal rates (which in turn match the rates that use oracle quantiles). We support our claims with numerical experiments on the trade-off between computational efficiency and test power. In all three testing frameworks, our proposed linear-time tests outperform the current linear-time state-of-the-art tests (or at least match their test power). | Accept | The paper discusses fast computation methods for kernel-based statistical tests: MMD, HSIC, and KSD. The paper uses incomplete U statistics in constructing the methods, shows decent theoretical results including the rate analysis, and confirms favorable numerical results. The paper has significant theoretical contributions to the topic, and also demonstrates the practical usefulness of the methods. After the revision, all the reviewers agree to accept this paper to NeurIPS.
| train | [
"MFP7RFfr3g",
"1Y0-j2aBWlR",
"uWrGLVtlpLo",
"J9LyXat879h",
"JQvx2LfdwvN",
"mAGzNXC22_W",
"lSRQFBin4jP",
"3VCv2cSAKS",
"5rrgvO5RLZvK",
"AkGV_fj41m",
"UI8GewXzJB",
"Xu8TZgu9auu",
"jxbEgKOM46u",
"xcw-bmiX8fN"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We warmly thank reviewer aMao for increasing their score! \n\nWe will make sure to clarify the following points in the final version:\n\n(i) The tests we propose have a computational cost which can be specified by the user (the size of the design between $1$ and $N^2$), there is a tradeoff between test power and computational cost. \n\n(ii) We provide our theoretical rates in terms of the sample size $N$ working up to a constant. The rate is minimax optimal in the case where the design size grows quadratically with $N$. We quantify exactly how the rate deteriorates from quadratic (minimax) to linear (no guarantee) growth of the design size with respect to the sample size.\n\n(iii) In practice, one (among many others) possible choice of design size is to use $cN$ for some positive constant $c$. With this choice, the resulting tests are linear-time which allows us to compare them against other linear-time tests in our experiments. However, the assumption for having a rate converging to 0 in (ii) is not satisfied in this setting.\n\nRegarding the case of imbalanced sample sizes for the two-sample problem, we explain how such an estimator could be defined and point out the challenges that arise from working with it. We will provide such a discussion in the appendix of the final version of the paper.\n\nRecall that the original quadratic-time MMD estimate is\n$$\n\\frac{1}{|\\textbf{i}_2^m| |\\textbf{i}_2^n|}\n\\sum^{(i,j)\\in \\textbf{i}_2^m}\n\\sum^{(r,s)\\in \\textbf{i}_2^n}\nh_k^{MMD}(X_i, X_j; Y_r, Y_s)\n$$\nThis is a two-sample complete $U$-statistic and its incomplete version is \n$$\n\\frac{1}{|\\mathcal{D}_m| |\\mathcal{D}_n|}\n\\sum^{(i,j)\\in \\mathcal{D}_m}\n\\sum^{(r,s)\\in \\mathcal{D}_n}\nh_k^{MMD}(X_i, X_j; Y_r, Y_s)\n=\n\\frac{1}{|\\mathcal{D}_m| |\\mathcal{D}_n|}\n\\sum^{(i,j)\\in \\mathcal{D}_m}\n\\sum^{(r,s)\\in \\mathcal{D}_n}\n\\Big(k(X_i,X_j) - k(X_i,Y_s) - k(X_j,Y_r) + k(Y_r,Y_s)\\Big).\n$$\nThis expression, for example, result in a linear-time test for the choice $|\\mathcal{D}_m| = c \\sqrt{m}$ and $|\\mathcal{D}_n| = c' \\sqrt{n}$ for positive constants $c$ and $c'$ since $|\\mathcal{D}_m| |\\mathcal{D}_n| = c c' \\sqrt{m} \\sqrt{n} \\leq c c' \\text{max}(m,n)$. Other choices of design sizes are possible to obtain linear-time tests.\n\nIt is worth pointing out, however, that a wild bootstrap cannot be used with such an estimator. In order to calibrate the test non-asymptotically, permutations should be used instead. We now describe several challenges associated with the permutation approach.\n\n**Theory:** We believe we can easily obtain a variance bound equivalent to Lemma 1 which holds for this estimate. However, we believe that deriving a quantile bound (equivalent of Lemma 2) for permuted incomplete two-sample $U$-statistics is highly non-trivial: the extension of the result of Kim et al., 2022 (Minimax optimality of permutation tests, Theorem 6.3) to the case of permuted incomplete two-sample $U$-statistics is ongoing work.\n\n**Practice**: Theoretically, the cost of computing $B$ permuted estimates is $\\mathcal{O}(B|\\mathcal{D}_m| |\\mathcal{D}_n|)$ which would be the same as if we could use a wild bootstrap. However, in practice the computational time will be much higher because for each permuted estimate we need to evaluate the kernel matrix at new permuted pairs, while for the wild bootstrap we do not need to compute any extra kernel values: this changes the computation times drastically. 
In order to avoid this, we would need to restrict ourselves to permutations for which we have already computed kernel values using the fact that $h_k^{MMD}(X_i, Y_s; Y_r,X_j) = - h_k^{MMD}(X_i, X_j; Y_r, Y_s)$. It remains as future work to study conditions under which the set of such permutations is larger than the set consisting of the identity only, and is also large enough to construct accurate quantiles.\n",
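For concreteness, here is a minimal NumPy sketch of the incomplete two-sample estimator described in the response above. The Gaussian kernel, the uniform random choice of the designs, and all variable names are illustrative assumptions of ours, not the authors' implementation.

```python
import numpy as np

def gaussian_kernel(a, b, bandwidth=1.0):
    # k(a, b) = exp(-||a - b||^2 / bandwidth^2), an illustrative choice.
    return np.exp(-np.sum((a - b) ** 2) / bandwidth ** 2)

def incomplete_mmd(X, Y, design_m, design_n, k=gaussian_kernel):
    # X: (m, d) sample from p, Y: (n, d) sample from q.
    # design_m / design_n: index pairs (i, j) with i != j, playing the
    # roles of D_m and D_n in the estimator above.
    total = 0.0
    for (i, j) in design_m:
        for (r, s) in design_n:
            # h_k^{MMD}(X_i, X_j; Y_r, Y_s)
            total += (k(X[i], X[j]) - k(X[i], Y[s])
                      - k(X[j], Y[r]) + k(Y[r], Y[s]))
    return total / (len(design_m) * len(design_n))

rng = np.random.default_rng(0)
m, n, d = 100, 400, 2
X = rng.normal(size=(m, d))
Y = rng.normal(loc=0.5, size=(n, d))
# Linear-time regime from the response: |D_m| ~ c sqrt(m), |D_n| ~ c' sqrt(n).
D_m = [tuple(rng.choice(m, size=2, replace=False)) for _ in range(int(np.sqrt(m)))]
D_n = [tuple(rng.choice(n, size=2, replace=False)) for _ in range(int(np.sqrt(n)))]
print(incomplete_mmd(X, Y, D_m, D_n))
```

Note that the cost is exactly $|\mathcal{D}_m| |\mathcal{D}_n|$ kernel evaluations, which is what makes the choice of design sizes the efficiency dial discussed above.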
" We thank reviewer eJnC for the further questions and for pointing out that answers provided will benefit the main paper. We will make sure to include them accordingly in the final version.\n\n**Q1**: We have included the additional experiments in an appendix for the rebuttal. We should have clarified that for the final version Figure 2 in Appendix H.1 will replace Figure 1 in the experiments section (Section 8). \n\n**Q2**: As in the experiments section of Huggins and Mackey (and as for FSSD), 10 features have been used for Cauchy RFF and L1 IMQ. We have originally used the implementation provided by Huggins and Mackey with the parameters they use in their experiments. We have noticed that they draw 5000 samples from the unnormalized density for covariance matrix estimation to simulate the null hypothesis (code: RFDH0SimCovDrawV(n_draw=5000)). This procedure causes the long runtimes observed, it is much more expensive than simulating the null using a wild bootstrap as KSDAggInc does. \n\nWe have tried different values for n_draw. Using n_draw=500 has almost no effect on test power (minor decrease) and reduces the runtimes from 16 seconds for n_draw=5000 to 2 seconds for n_draw=500. We tried smaller values than 500 for n_draw but this resulted in a significant decrease in test power, we have also verified that the test still has well-calibrated level. We have added a row to Figure 2 in Appendix H.1 running Cauchy RFF and L1 IMQ with n_draw=500. We have also added a new figure (Figure 3) to Appendix H.1 where we compare KSDAggInc $R=200$ with Cauchy RFF and L1 IMQ with n_draw=500,5000. \n\nOverall, KSDAggInc and Cauchy RFF (with n_draw = 500 or n_draw = 5000) obtain the same performance in terms of test power. While KSDAggInc runs faster in the experiments presented, even with the much lower n_draw, it seems that the KSDAggInc runtimes increase more steeply with the sample size than the Cauchy RFF / L1 IMQ runtimes. Note that the power of KSDAggInc could be improved by increasing $R$ (i.e. increasing $c$ in the design size $cN$) but it is of course upper bounded by the power of KSDAggCom. The code has been updated on the original anonymized repository (link line 74).\n\nIn Figure 2 (and Figure 1 and 3), the time plots (4th column) correspond to the experiments run by varying the sample size in the first column. As detailed in Appendix B lines 574/575: 'In Figure 1(i,l) [same for Figure 2(i,l)], we consider dimensions $d_x = 50$ and $d_h = 40$ with noise standard deviation $\\sigma = 0.02$ and we vary the sample size $N \\in \\{200, 400, 600, 800, 1000\\}$.\n\nRuntimes reported for KSDAggInc are based on our implementation (anonymized repository line 74), runtimes reported for FSSD are based on the implementation provided in their paper (kernel-gof repository by Wittawat Jitkrittum), runtimes reported for L1IMQ and Cauchy RFF are based on the implementation provided in their paper (random-feature-stein-discrepancies repository by Jonathan Huggins). We will stress that the plots show runtimes of the tests obtained in practice when using the implementations provided by the respective authors of the tests, but that the observed speed difference might be due to implementation (the tests are not theoretically shown to be faster/slower). \n\nWe hope this analysis addresses the questions raised, and if so, that reviewer eJnC will consider increasing their score.\n",
" We thank the reviewer once again for their feedback. \n\nWe will include the discussed points in the final version of the paper.",
" The author response addressed my questions, thank you. I will retain my scores, and ask that the authors clarify the above points, especially Q3, in the final version of the paper (except Q4, where it seems I had somehow missed the figure legend).",
" A few comments (that I believe need some clarification) regarding your new experiments:\n\n1. Given that Cauchy RFF is the best baseline, and often performs better than your method, it would make sense to add it to your main results, and not in the appendix. \n\n2. How many features were used for Cauchy RFF as there will be a trade-off between running time and power? And are the runtimes reported based on the same code for all methods? (That is, oftentimes slower runtimes for a test/method might be due to worse implementation, rather than a feature intrinsic to the method/test itself).",
" I thank the authors for their response and providing additional experiments which strengthen the paper. \n\nAlso, after reading the other reviews, I will upgrade my score from 4 to 6 for now.\n\nOne last comment regarding \"linear\" or not. I just want to prevent that the method is sold as \"linear\", and that's essentially how I read it from the experiments section. \n\nAlso maybe the authors can think of a better method regarding Q5. Indeed in theory it doesn't matter that one sample is simply truncated. But in practice it would be nice to use those samples (maybe without actually increasing the computational cost).",
" Thank you for your well-organized response, and for doing the additional experiments. \n\nI think the readers will benefit from the limitations / future directions as discussed in Q2, and Q3, and the mathematical clarifications for Q4 and Q5. \n\n",
" We thank reviewer aMao for summarising the strengths of the paper, for suggesting experiments, and for the questions raised.\n\n**Q1:** a) The [...] all.\n\n**A1:** We have now provided additional experiments. In Appendix X, we consider experiments using the real-world MNIST dataset (dimension 784) and observe the same trends as on the toy datasets (which satisfy the Sobolev smoothness assumption). We have also added an experiment which illustrates the benefits of the aggregation procedure. See the main reply common to all reviewers for details.\n\n**Q2:** b) Only [...] procedure.\n\nA2: Following your suggestion, we have increased the number of kernels in our experiments. In particular, we now use 21 kernels for MMDAggInc and KSDAggInc, and 25 kernels for HSICAggInc. The new simulation results indicate that the resulting tests still retain high power and still outperform other tests. See the main reply common to all reviewers for details.\n\n**Q3:** c) Overall [...] ?\n\nA3: Our simulation results demonstrate that using 20-25 kernels seems to present competitive performance under the considered settings. We therefore recommend 20-25 kernels to use in practice, while it is certainly possible to find a better option under different scenarios. We will make this point clear in the final version. See also the main reply common to all reviewers for details.\n\n\n**Q4:** The authors [...] tests.\n\n**A4:** In this paper, we propose efficient tests whose computational cost $L$ (as a function of the sample size $N$) can be chosen by the user. We study the theoretical properties of such tests. In particular, we obtain that if the test is quadratic (i.e. $L\\asymp N^2$) then it is minimax optimal. If the test is between linear and quadratic (i.e. $N\\lesssim L \\lesssim N^2$), we show that there is a price to pay in the minimax rate for computational efficiency. If the test is linear or faster (i.e. $L \\lesssim N$), our theoretical results do not guarantee that the rate converges to zero. We believe the gray box on page 7 is correct given that we think of the symbols $<$ and $\\leq$ as 'up to a constant', this is we care about $L$ only as a function of $N$. The notation being confusing, we propose to replace it by $\\lesssim$ as done in this discussion and emphasizing explicitly in the main text that this means 'up to a constant'.\nWe did not write $L$ as $N^{1+a}$ in order to be general, our statements always for cases such as $L\\asymp N \\log(N)$.\n\nWe agree that our theoretical results do not provide guarantees for linear-time tests, the results quantify how the upper bound decays between the minimax rate for quadratic time and a constant for linear time. For anything faster than linear, say $L=N\\log(\\log(N))$, the rate is guaranteed to converge to 0, in this case at a slow rate of $\\log(\\log(N))^{-2s/(4s+d)}$. In the experiments, we consider linear tests with $L=cN$ for fixed values of $c$ in order to compare the test performance against other linear-time tests.\n\n**Q5:** For [...] line 131).\n\n**A5:** The reason this truncation does not impact our theoretical results is because of the assumptions that the sample sizes are balanced so that $m\\asymp n$ (line 84). For the incomplete $U$-statistic we essentially choose which pairs of data points to consider in the kernel matrix, for computational efficiency we do not consider all the pairs. 
By truncating the data ($N = \\text{min}(n,m)$), we are essentially restricting the number of pairs to choose from but we are still choosing the same number. So, while it is not ideal, we do not think this restriction results in an important loss of power when the sample sizes are balanced. However, we recognise that our test is not well-suited to extreme cases where there are orders of magnitude difference between the sample sizes. Extension to this particular regime is a topic for future work, but would likely require stronger smoothness assumptions on the tested distributions.\n\n**Q6**: I [...] quadratically.\n\nA6: Thanks for your comment. In the final version, we will revise the background section and stress that these can be computed in quadratic time, referring readers to Appendix A for more details.\n\n**Q7**: Prior [...] kernels.\n\nA7: As far as we are aware, the current approach to continuously optimizing a kernel requires data splitting, which negatively affects the power performance. Our aggregated test does not require data splitting and shows a competitive power performance under the considered settings. Nevertheless, we agree that an extension to the case of a continuous collection of kernels (indexed by the bandwidth parameter on the positive real line) is an interesting direction for future work.\n\n**Q8:** l. 152-156\n\n**A8:** We cover both deterministic and random designs for the sake of generality. Indeed, our theory holds for both designs while we focus on the deterministic design in our experiments. We think this general theory will be beneficial for a follow-up study and other related work.",
" We thank reviewer eJnC for his/her comments and questions.\n\n**Q1:** We thank the reviewer for catching the typo that the tests of Huggins and Mackey, 2018 are linear and not quadratic time. We have included an experiment in Appendix H.1 which compares KSDAggInc against the L1 IMQ and Cauchy RFF test of Huggins and Mackey. \n\n**Q2:** As the reviewer correctly pointed out, the polynomial factor in $\\beta$ parameter comes from Markov/Chebyshev’s inequality. We believe that this polynomial dependence can be improved using more advanced concentration inequalities. Unfortunately, we do not have the right tool for proving this result at this moment and thus leave this interesting direction for future work. Nevertheless, the current sufficient conditions are sharp enough to prove optimality results in certain regimes.\n\n**Q3:** The minimax rate is the best rate that any test (of any time complexity) can achieve. As shown in Theorem 1, we can prove that the quadratic time MMD and HSIC tests achieve the minimax rate, and thus they are minimax rate optimal. To the best of our knowledge, it is unknown whether computationally efficient tests (faster than quadratic time) can achieve this rate, and `minimax rates for a given computational budget', say $L$ as a function of $N$, have not been explored in the literature. Theorem 1(ii) demonstrates a trade-off between the computational budget and the separation rate focusing on incomplete U-statistics but our result doesn’t tell us whether this trade-off is (universally) tight. We think this is one of the limitations of our work and hope that a follow-up study can make progress on this topic.\n\n**Q4:** In Proposition 1, for the two-sample and independence problems, there are no assumptions required on the densities. The proof first shows exchangeability of the wild bootstrap and original samples, and then relies on Lemma 1 of Romano and Wolf, 2005. For the goodness-of-fit setting, we need $\\mathbb{E}_q[h_p^{KSD}(Z,Z)] < \\infty$ and $\\mathbb{E}_q\\left[\\left\\Vert \\nabla \\log \\frac{p(Z)}{q(Z)}\\right\\Vert_2^2\\right] < \\infty$ in order to satisfy the conditions of Theorem 2.2 of Chwialkowski et al., 2016. Those conditions are presented in the introduction, and we will repeat them in the statement of Proposition 1 in the final version.\n\nFor Theorem 1(ii), we assume only that the difference $p-q$ lies in the Sobolev ball. Intuitively, we can view $q$ as a perturbed version of $p$ and we require that the perturbation is smooth (i.e. lies the a Sobolev ball).\n\n**Q5:** All results presented in the paper hold for both fixed design and design with elements sampled uniformly without replacement (the upper bound in Lemma 1 also holds for design with elements sampled uniformly with replacement). The results we prove hold for all choices of such design, however, this does not mean that the choice of design does not matter in practice. While our upper bound on the variance of the incomplete $U$-statistic holds for all choices, the variance depends on the choice of design. Certain choices of design lead to minimum variance of the incomplete $U$-statistic (see Lee, 1990, $U$ -statistics: Theory and Practice, Section 4.3.2). We are unsure how the design could be chosen adaptively, but we stress the design (or design strategy with randomness) is chosen independently of the data.\n\nWe thank the reviewer for suggesting combining all assumptions in one environment to improve clarity - we will do so in the final version.",
" **Q1:** The term $\\frac{1}{N^2}$ can indeed be upper bounded by $\\frac{1}{|\\mathcal{D}_r|}$ as done in the proof of Theorem 1, which corresponds to absorbing $\\frac{1}{N^2}$ in the constant. Thank you for pointing this out! This will simplify the statement of Lemma 1. \n\n**Q2:** For the independence problem, we have pairs of samples $(X_i,Y_i)_{i=1}^N$. The classical HSIC permuted $U$-statistic\nIn Equation (26) of Kim et al. (2022, Section 6), the HSIC permuted $U$-statistic is defined. Fixing the permutation to the identity, this gives the HSIC $U$-statistic\n$$\nU_1\n=\n\\frac{1}{\\mid\\textbf{i}_2^N\\mid}\n\\frac{1}{\\mid\\textbf{i}_2^N\\mid}\n\\sum^{(i,j)\\in \\textbf{i}_2^N}\n\\sum^{(r,s)\\in \\textbf{i}_2^N}\nh^{HSIC}(Z^i,Z^j;Z^r,Z^s)\n$$\nNow, one way to construct an incomplete HSIC $U$-statistic would be to replace those two complete sums with two incomplete sums, but we do not want to do this in order to keep a unified framework across the three testing frameworks.\nSo, instead we pair the variables with index $a$ and $a+\\lfloor N/2\\rfloor$ for $a=1,\\dots,\\lfloor N/2\\rfloor$ to obtain an estimator\n$$\nU_2\n=\n\\frac{1}{\\mid\\textbf{i}_2^{\\lfloor N/2\\rfloor}\\mid}\n\\sum^{(a,b)\\in \\textbf{i}_2^N}\nh^{HSIC}(Z^a,Z^b;Z^{a+\\lfloor N/2\\rfloor},Z^{b+\\lfloor N/2\\rfloor})\n=\n\\frac{1}{\\mid\\textbf{i}_2^{\\lfloor N/2\\rfloor}\\mid}\n\\sum^{(a,b)\\in \\textbf{i}_2^N}\nh^{HSIC}(Z^{a+\\lfloor N/2\\rfloor},Z^{b+\\lfloor N/2\\rfloor};Z^a,Z^b)\n$$\nThis corresponds to the discussion following Equation (26) of Kim et al., 2022 with the $\\lfloor N/2\\rfloor$-tuple $L\\coloneqq \\{1,\\dots,\\lfloor N/2\\rfloor\\}$ when the permutation is the identity, where $L$ is the notation used in Kim et al., 2022. In Equation (27) they then show that the expectation of $U_2$ with respect to the uniform choice of $L$ is $U_1$.\nThis motivated our choice of HSIC estimate in Equation (8) of our paper.\n\n**Q3:** Yes, the improved (logarithmic rather than polynomial) dependence on $1/\\alpha$ has been shown for two-sample testing but not for independence testing based on the permutation procedure “without sample splitting”. As we briefly mentioned, the logarithmic dependence on $1/\\alpha$ is possible by converting the independence testing problem into the two-sample problem via sample-splitting. This was the main idea proposed in Section 8 of Kim et al., (2022). While this indirect approach leads to a logarithmic factor in \\alpha, the practical power would be suboptimal due to an inefficient use of the data from sample splitting. Our result is based on the standard U-statistic for independence testing calibrated by the usual permutation approach, which does not depend on sample splitting. In particular, our result shows that the usual permutation-based HSIC test has the same logarithmic dependence in $\\alpha$ with the less practical test in Kim et al., (2022). In the final version, we will revise Section 7 to make our contributions clearer.\n\n**Q4:** The first row of plots (Figure 1) includes a green curve which is the third one in the legend (between SCF and MMDAggInc R=1) and corresponds to OST PSI.\n\n**Q5:** Li and Yuan (On the optimality of gaussian kernel based nonparametric tests against smooth alternatives, 2019) also consider the three testing problems and show minimax optimality/adaptivity over Sobolev balls. 
Their tests run in quadratic time and control the probability of type I error only asymptotically, while our proposed tests have well-calibrated non-asymptotic levels over a broader class of null distributions and are computationally efficient. Their theoretical guarantees hold only for the Gaussian kernel and with the smoothness restriction that $s>d/4$ while ours hold for a wide range of kernels (see Equation (12)) and for any $s>0$ (see Theorem 2). Note that, they tackle the goodness-of-fit problem in a different way. They do not use the KSD and instead use a one-sample MMD with some expectations of the Gaussian kernel under the model. For a generic model density, one cannot compute such an expectation and hence cannot use their proposed test, while it is possible to use the KSD (which makes the kernel expectation under the model vanish).\n\n**Future work:** Potential directions for future work include studying the regime with $L \\lesssim N$, which corresponds to 'faster than linear' tests. For this sub-linear case, our results do not give a definite answer to the question as to whether the upper bound converges to zero. Future work would focus on either deriving tighter bounds which prove convergence to zero in this regime, or proving that the uniform separation rate diverges in this setting. Another interesting direction for future work is to see whether it is possible to achieve minimax rate optimality in sub-quadratic time complexity. Also, it would be interesting to see if the polynomial factor of $\\beta$ in our condition can be improved using a sharper concentration bound. Due to page limit, we will discuss future directions and limitations of our proposals in the appendix.\n",
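For readers who want to see the pairing trick from Q2 in code, here is a minimal NumPy sketch. The exact order-two core $h^{HSIC}$ is the one from Kim et al. (2022, Eq. (26)); the core used below is an illustrative stand-in whose expectation also equals HSIC under the pairing described above, and all kernel choices and names are our own assumptions.

```python
import numpy as np
from itertools import combinations

def rbf(a, b, bw=1.0):
    return np.exp(-np.sum((a - b) ** 2) / bw ** 2)

def h_hsic(za, zb, za2, zb2, k=rbf, l=rbf):
    # Order-two core with E[h] = HSIC when (za, zb) and (za2, zb2) are
    # independent pairs from P_{xy}; a stand-in for Kim et al. (2022, Eq. (26)).
    (xa, ya), (xb, yb) = za, zb
    (_, ya2), (_, yb2) = za2, zb2
    return k(xa, xb) * (l(ya, yb) + l(ya2, yb2) - l(ya, yb2) - l(ya2, yb))

def paired_hsic(X, Y):
    # Pair index a with a + floor(N/2), as in the response above, and
    # average the core over all pairs (a, b) in the first half.
    half = len(X) // 2
    vals = [h_hsic((X[a], Y[a]), (X[b], Y[b]),
                   (X[a + half], Y[a + half]), (X[b + half], Y[b + half]))
            for a, b in combinations(range(half), 2)]
    return np.mean(vals)

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 1))
Y = X + 0.1 * rng.normal(size=(60, 1))
print(paired_hsic(X, Y))  # noticeably positive under dependence
```

An incomplete version would simply replace the full set of pairs (a, b) with a subsampled design, exactly as in the MMD case.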
" We warmly thank all reviewers for their careful reading of our paper and their invaluable insights. We particularly thank reviewer **STxK** for emphasizing that our work provides 'unified but rigorous discussion' with 'much more compelling theoretical guarantees [...] than most related work [...] which usually only shows consistency'; reviewer **eJnC** for pointing out that 'the writing of the paper is pretty clear and worth appreciating'; and reviewer **aMao** for noting that 'the provided code is clean and it is easy to reproduce the experiments'.\n\nWe individually answer the questions of the reviewers below. We have provided additional experiments in Appendix H. We hope these address the concerns expressed by the reviewers, and if so, that they would kindly consider upgrading their evaluation. We provide here a brief discussion of the additional experiments (the code for reproducibility is provided on the original anonymised github repo).\n\n**Large collection of bandwidths** (Appendix H.1): We have increased the number of bandwidths our tests aggregate over. For MMDAggInc and KSDAggInc, we aggregate over 21 bandwidths which are $\\{2^i \\lambda_{med} : i = -10,\\dots,10\\}$. For KSDAggInc, we aggregate over 25 bandwidths $\\{(2^i \\lambda_{med}, 2^j \\lambda_{med}) : i,j = -2,\\dots,2\\}$. Firstly, our new simulation results show that the proposed tests retain high power even when a large collection of bandwidths is used. Secondly, we believe that this revised approach mitigates the concern around the collection of bandwidths: loosely speaking, for the Gaussian kernel, we are essentially aggregating over kernel matrices which interpolate between the identity matrix (very small bandwidth) and the matrix of ones (very large bandwidth).\n\n**Compare against Huggins and Mackey, 2018** (Appendix H.1): We compare KSDAggInc against the L1_IMQ test and Cauchy RFF of Huggins of Mackey (Random Feature Stein Discrepancies, 2018). L1IMQ performs similarly to the FSSD test in our RBM experiment, which is coherent with the results presented in Figure 4a of Huggins and Mackey, 2018. Cauchy RFF performs only very slightly better than our proposed test KSDAggInc $R=200$ but takes much longer to run (16 seconds against less than a second).\n\n**Benefits of aggregation** (Appendix H.2): We illustrate the benefits of the aggregation procedure by starting from a 'collection' consisting of only the median bandwidth and increasing the collection by adding more bandwidths. In all three settings, we observe that the power for the test with the median bandwidth only is low. As we increase the number of bandwidths, the power first increases as the test has access to 'better bandwidths'. For MMDAggInc and KSDAggInc: once the optimal bandwidth is included in the bandwidth, the power decreases slightly and reaches a plateau. HSICAggInc is more challenging, since there are kernels for both X and Y, hence the total number of bandwidth combinations grows rapidly (9 bandwidths for each of X and Y = 81 total combinations). For this case, we do experience a decay in test power once many bandwidths are considered, due to the large number of such combinations.\n\n**Experiments on MNIST dataset** (Appendix H.3): We demonstrate in experiments using the real-world MNIST dataset (dimension 784) that our proposed tests also obtain higher power than the tests we compare against.\n",
" This paper studies a family of nonparametric two-sample, independence, and goodness-of-fit tests based on incomplete kernel-based U-statistics, for which it proves validity (Proposition 1) as well as guarantees on power, assuming the true densities lie in a Sobolev space and are sufficiently well separated in $L_2$ distance. The power guarantees are initially proven for a statistic depending on the true smoothness of the Sobolev space (Theorem 1), but Theorem 2 extends this to an estimate that is adaptive to unknown smoothness. Theorem 3 shows that, compared to existing results for independence testing, a tighter bound, with better dependence on the type-1 error probability, can be obtained. Finally, some experiments demonstrate how the performance of the proposed estimators varies with hyperparameters and how this compares with some other linear-time nonparametric tests. Strengths: This paper provides much more compelling theoretical results (minimax optimality, especially for an adaptive estimator (Theorem 2)) than most related work on two-sample testing, which usually only shows consistency. It’s also nice that a unified but rigorous discussion is given for the closely related problems of two-sample, independence, and goodness-of-fit testing.\n\nWeaknesses: The paper has a lot of notation that is very similar or overloaded, but not explained or disambiguated near where it is used. For example, $L$ is defined just before Line 177 as the design size of the incomplete U-statistic, but is also used as a kernel (in Eq. (12)) and for $L^p$ spaces (in Eq. (17)). This made it a bit hard for me to follow the paper’s notation. I think it would help if the paper was a bit more explicit (even redundant) with explaining its notation near where it is used (e.g., reiterating “where $L$ is the design size” after Theorem 1). The paper also is not particularly clear about its distinctions from prior work (see questions below, although I was able to piece this together from various parts of the paper and by reading some of the references). 1. Lemma 1: The variance bound for random design includes a $\\frac{1}{|\\cal{D}_r|} + \\frac{1}{N^2}$ term. Since $|\\cal{D}_r| \\leq N^2$, isn’t the second term redundant (i.e., can’t it be absorbed into the constant $C$)?\n3. Line 144-146, “The motivation for defining the estimators… of order 2 (rather than of higher order) derives from the reasoning of Kim et al. (2022, Section 6)...”: I didn’t quite understand this sentence. I skimmed Section 6 of Kim et al. (2022), and, while they do indeed study U-statistics of order 2, the motivation for order 2 (rather than of higher order) wasn’t obvious to me. Could the authors clarify?\n2. I found the motivation for Theorem 3 (lines 241-256) a bit hard to understand. Am I understanding correctly that the improved (logarithmic rather than polynomial) dependence on $1/\\alpha$ has been previously shown for two-sample testing but not for independence testing. Later on (Lines 262-263), the paper says “As discussed by Kim et al. (2022, Section 8.3), their proposed sample-splitting method can also be used to obtain the correct dependency on $\\alpha$.” So what exactly is the new contribution of Theorem 3?\n4. Figure 1: The first row of plots includes a green curve that isn’t included in the legend. What is this? Also, the paper discusses results for some methods (e.g., OST PSI) for which I didn’t see any results in Figure 1. Where are these results reported?\n5. 
Could the authors elaborate on advantages of the proposed tests over previous tests that have been shown to be minimax optimal (e.g., the Gaussian-kernel-based tests of Li and Yuan (2019))? The paper would definitely benefit from further discussion of the limitations of its present results and suggestions for future work. However, given space limitations, I don't think further discussion of this is strictly necessary for acceptance.",
" POST REBUTTAL:\n\nScore increased to 7.\n\n------ -------\nIn this work, the authors propose faster than quadratic tests for the two-sample, independence, and goodness-of-fit problems, using the Maximum Mean Discrepancy (MMD), Hilbert Schmidt Independence Criterion (HSIC), and\nKernel Stein Discrepancy (KSD), respectively. They are based on incomplete U statistics that can interpolate between linear time, and quadratic time costs (the latter cost is incurred by typical tests which are complete U-statistics). \n The authors provide a tradeoff between the computational cost used and the power achieved in Theorem 1---while achieving the minimax rates over Sobolev balls when using the quadratic runtime variant. The authors then use this result to also achieve appropriate power results for kernel selection (up to logarithmic inflation in the number of kernels). Notably, this result is adaptive and does not require the knowledge of the smoothness parameter of the difference between the null and the alternative density.\n\nThe authors also provide several experiments which demonstrate the advantages of the proposed methods. \n\nThe writing of the paper is pretty clear and worth appreciating! \n1. The work in Huggins and Mackey (e.g., the L1 IMQ and Cauchy RFF random feature Stein discrepancies) were all linear and not quadratic time as the authors mention in l 53. Given the focus on linear time tests in this work, I believe that these tests should be treated as a useful baseline for goodness-of-fit comparison experiments. In particular, Huggins and Mackey's experiments showed that their tests typically outperformed the FSSD test (which is one of the key linear time baselines in the current work).\n\n2. In the separation rate, the dependence on alpha is logarithmic but that on beta is polynomial (1/x)--is the latter unavoidable? I can see from the proof that it is because of the nature of concentration inequalities used in the two contexts (namely Rademacher chaos concentration, and Markov's inequality)--but are the arguments known to be tight? Does there exist a setting where such dependence is necessarily needed?\n\n3. Is the tradeoff in Theorem 1 tight? I can see it's tight when L = N^2 but is it tight for smaller values of L? Some discussion on this would be very useful.\n\n\n4. Do you not need any requirements on p for Proposition 1? And does only the difference p-q need to lie in the Sobolev ball for Thm 1(ii)?\n\n\n5. Does the choice of design D not matter? Does it have to be an iid subsample? Can it be adaptive? (My guess is all the arguments go through relying on iidness of data points in D).\n\nMinor comment:\n- It would be easier to process the results if the assumptions on densities are stated in an assumption environment, and then referenced in the theorem results. See questions.",
" Post rebuttal: after the authors response and reading the other reviews, I updated my score to 6 (from initially 4) see also my comment below.\n\n\n-------- original review ------\n\nThe paper considers a general framework for kernel-based hypothesis testing, where the test statistic is given by a U-statistic. The framework covers Two-sample testing, Independence Testing, and Goodness-of-fit testing.\nThe paper shows that using an incomplete U-statistic estimate allows to trade-off computational resources for statistical significance, which follows from general theory of incomplete U-statistics.\nFurthermore, it adopts recent advances to aggregate such tests over multiple kernels and provides insights into the minimax separation rates over Sobolev balls.\nLastly, the paper provides simple experiments on toy data, illustrating their findings and comparing to some other approaches to tackle the respective testing problems. *Strengths*:\n- the work applies to three different testing scenarios and illustrates their close connections.\n- Although the use of incomplete U-statistics is not completely new for kernel-based tests (see e.g. Yamada et al ICLR 2019) the provided non-asymptotic tests are relevant and nicely illustrate the trade-off between computational resources and statistical significance.\n- The tests provably control type-I error also at finite data, while some of the existing methods, like OST, do not.\n- The theoretical insights are concisely stated and the prior results properly attributed. Arguably, though the provided results are rather simple consequences and combinations of prior results.\n- The provided Code is clean and it is easy to reproduce the experiments. The code was submitted after the deadline, which might be a violation of the rules!\n\n*Weaknesses*:\n- Overall, I think the practical relevance is quite limited:\na) The experiments are limited to very simple toy data sets. While they illustrate the effect of using incomplete U-statistics, the effect of the aggregation procedure is not illustrated at all.\nb) Only very few (4!) kernels are aggregated over. IMO this does not suffice to illustrate the benefits of the aggregation procedure.\nc) Overall there should be more guidance for practitioners. How many kernels should one choose in practice?\n- The authors consider ‘linear-time’ tests (eq. 10), which I think is misleading. By their theoretical results Theorem 1 ii) and considering $L=c N$, the uniform separation rate is not guaranteed to converge to zero. I thus also think that the gray box on page 7 is actually wrong. IMO it should be changed to the following: let $L=N^{1+a}$. Then for $a=1$ one recovers the minimax rate. For $a> 0$ the rate still converges to zero, but slower. For $a\\leq 0$ there is no guarantee. **Overall the provided theory does not provide guarantees for linear-time tests**.\n- For the two-sample problem, the provided tests cannot handle imbalanced samples in a meaningful way. The provided approach simply truncates data ($N=min(m,n)$ in line 131).\n- I think the presentation of the initial statistics (1) and (3) is suboptimal. For unexperienced readers it seems that these statistics scale like N^4. So I think it would be better to directly introduce the statistics such that they correspond to the complete U-statistics of (7) and (8). 
Alternatively, it should be explained why this statistics scale quadratically (no need to tell me in the rebuttal).\n- The claim that the “aggregation procedure is known to lead to state-of-the-art powerful tests” (l. 26f) seems a bit biased.\n- Prior work (Sutherland (ICLR 2017), Liu (ICML 2020)) showed that continuously optimizing a kernel is quite advantageous and harnesses the beenfits of gradient-based optimization. The present work only allows to combine finitely many (prespecified) kernels. \n- The aggregation scheme is a direct adaption from prior work (the authors are transparent about this). - Are the tests really 'linear-time'? (see comment in limitations)\n- l. 152-156: why do you discuss the random design when in the end you are using the deterministic one?\n- How should one choose the number of bandwidths ($l$) in practice?\n\nMinor Comments:\n- l. 147: define what a degenerate kernel is.\n- Type in Equation before line 500: should be $x_{i_1}$,... The work is theoretical and no direct negative societal impact is to be expected.\n\nThe theoretical results discuss minimax optimal rates, which leaves the impression that nothing can go wrong. But in practice there remain some parameters that users have to choose, for example how many bandwidths to include in the aggregation. For the experiments the authors only use 4 bandwidths, which in my opinion hardly suffices to illustrate the benefits of this aggregation. On the other hand, it is not clear what happens if too many bandwidths are included."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"mAGzNXC22_W",
"JQvx2LfdwvN",
"J9LyXat879h",
"AkGV_fj41m",
"lSRQFBin4jP",
"3VCv2cSAKS",
"5rrgvO5RLZvK",
"xcw-bmiX8fN",
"jxbEgKOM46u",
"Xu8TZgu9auu",
"nips_2022_pkzwYftNcqY",
"nips_2022_pkzwYftNcqY",
"nips_2022_pkzwYftNcqY",
"nips_2022_pkzwYftNcqY"
] |
nips_2022_YPoRoad6gzY | OST: Improving Generalization of DeepFake Detection via One-Shot Test-Time Training | State-of-the-art deepfake detectors perform well in identifying forgeries when they are evaluated on a test set similar to the training set, but struggle to maintain good performance when the test forgeries exhibit different characteristics from the training images, e.g., forgeries are created by unseen deepfake methods. Such a weak generalization capability hinders the applicability of deepfake detectors. In this paper, we introduce a new learning paradigm specially designed for the generalizable deepfake detection task. Our key idea is to construct a test-sample-specific auxiliary task to update the model before applying it to the sample. Specifically, we synthesize pseudo-training samples from each test image and create a test-time training objective to update the model. Moreover, we propose to leverage meta-learning to ensure that a fast single-step test-time gradient descent, dubbed one-shot test-time training (OST), can be sufficient for good deepfake detection performance. Extensive results across several benchmark datasets demonstrate that our approach performs favorably against existing arts in terms of generalization to unseen data and robustness to different post-processing steps. | Accept | The reviewers unanimously recommend acceptance, and the final decision follows. | train | [
"BGRUCGoFEHc",
"S1BJWeoX-HH",
"W4lyeGyn008",
"fdaQ6KSuWCy",
"Z_--U2tVhRwL",
"vdB7i1DvrQ",
"IWV_eMbXdqY",
"IhfKwQIfBO8",
"q9zCcqcWrd",
"Jc1wN4tYzek",
"V2GdwTKxlCI",
"g5GL1lzOp7",
"3EuZnIvBhj6",
"GRCL4wlwYsP"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Many thanks for the valuable suggestions. These works will be included and discussed in our future version.",
" Thank you authors for the great effort on the rebuttal. Authors have addressed my concerns to some extent. \n\n**In the revised version, please consider including a short description to compare against GAN-synthesized image detectors ([1, 2, 3]) to accurately convey the scope of your work.**\n\n**Although I still stand by my initial review regarding limited technical novelty ( See weakness (1) ), given that the proposed method could be useful in face-forgery detection applications, I will increase my recommendation accordingly.**\n\n[1] Wang, S. Y., Wang, O., Zhang, R., Owens, A., & Efros, A. A. (2020). CNN-generated images are surprisingly easy to spot... for now. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 8695-8704).\n\n[2] Dzanic, T., Shah, K., & Witherden, F. (2020). Fourier spectrum discrepancies in deep network generated images. Advances in neural information processing systems, 33, 3022-3032.\n\n[3] Chandrasegaran et al., 2021: \"A closer look at fourier spectrum discrepancies for cnn-generated images detection.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.\n",
" Dear reviewer DZTC,\n\nThanks again for your insightful suggestions and comments. As the deadline for discussion is approaching, we are glad to provide any additional clarifications that you may need.\n\nWe have carefully studied your comments and added additional experiments and analyses in our previous responses to address your concerns. We genuinely hope you could kindly check our responses.\n\nWe hope that the new experiments and additional explanations have convinced you of the merits of our work. Please do not hesitate to contact us if there are other clarifications or experiments we can offer.\n\nThank you for your time again!\n\nBest wishes,\n\nAuthors",
" Dear reviewer ajm1,\n\nThanks again for your insightful suggestions and comments. As the deadline for discussion is approaching, we are glad to provide any additional clarifications that you may need.\n\nWe have carefully studied your comments and added additional experiments in our previous responses to address your concerns. We genuinely hope you could kindly check our responses.\n\nWe hope that the new experiments and additional explanations have convinced you of the merits of our work. Please do not hesitate to contact us if there are other clarifications or experiments we can offer.\n\nThank you for your time again!\n\nBest wishes,\n\nAuthors\n",
" Thanks for the response. The modification does enhance this paper. It is interesting to see the reasonable finding in feature visualization.",
" We thank the reviewer for the comments, and we answer the raised questions below.\n\n* **1. Including more methods in Table 2**\n\nWe include a comparison in Table 2 by comparing OST with a recent meta learning-based method MT3 [5] which uses a contrastive loss between the original sample and its augmentation to update the parameters during inference. The results are listed below. We observe that our method performs favorably against MT3 [5] in both benchmarks in all evaluation metrics.\n\n||| DFDC|| | CelebDF| ||\n|--- |--- | --- | --- | --- |--- | --- | --- |\n|Method| Training dataset | AUC |ACC |ERR|AUC |ACC |ERR|\n|MLDG| FF++|0.682|0.607|0.370|0.609|0.595|0.418|\n|LTW | FF++|0.690|0.631|0.368|0.641|0.634|0.397|\n|MT3| FF++|0.775|0.667|0.307|0.701|0.664|0.319|\n|Ours| FF++|0.833|0.714|0.250|0.748|0.673|0.312|\n\n* **2. Visual example**\n\nWe include t-SNE visualizations to demonstrate the advantages of OST over the baseline model. Please refer to Section G and Figure 4 in the appendix of the revised manuscript for a detailed description.\n\n* **3. Using different template sample $x_r$**\n\nWe provide ablation studies using different template samples in Section 5.3 of our manuscript. We replace the random selection strategy with two variants. The first is using nearest neighbor sampling, which selects the template image that is closest to the test sample. Another is the average sampling strategy, where we sample five different $x_r$ and use them with the current test sample to synthesize five different pseudo training samples. We report the average accuracy from the detector finetuned with these five different pseudo training samples. Results in the first and second rows of Table 6 indicate that different template samples do not bring many differences for the model.",
" We thank the reviewer for the comments, and we answer the raised questions below.\n\n* **1. Time consumption of OST**\n\nThe average running time of OST is 0.065 seconds for an image with a resolution of $256 \\times 256$, and the average running time of the pseudo training sample generating process is 0.074 seconds as some of the blending steps are processed with a CPU. Note that the facial alignment, mask refinement, and facial region blending steps are not always required during inference. When using the learning-based generating process is selected, we can directly use the test and template samples as inputs and output a pseudo training sample. In this case, the pseudo training sample generating process is 0.021 seconds, which is less than the inference time. However, it is noteworthy to point out that the deepfake detection community now focuses more on accuracy over speed. Please also see our response to Reviewer QiuQ and Section B in the appendix of our revised manuscript for detailed resource comparisons.\n\n* **2. Numbers of sampled $x_r$**\n\nFor every test sample $x_e$, we randomly select one template image $x_r$ from the training set and generate a pseudo training sample $x_o^f$. In other words, one synthesized image corresponds to one template image, and the numbers of the template and pseudo training sample will always be equal.\n\n* **3. Contribution of the offline meta-learning step**\n\nThe offline meta-learning step is used to learn a good initialization for the test-time adaptation step. It mimics the training steps of OST. First, we randomly select a template image for the current training sample and generate two pseudo samples. Then, we use one pair of the template and pseudo samples as a support set for inner update and use another pair of training and pseudo samples as a query set for meta update. The overall pipeline is directly borrowed from MAML. We also test our method without using the MAML framework, and the model performs on par with it. We use MAML for our method to enable fast adaptation during inference. Please also refer to Section 5.3 in our manuscript for detailed descriptions.\n\n* **4. Compare with TENT**\n\nWe compare with TENT by training the model on the four data from the FF++ dataset, and test it on the DFDC, DFD, and DF1.0 datasets. Results are listed below. We note that TENT performs less effectively against our method in most cases. One possible reason is that the small entropy regularization in TENT [46] may not be as effective in the binary classification task as it is in the multi-class classification task.\n\n||| DF|||F2F|||FS|||NT|||\n|------|---|---|---|----|---|-----|----|---|-----|----|---|-----|----|\n|Method|DFDC|DFD|DF1.0|DFDC|DFD|DF1.0|DFDC|DFD|DF1.0|DFDC| DFD|DF1.0|Avg.\n|TENT|0.749|0.851|0.915|0.707|0.826|0.916|0.725|0.752|0.915|0.748|0.840|0.886|0.819|\n|Ours |0.757|0.869|0.938|0.798|0.880|0.947|0.802|0.824|0.909|0.752|0.841|0.929|0.854|\n\n* **5. Evaluations with multiple gradient descent steps**\n\nTo study if more updating steps can improve the detection, we conduct ablation studies by using different gradient descent steps during evaluation. The model is trained on the FF++ dataset, and it is evaluated on the DFDC, DFD, and DF1.0 datasets. As shown below, the performances of using more gradient descent steps do not visibly improve detection accuracy while the time consumptions (TC) are proportional to the number of gradient descent steps. 
In addition, using more gradient descent steps also requires much more memory costs in a MAML-based framework. For those reasons, we use only one gradient descent step in our method as a compromise between efficiency and accuracy.\n\n|||DF|||F2F|||FS|||NT||||\n|------|---|---|---|----|---|-----|----|---|-----|----|---|-----|----|----|\n|Steps|DFDC|DFD|DF1.0|DFDC|DFD|DF1.0|DFDC|DFD|DF1.0|DFDC|DFD|DF1.0|Avg.|TC(s)|\n|1 update|0.757|0.869|0.938|0.798|0.880|0.947|0.802|0.824|0.909|0.752|0.841|0.929|0.854|0.065|\n|2 updates|0.759|0.902|0.927|0.794|0.892|0.942|0.797|0.830|0.922|0.758|0.836|0.947|0.859|0.121|\n|3 updates|0.783|0.860|0.942|0.804|0.906|0.934|0.801|0.840|0.929|0.747|0.843|0.948|0.861|0.181| ",
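For readers who want the shape of the online step described above, here is a minimal PyTorch sketch of one-shot test-time adaptation. The optimizer, learning rate, label convention, and the `blend_fn` placeholder standing in for the pseudo-sample generation pipeline (landmarks, mask refinement, blending) are all our illustrative assumptions, not the authors' implementation.

```python
import copy
import torch
import torch.nn.functional as F

def ost_predict(detector, x_test, x_template, blend_fn, lr=1e-4, n_steps=1):
    # Synthesize a pseudo training sample from the test image, take
    # n_steps gradient step(s) on it (n_steps = 1 in OST), then classify
    # the test image with the adapted weights.
    model = copy.deepcopy(detector)        # keep the meta-learned init intact
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    x_fake = blend_fn(x_test, x_template)  # pseudo sample, label 1 (fake)
    x_real = x_template                    # template sample, label 0 (real)
    for _ in range(n_steps):
        logits = model(torch.stack([x_real, x_fake]))
        loss = F.cross_entropy(logits, torch.tensor([0, 1]))
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return model(x_test.unsqueeze(0)).softmax(-1)
```

Each additional inner step adds one forward and one backward pass, which matches the roughly linear growth of the TC column in the table above.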
" We thank the reviewer for the comments, and we answer the raised questions below.\n\n* **1. Main contribution**\nWe want to emphasize that our main contribution is to design a test-time training objective specially for the generalizable deepfake detection task. We then apply the existing meta-learning framework, i.e. MAML, to our main idea for better test-time training speed. In other words, the novelty our method does not lie in the meta-learning algorithm or the test-time training principle but in their applications and special development for deepfake detection.\n\n\n* **2. Generalizing with calibrated threshold**\n\nFollowing the same setting [a], we calibrate the model with a randomly selected pristine and deepfake pair from the test dataset. We first augment the image pair 128 times to obtain a small calibration set. Then the small calibration set is passed into our detection model to get the logits, which are then fitted by logistic regression. Similarly, we take the weight and bias learned from the logistic regression to adjust the output of the evaluated models. All the models are trained on the FF++ dataset, and the original and calibrated accuracies on three different benchmarks are shown below (original / calibrated). We observe that the calibration does not improve the accuracy in all cases. This phenomenon is consistent with the observations in [a].\n\n|Original & Calibrated| DFDC | DFD | DF1.0 |\n|---| --- | --- | --- |\n|Xception|0.671 & 0.704 |0.693 & 0.688 |0.543 & 0.622|\n|Face X-ray|0.659 & 0.674|0.653 & 0.649|0.625 & 0.684|\n|F3Net|0.657 & 0.691 | &0.604 & 0.659 | 0.707 & 0.693|\n|RFM|0.731 & 0.702 |0.783 & 0.796 |0.625 & 0.649 |\n|SRM|0.696 & 0.688| 0.695 & 0.714 |0.684 & 0.677|\n|Ours|0.714 & 0.703|0.831 & 0.858 | 0.903 & 0.881|\n\n* **3. Limitation in totally GAN-synthesized images**\n\nIn fact, a universal detector trained on GAN-synthesized images does not perform well on deepfake images. Table 5 from the appendix of [a] shows that the accuracy of the detector evaluated on the FF++ dataset is around 50%, much lower than that evaluated on other GAN-synthesized images. These experiments indicate that we may need a different set of clues than those used for detecting GAN-synthesized images to detect deepfake images. This also explains many recent works only study the detection of deepfake images [23, 30, 52, 18, 37, 4, 34, 53, 12, 19, 17]. Our OST follows this line of research. More specifically, same as existing deepfake detection arts, OST is developed based on the assumption that a deepfake contains contents from different sources, and contents in a real image are from only one source. That is also why the pseudo training sample is regarded as fake even if they are blended with two real images. However, for the totally GAN-synthesized situation, the fake images contain only one source, and images without GAN patterns, even if they are synthesized by two real images, are still considered real. This setting contradicts the assumption in current deepfake detection, and it certainly requires more effort to detect them perfectly in one unified framework.\n\n[a] CNN-generated images are surprisingly easy to spot... for now. In CVPR 2020.",
" We thank the reviewer for the comments, and we answer the raised questions below.\n\n* **1. Visual examples**\n\nWe include a visual example to demonstrate the generating process of the pseudo training sample. Please refer to Section A and Figure 4 in the appendix of the revised manuscript for a detailed description.\n\n* **2. DLIB face recognition fails**\n\nDLIB is a commonly used machine learning toolkit. As a subset, the face detection function in DLIB can achieve up to 99.38% accuracy on the Labeled Faces in the Wild benchmark. Indeed, there are circumstances that DLIB may fail, and in which cases, our OST method may also fail since the pseudo training sample generating process requires the landmarks of the faces. We will include it as a limitation in our future version.\n\n* **3. Experiments on images with different resolutions**\n\nFollowing the settings in previous works [23,30], we use images with the resolution of $256\\times 256$ in our method. To evaluate if the resolution can also influence the generalizability of the detector, we conduct ablation studies by using samples with different resolutions. Results are listed below. We observe that the differences between the three different resolutions are rather small (less than 1\\% on average), indicating that image resolution is not a major influential factor.\n\n||| DF|||F2F|||FS|||NT|||\n|------|---|---|---|----|---|-----|----|---|-----|----|---|-----|----|\n|Resolution|DFDC|DFD|DF1.0|DFDC|DFD|DF1.0|DFDC|DFD|DF1.0|DFDC| DFD|DF1.0|Avg.\n|200 $\\times$ 200|0.741|0.855|0.962|0.782|0.862|0.959|0.790|0.794|0.947|0.713|0.812|0.937|0.846|\n|320 $\\times$ 320|0.755|0.916|0.937|0.721|0.858|0.948|0.843|0.801|0.939|0.760|0.823|0.931|0.853|\n|256 $\\times$ 256|0.757|0.869|0.938|0.798|0.880|0.947|0.802|0.824|0.909|0.752|0.841|0.929|0.854|\n\n* **4. Evaluations with different metrics**\n\nTo evaluate how the compared methods perform on pristine and deepfake separately, we also report true positive (TP), true negative (TN), false negative (FN), false positive (FP), and true negative rate (TNR) for them. Results are listed below. We observe that the TNR of our method is much larger than other methods, indicating that our method is more likely to correctly detect a given deepfake image. \n\n|Dataset| DFDC||||DFD||||DF1.0|||||\n|------|---|---|---|----|---|---|---|----|---|---|---|----|---|\n|Metric|TP|TN|FN|FP|TP|TN|FN|FP|TP|TN|FN|FP|Avg. TNR|\n|Xception|1410 |700|123 |913|783|5375|138|2588|8857|2064|1193|7986|0.415|\n|Face X-ray |1288|784|245 |829 |814 |4984 |107 |2979 |9435 |3118 |615 |6932 |0.453|\n|F3Net|1387|679|146|934 |902 |4465 |19 |3498 |9338 |4880 |712 |5170 |0.511|\n|RFM|1109|1192|424|421 |763 |6191 |158 |1772 |9731 |2835 |319 |7215 |0.521|\n|SRM|1348|841|185|772 |874 |5302 |47 |2661 |10037 |3712 |13 |6338|0.502|\n|Ours|1032|1214|501|399|781 |6602 |140 |1361 |8336 |9816 |1714 |234 |0.898|\n\n* **5. Time consumption and computational complexities evaluations**\n\nTo comprehensively evaluate the proposed method, we provide the time consumption (TC) and computational complexity (CC) comparisons below. All the compared methods are evaluated on the same device using a $256 \\times 256$ image, and they are all implemented with the Xception backbone. Thus the computational complexities are nearly the same for most models except for SRM which uses a dual branch network architecture. Because our model includes two forward and one backward operations during inference, thus the corresponding computational complexity is more than others. 
Meanwhile, our method involves the generation of pseudo training samples that mostly use the CPU for the task except for when using the learning-based generating method (0.074 seconds on average for the pseudo training sample generation process). Thus, it requires more running time than others but also at an acceptable speed. Moreover, it is noteworthy to point out that the current focus of deepfake detection is still on the detection accuracy rather than speed. This might be because deepfake detection system might not be speed-sensitive in many scenarios. Certainly, methods can be further developed in the future, e.g., using distillation or approximation of online update, to further accelerate our method. \n\n||Xception|Face X-ray|F3Net|RFM|SRM|OST|\n|-----| --- | --- | --- |--- | --- | --- |\n|TC (s) | 0.015|0.017|0.019|0.015|0.037|0.062+0.074|\n|CC (MACs(G))| 6.01|6.01|6.05|6.01|13.81|18.03|",
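As a quick arithmetic check of the Avg. TNR column in the metrics table above: the reported average appears to pool TN/FP counts across the three benchmarks rather than averaging per-benchmark rates, since TNR = ΣTN / (ΣTN + ΣFP) reproduces the table values.

```python
# Worked check of the reported Avg. TNR for the "Ours" row above; the
# average appears to pool TN/FP counts across the three benchmarks:
# TNR = sum(TN) / (sum(TN) + sum(FP)).
tn = [1214, 6602, 9816]  # DFDC, DFD, DF1.0
fp = [399, 1361, 234]
print(round(sum(tn) / (sum(tn) + sum(fp)), 3))  # 0.898, matching the table
```

The same computation reproduces, e.g., Xception's 0.415 (8139 / 19626), so the pooled interpretation is consistent with the table.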
" We sincerely appreciate all reviewers' efforts in reviewing our paper and giving insightful comments and valuable suggestions. We are glad to find that the reviewers generally acknowledge the following novelty and contributions of our work.\n\n* **Main contribution.** We introduce a test-time training paradigm specially designed for the deepfake detection task. Specifically, for each test image, we can use it to synthesize a pseudo training sample with existing deepfake generating techniques. Because the label of the pseudo sample is known (i.e. fake), we thus can use it to update the detector during inference.\n\nAs suggested by the reviewers, we would like to include the following contents in our revised manuscript to further improve our paper. We summarize the major revision as follows. Our detailed responses can be found in the following response sections to the reviewers.\n\n* **Visual examples.** We add visual examples to better illustrate the pseudo training sample generation pipeline and visualization of the embedded representations.\n\n* **Resources usages.** We include time consumption and computational complexity comparisons for all the compared arts.\n\n* **Ablation studies.** We include more ablation studies regarding using multiple gradient descents in the online adapting step, evaluations on images with different resolutions, and results by using the calibrated threshold.\n\n* **More comparisons.** Comparisons with more methods including those based on meta learning and test-time training (i.e. MT3 [5] and TENT [46]), and comparison using other evaluation metrics such as true positive and false positive.\n\nPlease also refer to the appendix in our revised paper for more detailed descriptions.",
" This paper presents an approach for generalizable Deepfake detection using a recently proposed one-shot test-time training strategy and a combination of meta learning. The approach is simple and a straightforward extension of the recently proposed test-time training framework, where a test-data sample itself is used to create a pseudo-training set, and the model parameters are updated. Experiments are presented on various datasets under different experimental settings. \n Strengths\n\nThis paper directly extends the test-time training (TTT) method from Efros’ group, to the Deepfake detection problem, and shows that this can be used for better generalization. Since the TTT method is recent, and has been well received, the authors have cleverly “struck the iron, when it’s hot” and have applied this recently proposed “hot” method to Deepfake detection.\n\nThe paper is clearly well and the methods are well explained, for the most part. \n\nWeakness\n\nVisual examples are missing. Though the authors have done a good job in extending a recently proposed well received method to Deepfake detection, it doesn’t make sense why the authors have not included any good visual examples. Other than Figure 2 (which hardly illustrates the method), there are no visual images. Since the method takes a test image and blends with images from the training set using different blending methods, these can be easily visualized and thus provide better insights. \n\nA few experimental scenarios and details are missing. \n\n The paper uses a face detector method (DLIB) to extract faces. There is not much information on how accurate this method is. The paper appears to assume that the face detection accuracy is 100%. Since face detection is critical to the proposed method, are there scenarios when the face detection algorithm can fail? Some discussion on this will help.\n\nThe paper mentions that the input face images are resized to a dimension of 256x256. It will be good to have a discussion on why this dimension is chosen, and if choosing a higher or lower dimension will have an impact on the performance of the proposed method. \n Lack of good visual examples is a big limitation (having which could have ended up in a higher rating). \n\nThe paper uses ACC and AUC as evaluation metrics. It will be helpful to know how the proposed method works on both pristine and Deepfake images. Metrics like True positives, False positives, True Negatives and False Negatives could help here. Since only ACC and AUC are provided, this does not reflect how well this algorithm performs separately on pristine and Deepfake images. \n\nThere is not much discussion on the computational complexity and time complexity of the proposed approach. Since the paper uses test-time training, a good practical system should also discuss the time and computational complexity at the test stage.\n",
" The major contributions of this paper is:\n1) This work proposes a simple **One-Shot Test-Time training (OST)** framework to improve detection of Out-of-Distribution face forgery detection. OST obtains noticeable improvements in OOD detection.\n **Strengths:**\n1) This paper is written well. It is easy to follow.\n2) The improvements in generalization are interesting and useful to the face forgery detection community.\n\n\n**Weaknesses:**\n1) Although the method gives improvement and I appreciate the authors' comprehensive experiments, these improvements are not surprising. I agree that it is different from other existing methods, but I feel that the contributions are limited. I.e.: Only Line 1 in Algorithm 1 is proposed in this work, Lines 2-4 are from existing meta-learning works.\n\n2) For detecting CNN-generated images, there is a popular work that shows its possible to create a universal detector that generalizes to detecting CNN-generated images from unseen GANs with different architectures, datasets and loss functions (See [1]). This generalization is obtained by a simple threshold calibration with no gradient updates (See appendix Table 5 [1]) of this universal detector. Can the authors include some discussion / benchmark regarding this generalization [1]. \n\n3) Can the authors clarify as to why OST cannot be applied to GAN-synthesized images? I.e.: The universal detector [1] generalizes to face forgery detection (See Table 2 for Cross-generator generalization results) although only trained using CNN-generated images.\n\n\nOverall this is an interesting paper. In my opinion, the weaknesses of this paper outweigh the strengths. I’m willing to change my opinion based on the rebuttal. \n\n\n=====\n\n[1] Wang, S. Y., Wang, O., Zhang, R., Owens, A., & Efros, A. A. (2020). CNN-generated images are surprisingly easy to spot... for now. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 8695-8704). Please see Weaknesses section above for a list of questions. The authors have discussed the limitations / ethic statement in Section 6 (Main Paper).",
" This paper studies a Test-Time Training paradigm to improve the generalization ability of deepfake detection method to unseen forgery attacks. Specifically, for each coming test sample, the proposed method synthesizes a pseudo-training sample by blending the test samples with a randomly selected template image and finetunes the pretrained detector with it. To achieve a good initialization for test time training, this paper adopts the MAML-based meta-training to enable fast adaptation to different test samples. Experiments are carried out in several public deepfake datasets. Strengths:\n\n1. Studying Test-time Training to improve the generalization ability of deepfake detection is reasonable.\n\n2. Generating pseudo-training samples in the proposed Online Test-Time Training is novel and is devised specifically for deepfake detection.\n\n3. Evaluation is reasonably thorough and acceptable results are claimed.\n\n\nWeaknesses:\n\n1. Most of test time training/adaption works, such as Tent [1], emphasize its advantage of computationally efficiency and capability of fast adaptation. Comparatively, it seems very time-consuming in the proposed Online Test-Time Training as the proposed Generating pseudo-training samples involves facial alignment, mask refinement and facial region blending. It is hard to be carried out in an ‘online’ manner and more likely to be an offline pipeline. \n\n2. In line 136, the reviewer is not clear about how many x_r are sampled. It is likely to cause the problem of domain imbalance (domain here means template and synthesis domains) if many template images are sampled and only one synthesized image is generated.\n\n3. Offline Meta-training is trivial and directly borrows the ideas of MAML.\n\n4. Missing some SOTA test time training/adaption baselines, such as Tent [1].\n\n[1] Tent: Fully Test-Time Adaptation by Entropy Minimization. ICLR 2021.\n 1. In line 136, how many x_r are sampled? Could the authors provide more details about the number or proportion of template images and the synthesized images in the One-shot online training?\n\n2. What is the technical novelty of proposed Offline Meta-training?\n\n3. How long does Online Test-Time Training take?\n\n4. Why only perform a single-step gradient descent? How is the performance if the number of step gradient descent increases?\n Yes.",
" This paper proposes a new method for generalized deepfake detection in which the testing has domain gap from the training. The proposed method is based on meta-learning on training set and one-shot test-time training on the synthesized pseudo samples. Extensive experiments show that the proposed method outperforms the state of the art. Ablation study proves the effectiveness of the proposed designs. Strength:\n\n1.The paper is well-written. It has good literature review which clearly shows the difference between this paper and related works.\n\n2.The proposed method is interesting and well designed. The motivation is clear and reasonable. \n\n3.Extensive experiments with analysis are provided. The new method outperforms a number of state-of-the-art methods. \n\n4.The experiment details are complete.\n\n\nWeaknesses:\n\nI didn’t see obvious drawbacks. \n\nThe authors can provide comparison to more SOTA methods in table 2. \n\nThe paper can be further enhanced by illustrating the advantages brought by the proposed OST adaptation in visualized examples.\n Will the result differ a lot for using different x_r in OST? The authors can provide further analysis on it. The authors have discussed the limitations. No potential negative societal impact."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
5,
3
] | [
"S1BJWeoX-HH",
"fdaQ6KSuWCy",
"3EuZnIvBhj6",
"g5GL1lzOp7",
"vdB7i1DvrQ",
"GRCL4wlwYsP",
"3EuZnIvBhj6",
"g5GL1lzOp7",
"V2GdwTKxlCI",
"nips_2022_YPoRoad6gzY",
"nips_2022_YPoRoad6gzY",
"nips_2022_YPoRoad6gzY",
"nips_2022_YPoRoad6gzY",
"nips_2022_YPoRoad6gzY"
] |
nips_2022_adFLKRqRu1h | Fuzzy Learning Machine | Classification is one of the most important problems in machine learning, and its nature is concept cognition. So far, dozens of different classifiers have been designed. Although their working mechanisms vary widely, few of them fully consider concept cognition. In this paper, a new learning machine, the fuzzy learning machine (FLM), is proposed from the perspective of concept cognition. Inspired by cognitive science, its working mechanism has strong interpretability. At the same time, FLM is rooted in set theory and fuzzy set theory, so FLM has a solid mathematical foundation. The systematic experimental results on a large number of data sets show that FLM can achieve excellent performance, even with a simple implementation. | Accept | The paper proposes an approach for the design of neural networks for classification based on fuzzy theory, and a specific implementation is presented and experimentally assessed. Arguments from cognition are also used to justify the proposed approach, although at the level of inspiration. The lack of reference to fuzzy-systems-based neural network models in the relevant literature in the initial version of the paper has been solved in the revised version, and the authors' rebuttal seems to have clarified most of the issues raised by reviewers. The experimental assessment seems to be robust. Personally I find the jargon used in the paper a bit unfit for NeurIPS standards; however, I do not think this should be a valid reason for rejecting a paper for which no serious drawback has emerged. In any case, I think it is good for NeurIPS to diversify the range of approaches and methodologies covered by the scientific program. | train | [
"Uz9LQIpywCb",
"7n1Ppzt0JoK",
"IPhgQp9QIfP",
"fPVfcBd8eUh",
"1wTs6e6tqdY",
"d_L_zfF5iFs",
"f1er7W5Bj-B",
"qHCfxKL5JHt",
"VAAaNCS6n65"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your advice. You are right.\n\nFrom a biological point of view, the concepts of “cat” and “dog” can be defined according to their DAN features. At this time, the concepts are crisp.\n\nIn the field of ML, for example, in most image classification task, the goal is to learn the concepts from the images provided by data set. At this time, the learned concepts are fuzzy because the information contained in image is limited to define the concepts crisply.\n\nThanks for your advice again. And do you have any other questions about this paper?",
" Just because two concepts share certain features does not mean the concepts are fuzzy, does it? As soon as their is one distinctive feature identifying a data point to belong to a certain concept, the border is crisp. It does not matter whether the remaining feature values are even the same. Regarding your DNA example: There are markers in the DNA for cats to actually identify them as cats and vice versa.\n\nHowever, I agree that some visuals are shared and if membership is only defined via the presence of a few visual markers the concepts may overlap in this description.",
" Thanks for your reply. My concerns are clarified. I will keep my voting and support this paper.",
" General Response\n\nWe thank the reviewer for taking the time to review our manuscript and for the valuable comments. Below is a point-by-point response to the comments.\n\nResponse to Weaknesses\n\nThanks for your advice. We have made corresponding adjustments to increase the readability in the revised manuscript (see Section 2.5).\n\nResponse to Question (1)\n\nFLM is a general learning machine, which can deal with the classification problem given in Definition 1. NN-FLM is a specific implementation of FLM, which can handle the classification problem when the input space is Euclidean space. Euclidean space is the common input space in ML. However, there are still many data that is not represented by vectors in Euclidean space, such as category data. In this case, how to implement FLM is a worth research topic in the future.\n\nIn addition, we take NN-FLM as the representative of FLM family to conduct comparison experiments. And the experimental results also demonstrate its effectiveness.\n\nResponse to Question (2)\n\nFirstly, the goal of the optimization model is learning a similarity function. The paper argues that classification problem can be solved based on similarity, which has previously demonstrated (see Section 2.1). Therefore, the goal is reasonable.\n\nSecondly, the features of the samples may not accurately describe the intrinsic attributes of the concept and may contain information unrelated to the concept. Therefore, the excellent feature extraction ability of deep neural networks is used to learn new representations from the original features. In learned representations, the feature related to the concept are strengthened and the features unrelated to the concept are weakened. Therefore, a good FSR can be obtained, which builds a good basis for the representing the concept.\n\nThirdly, the proposed loss function can not only capture the intrinsic fuzziness of the concept, and can be optimized efficiently. More importantly, given the proposed loss function, it is proved that FER can be approximated effectively by FSR (see Theorem 2), which preserves the nature of classification problem as much as possible.\n\nAt last, exploiting the learned similarity function and the exemplar selection method, NN-FLM can select representative exemplars for each concept. And the concept can be represented effectively by selected exemplars and is used for classification.\n\nIn summary, all the above factors guarantee the performance of NN-FLM.\n\nResponse to Question (3)\n\nFirstly, exemplar theory is adopted to complete representation of concept. That is because exemplar theory is friendly to data-driven ML paradigm. Compared with other theory of concept representation, such classical theory, prototype theory and knowledge theory, exemplar theory hardly relies on the high-level semantic information of the features. Although the semantic information of features is unknown (This is common setting in ML), representation of concept also can be completed.\n\nSecondly, prototype in cognitive science is rely more on semantic information, for example the prototype of dogs can be represented as (four legs, hair, barking, etc.). One of the most serious problems is that no matter how you extend this representation, there are always some dogs that can not be captured by this representation. On the contrary, prototype in ML usually can be descried by a vector, and the semantic information of each dimension in vector is usually ignored. 
The former is of highly interpretability, but requires expert participation. The latter is the opposite. And the two are highly complementary. Therefore, the paper selects exemplar theory to complete representation of concept, which preserves the merits of the two. Specifically, concept is represented by the selected exemplars that is the samples in original space, which has good interpretability and can not cost the human efforts.",
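To make the exemplar-based prediction step in this response concrete, here is a minimal sketch. The `similarity` function and `exemplars_by_class` mapping are hypothetical placeholders, and the average-similarity rule is one natural choice for illustration, not necessarily the exact rule in the paper.

```python
# Minimal sketch of exemplar-based classification as described above
# (hypothetical placeholders: `similarity` is the learned similarity
# function, `exemplars_by_class` maps each class to its exemplar list).
import numpy as np

def classify(x, exemplars_by_class, similarity):
    scores = {c: np.mean([similarity(x, e) for e in E])
              for c, E in exemplars_by_class.items()}
    return max(scores, key=scores.get)  # class with most similar exemplars
```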
" General Response\n\nThank the reviewer for carefully reading and the valuable comments.\n\nThe innovation of this paper not only lies in the proper use of fuzzy set theory to design classifiers, but also includes:\n\n(1)This paper demonstrates that almost all classification problems can be solved by similarity (Section 2.1).\n\n(2)This paper demonstrates that the fuzziness is unavoidable when solving classification problems (Section 2.2). Unlike most existing fuzzy classifiers (FCs), their motivation for using fuzzy set theory is more intuitive.\n\n(3)An exemplar theory based concept representation method is designed (Section 2.3).\n\n(4) A general learning machine is designed, and a specific implementation is given (Section 2.4 and 4).\n\nWe apologize for the mistake that the manuscript did not contain a discussion about the existing FCs. We have added the discussions in modified version (Section 2.5 and Appendix A.2.1). The following is the responses to the comments.\n\nResponse to Weaknesses\n\n1 The focus of this paper is to demonstrate the relationship between similarity and classification problem and the fuzziness is unavoidable for solving classification problem.\n\n2 We have added corresponding discussions (Section 2.5 and Appendix A.2).\n\n3 They have been modified in the revised manuscript.\n\n4 In this example, we consider the visual instead of biological sense of cats and dogs. Given only visual features, the boundary between the cat and dog is fuzzy. If the cat and dog are described in DNA features, they can be distinguished. However, the boundary between cat and dog is still fuzzy because there are a certain similarity between them in DNA features.\n\nIn general, fuzziness of concept can be reduced by more accurate information and is almost impossible to be eliminated (Chapter 2 in literature 13).\n\n5 (1) The exemplar theory is selected for concept representation. The learned FSR and the exemplars for every class are used for representing concepts.\n\n(2)From Fig 3c, it can be seen that NN-FLM selects representative exemplars for every class, which shows that NN-FLM captures the visual concepts 0-9 well.\n\n(3)In Fig 3a, the nonzero value in non diagonal indicates that NN-FLM captures the fuzziness of concepts. The Fig 3a also shows that the FSR value between 0 and 1 is lower than it between 0 and 7.\n\n(4) Analog learning can capture concepts to a certain extent. The core component of analogy learning is the 4-ary relation (Section 9.2.22 of literature 32), while the core component of the proposal is the binary relation (i.e. 2-ary relation) and this paper demonstrates that almost all classification problems can be solved by binary relation (Section 2.1). In addition, it is difficult for analog learning to deal with features that only contains low-level semantic information. The proposal can automatically extract useful features from the raw features.\n\nResponse to Strength\n\nAccording to Friedman test, when the performance of some algorithms are the same, these algorithms will share their ranking values equally (literature [1]).\n\nResponse to Questions\n\n1 Fuzzy classifier (FC) is an important classification paradigm, which can deal with ambiguity effectively, has strong interpretability and can easily be fused with the knowledge of experts.\n\nIn literature [2], the FC is defined as a classifier that uses fuzzy sets or fuzzy logic in the course of its training or operation. 
According to this definition, NN-FLM is a kind of FC.\n\nCompared with most existing FCs, the proposal is more suitable for data-driven machine learning tasks. Specifically, (1) it hardly relies on the semantic information of features; (2) it can automatically extract useful information for concept from low-level semantic features and capture the representation of concepts. (3) it can effectively complete training and test and can learn from large-scale data. For detailed analysis, see the Appendix A2.1 in modified version.\n\n2 In Definition 1, the output space is a finite set. Mathematically, a finite set only needs to contain several different elements, and the meaning of each element is ignored. In this sense, classification problem can be solved with Definition 2.\n\n3 In training stage, the f^* and E_c, \\forall c \\in Y are invariant to the order of training samples. In test stage, the predicting results are invariant to the order of test samples. To sum up, the proposal is invariant to the order of inputs.\n\nResponse to Limitations\n\n1 The time complexity from FSR matrix to FER matrix is discussed in Appendix A.1.2.\n\n2 When the samples in the training set are not enough to cover all the intrinsic attributes of concepts, the learned concept representations will fail, resulting in the failure of the proposal.\n\nReference\n\n[1] Janez Demˇsar. Statistical comparisons of classifiers over multiple data sets. Journal of Machine Learning Research, 7 (2006):1–30.\n\n[2] Ludmila1. Kuncheva. Fuzzy Classifier Design. Springer-Verlag Berlin Heidelberg GmbH, 2000.",
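On the tie-handling point in the response to strengths above: sharing ranking values equally is the standard average-rank convention used with the Friedman test [1]; under that convention, 65 learners tied at the top would each receive rank (1+2+...+65)/65 = 33. A small, hypothetical illustration:

```python
# Hypothetical illustration of the average-rank tie handling used with
# the Friedman test [1]: tied methods share the average of the ranks
# they span.
from scipy.stats import rankdata

errors = [0.0, 0.0, 0.0, 0.1, 0.2]         # three methods tied for best
print(rankdata(errors, method="average"))  # [2. 2. 2. 4. 5.]
```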
" General Response\n\nWe thank reviewer for valuable comments. We carefully revised the manuscript according to these comments. Below is a point-by-point response to the comments.\n\nResponse to Weaknesses\n\n1 Concept consists of two parts, extent and intent, and they can induce each other. Concept cognition is to obtain the extent or intent of the concept. The process of classification is to match an object to a concept, that is, to get the extent of the concept. Therefore, the nature of classification is concept cognition in some sense.\n\n2 Concept is the basic cognitive units of knowledge representation. And humans use concepts to organize and understand the world. And, concept is essential for humans intelligence (See detailed discussion in Chapter 1 of literature 13).\n\n3 Similarity is the key of classification. (See detailed analysis in literature 13-15)\n\n4 Related studies can be found in literature 15.\n\n5 To ensure the fairness of comparison, literature 1 has made sufficient consideration and great efforts. The experimental results of all the comparison methods are from literature 1.\nIn addition, the splits of training and test data sets follows the literature 1. And implementation details, hyper-parameter settings and optimization process settings of the proposed method are given in detail (see Appendix A.4).\nTo sum up, as far as we known, the fairness of the comparison can be guaranteed.\n\nResponse to Questions\n\n1 Seen response to weaknesses 1.\n\n2 In the manuscript, we do not make a distinction between \"classification\" and \"categorization\". We use “classification” in the whole manuscript, which is common in machine learning literatures.\nGiven Definition 1, the possible ambiguity can be reduce greatly.\nThe existing science researches still can not comprehensively illustrate it. But there is no doubt that supervised learning should be important for humans learning.\n\n3 We have modified it into “DNN has exceed the human level in very specific settings” in revised manuscript.\n\n4, 5 Up to now, it is still difficult to understand and explain how do humans classify. Therefore, a human-like classifier is also difficult to understand. And it is impossible to directly simulate the classification process of humans. We have modified it into “To implement a classifier that is easy to understand and interpretable, one effective approach is to draw on relevant research in cognitive science.”\n\n6 These numbers are artificially assigned to illustrate the difference between fuzziness and randomness. We have modified the description to eliminate the misunderstanding.\n\n7 We assume that the evaluator has access to the class labels of the test data. This is a common setting in machine learning.\n\n8 It is a typo. We have modified it.\n\n9 We have deleted this sentence to eliminate the misunderstanding.\n\n10 The formula 1 is an abstract optimization problem for illustrating the core work mechanism of FLM. In NN-FLM, we add the regularization term to prevent overfitting. (See appendix A.4). To eliminate this misunderstanding, we have made corresponding modifications.\n\n11 A direct solution is to select more exemplars for classes with higher intra-class variability.\nIn the experiment, to simplify the experimental setting the number of exemplars of each class on all data sets is set as min(5, # training samples of the class). And the NN-FLM has achieved competitive results. 
If the number of exemplars is adjustable, the better performance should be obtained.\nIn the modified version, the number of exemplars is set as an adjustable parameter.\nIn addition, the reason why intra-class variation is high is probably because the features used for describing samples contain the information unrelated to the concept. For example, “car” and “trucks” belong to “vehicles”, but the differences between them are great. That is because the features used for describing “car” and “trucks ” contain the information unrelated to “vehicles”. If raw features are used directly to calculate the similarity, the similarity between “car” and “trucks” would be small. Therefore, the key to concept representation is the similarity, and the key to the similarity is the sample representation. In the learned representations, it is expected that the intrinsic information of the concept will be strengthened and the unrelated information will be weakened. This is also the design idea of NN-FLM.\nAt last, the above discussions also demonstrate that similarity is the basis of concept representation.\n\n12 You are right. We have modified it into “In this case, the class labels predicted by NN-FLM are in line with human cognition to a certain degree.”.\n\n13 The experimental details are given in appendix A.4.\n\nResponse to Typos and writing style\n\n1-5 and 7-9 It has been modified. And we checked the manuscript carefully to avoid similar problems.\n\n6 The inner brackets denote the elements of the Cartesian product, and the outer brackets denote the function.",
" This paper proposes a new machine learning method for classification called Fuzzy Learning Machine. The paper draws from concepts from cognitive science to derive a method based on fuzzy similarity relations of examples on the input space. The training method learns a similarity function and selects a set of exemplars from each category used during the prediction phase to compute the similarity of new examples to the exemplars in each category and then assign it to the category with more similar examples. Strengths:\n\nThe method proposed is interesting and brings up a number of novelty elements. The method seems to improve significantly in relation to existing classification methods on a large number of data sets.\n\nWeaknesses:\n\nThe paper makes a lot of assertions about human cognition that are questionable. For instance:\n- \"In essence, the process of classification is the process of concept cognition\"\n- \"Concept contains our knowledge about the world, and we use concept to understand and organize the world. Without it, there will be no human intelligence at all.\"\n- \"Similarity (...) plays a crucial role in the process of human classification\"\n- \"Concept is represented based on similarity for children, which is also a basic choice for adults\"\n\nAlso, sometimes it is difficult to understand if the paper makes assertions about its own definitions or about human cognition as in \"the intrinsic property of concept is just the fuzziness rather than the randomness\".\n\nI do not see a problem in using assumptions based on cognitive science for building models. In fact, most models in AI do that somehow. However, care should be taken to not state these assumptions in the paper as settled truths. I rather see the paper provide in advance a list of theories, hypotheses, and assumptions considered along with references for them, and then describe the model proposed using them as a basis.\n\nFinally, without details on how well the other methods used for comparison were adjusted, it is hard to know if the comparison is fair.\n\n Below I provide a list of questions and points that could be improved in my opinion:\n1. Line 16: The term \"Concept cognition\" lacks reference and support in the literature. If this is a term coined in the paper, then first, it should be defined what is meant by \"Concept\" and \"Cognition\" in the views of the authors, since these terms are rather vague.\n2. Line 18: Why classification and not categorization? The former may be rather too specific and we do not know enough about human intelligence to pinpoint it as the approach used by it. For instance, see Jacob, Elin K.. “Classification and Categorization: A Difference that Makes a Difference.” Libr. Trends 52 (2004): 515-540. More specifically: do humans need supervision to learn?\n3. Line 28: It should be made clear that DNN may exceed the human level in very specific settings.\n4. Lines 29-30 and 33: The way humans do classifications is also difficult to understand and explain. Therefore, I see a conflict in pursuing human-like classifiers and classifiers that are easy to understand and interpretable at the same time.\n5. Line 30: I do not see how this can be direct and efficient because we first need to understand and explain how humans do classification and, although there exist theories, this is not settled in the literature.\n6. Figure 1: Please, describe where the numbers used in the legend are derived from.\n7. Line 74: Test data also need labels. Otherwise, the method can not be evaluated.\n8. 
Definition 1: Observe that \"phi(x) belongs to Y\" does not define any mapping in particular. Is that the intended definition? If so, it does not resemble a classification problem, and any arbitrary mapping would be a solution.\n9. Line 107: I do not see why an awkward situation arises from the definitions above.\n10. Equation 1: no regularization terms are considered. Is overfitting a problem?\n11. I see potential problems in the way the model chooses examples for the Ec set. Consider a class with high variability, such as \"vehicles\". In such a class, it is hard to select k exemplars that cover well the variability of the class (cars, trucks, planes, boats) and yet are similar among all of them.\n12. Line 269: \"On the contrary, the class labels predicted by NN-FLM are more in line with human cognition.\" it is hard to conclude this from the examples shown without adequate research with different subjects and in comparison with other methods.\n13. Please, provide a description of the hyperparameter tuning method used for the methods. How well were they adjusted?\n\nTypos and writing style:\n1. Line 23: possess instead of process?\n2. Line 26: humans instead of human?\n3. Line 42: \"A concept is\" or \"Concepts are\" instead of \"Concept is\" ? \n4. Line 75: The question mark is not needed because the phrase is not a question.\n5. Definition 1: In the notation used (X,Y, phi), phi seems rather an input than output or result. Consider something like phi = f(X,Y). (optional)\n6. Line 89: why double parenthesis in ((xi, xj))?\n7. Line 101: Concepts instead of Concept?\n8. Line 141: \"exemplar theory is about\" instead of \"exemplar theory is a kind of theory about\".\n9. Line 269: humans instead of human.\n I do not see any limitations or potential negative social impact of this work.",
" In the paper \"Fuzzy Learning Machine\" the authors propose an approach to learn a classifier via a neural network forming a fuzzy equivalence relation. Deriving the approach from fuzzy set theory, the authors find their approach to perform particularly well across a number of datasets comparing the approach to various other classifiers.\n ## Weaknesses\nThe idea of employing fuzzy set theory for classification tasks is not new at all and I am wondering what is now the methodological novelty of the approach. In general the idea of comparing instances / data points according to their similarity is the basic idea behind learners using kernel functions where the shape of a concept is specified via the respective kernel. However, there is also a relatively large corpus of literature on classifiers leveraging fuzzy set theory, even working exactly with neural networks and the idea of fuzzy equivalence relations. Still this related work is neither discussed nor cited in the paper. See for example the following references:\nAcharya, U. Rajendra, et al. \"Classification of heart rate data using artificial neural network and fuzzy equivalence relation.\" Pattern recognition 36.1 (2003): 61-68.\nMoser, Bernhard. \"On Representing and Generating Kernels by Fuzzy Equivalence Relations.\" Journal of machine learning research 7.12 (2006).\nMeier, Andreas, and Nicolas Werro. \"A fuzzy classification model for online customers.\" Informatica 31.2 (2007).\nSenge, Robin, and Eyke Hüllermeier. \"Top-down induction of fuzzy pattern trees.\" IEEE Transactions on Fuzzy Systems 19.2 (2010): 241-252.\nKuncheva, Ludmila. Fuzzy classifier design. Vol. 49. Springer Science & Business Media, 2000.\nSun, C-T., and J-S. Jang. \"A neuro-fuzzy classifier and its applications.\" [Proceedings 1993] Second IEEE International Conference on Fuzzy Systems. IEEE, 1993.\nUebele, Volkmar, Shigeo Abe, and Ming-Shong Lan. \"A neural-network-based fuzzy classifier.\" IEEE Transactions on Systems, Man, and Cybernetics 25.2 (1995): 353-361.\nIt is unclear to me how this part of the literature is widely ignored by the authors when they seem to come from that area.\n\nOverall, the paper has a good structure but could benefit from proofreading. Especially, a vs an is a frequent problem in the text, e.g., \"a input space\", \"a output space\", \"a FER\". Then, \"classifier\", \"concept\" and \"classification process\" are used without an article. Some parts also seem overly complicated to me. For example, consider the proof that a non-linear model is needed to tackle the derived problem where the instances are concatenated. I do not know whether yet another proof for the fact that an XOR problem cannot be tackled via a linear model is really needed. This could have been simplified. Furthermore, I find that the example given in Figure 1 is not very well chosen. The concepts cat and dog have crisp biological borders and a human not being able to distinguishing the two categories is rather due to epistemic uncertainty than fuzziness of the concept borders. Personally, I would also argue that non of the three cats is more or less representative of the category or concept \"cat\".\n\nA claim that was made by the authors is that their approach indeed learns \"concepts\" instead of just assignments. However, there was no proof given in the paper that this is really the case. Especially, there is no presentation or demonstration of any particular concepts that were induced by fitting their model. 
I would even argue that from Figure 3 is rather becomes clear that it is learning not really any concepts as the FSR matrix shows more or less the same color for every cell not being on the main diagonal. If it was to learn real concepts I would also expect that a 0 would receive a lower membership score for the concept 1 than a 7 for example. A better overall performance is no proof for the claim that the method learns concepts.\n\nAnother branch of classification literature also tries to capture concepts for classification purposes: Analogy learning.\nBayoudh, Sabri, Laurent Miclet, and Arnaud Delhay. \"Learning by Analogy: A Classification Rule for Binary and Nominal Data.\" IJCAI. 2007.\n\n## Strengths\nSince most people in the machine learning community will not be that much familiar with fuzzy set theory, I liked it very much that all fundamental definitions were provided by the authors in the paper or supplementary material to make it self sufficient.\n\nAccording to the experiments the proposed method seems to perform very strong compared to a set of almost 200 classifiers. However, the way how the rankings were calculated is a little bit odd. Why are 65 learners sharing rank 1 with 100% accuracy receive a rank of 65? This will most likely also affect the average rank statistics compared for the ten classifiers later on. I would rather expect that performances with a tie receive the same higher rank, leaving free the next n-1 spots in the ranking.\n * In what regard does the proposed method really go beyond the already existing corpus of literature in fuzzy classifiers?\n* Is the classification problem really solved with definition 2? To me, it seems like the information to which class at least a representative data point belongs to is missing. Hence, the problem is only solved if the y's of some representative class members are known.\n* Is not it a problem that the model is not permutation invariant? At least it seems to me that it is not invariant to the order of inputs.\n Limitations, except for runtime complexity to compute the FER matrix, are not really discussed. When does the approach fail and why does it fail?",
" This paper proposes a new learning machine for the general classification problem, which is one of the most important problems in ML/AI. The new learning machine is based on the concept cognition theory in cognitive science and fuzzy set theory in mathematics science. So its working mechanism is highly explainable and has a solid theoretical guarantee. Meanwhile, a large number of systematic experimental results demonstrate the superiority of the proposed method. The manuscript focuses on the classification problem which is one of the most important problems in ML/AI.\n\nThe manuscript re-examines the classification from the perspective of concept cognition and reveals the essence of classification. And the manuscript provides a new view to interpret the structure of the classification problem by establishing the equivalence between binary classification problem and classification problem by employing equivalence relation in set theory. Furthermore, the manuscript realizes that fuzziness of concept is the main source of uncertainty in classification and then employs the fuzzy set theory to model this kind of uncertainty.\n\nBased on the above conclusions, the classification problem is modeled as a fuzzy equivalence relation problem, which well preserves the nature and intrinsic fuzziness of the classification problem. What’s more, the manuscript designs a clever model and loss function to approximate the fuzzy equivalence relation effectively and efficiently.\n\nTherefore, in this manuscript, the main proposals have the theoretical basis of cognitive science, and the key conclusions are proved mathematically. And extensive experiments (compared with 179 methods on 121 data sets) verify the rationality and superiority of the proposed method.\n\nOverall, the manuscript is clearly written and well organized with good clarity. To enhance the readability and completeness, it is suggested that some contents in the appendix should be moved to the corresponding part of the main manuscript. For example, the analysis of the working mechanism of the existing classifiers should be moved to the Introduction of the main manuscript. However, in the current manuscript, these contents are placed in Appendix A.2.\n (1)\tThe title of the manuscript is FLM, but the performance of NN-FLM is shown in experiments. Please explain the difference between FLM and NN-FLM and why only the performance of NN-FLM is analyzed.\n\n(2)\tIt appears that the optimization model of NN-FLM is a general multi-layer neural network plus a special loss function. Please explain what good properties of the loss function are able to guarantee the performance of NN-FLM.\n\n(3)\tAccording to the descriptions in Appendix A.1, there are at least four different concept representation theories. Why choose the exemplar theory to complete the concept representation in FLM? And in some machine learning methods, ‘prototype’ is also usually used for representing a class, such as ‘means’ in ‘k-means’, ‘modes’ in ‘k-modes’, and ‘prototype ’ in ‘prototype network’. Please explain the difference between ‘prototype’ in these methods and ‘prototype’ in ‘theory theory’ and why not use prototype theory to represent the concept?\n N/A"
] | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"7n1Ppzt0JoK",
"1wTs6e6tqdY",
"fPVfcBd8eUh",
"VAAaNCS6n65",
"qHCfxKL5JHt",
"f1er7W5Bj-B",
"nips_2022_adFLKRqRu1h",
"nips_2022_adFLKRqRu1h",
"nips_2022_adFLKRqRu1h"
] |
nips_2022_tYAS1Rpys5 | Simulation-guided Beam Search for Neural Combinatorial Optimization | Neural approaches for combinatorial optimization (CO) equip a learning mechanism to discover powerful heuristics for solving complex real-world problems. While neural approaches capable of producing high-quality solutions in a single shot are emerging, state-of-the-art approaches are often unable to take full advantage of the solving time available to them. In contrast, hand-crafted heuristics perform highly effective search and exploit the computation time given to them, but rely on rules that are difficult to adapt to the dataset being solved. With the goal of providing a powerful search procedure to neural CO approaches, we propose simulation-guided beam search (SGBS), which examines candidate solutions within a fixed-width tree search that both a neural net-learned policy and a simulation (rollout) identify as promising. We further hybridize SGBS with efficient active search (EAS), where SGBS enhances the quality of solutions backpropagated in EAS, and EAS improves the quality of the policy used in SGBS. We evaluate our methods on well-known CO benchmarks and show that SGBS significantly improves the quality of the solutions found under reasonable runtime assumptions. | Accept | The paper follows in the footsteps of AlphaGo and presents two methods for neural-network-guided search, targeting in particular beam search. The paper was deemed a bit incremental, but the method is simple, easier to parallelize than MCTS, and obtains good results on problems under-explored in machine learning. Please review the related literature on AI for games and neural-guided search techniques in discrete inference.
| train | [
"cBab4SKUbJ",
"OJ8pFRi2Lk",
"rKC2Xzp1gM",
"yBkwUQ-P2dv",
"Fh3Ucuoi2x",
"2iUNEbOKTx1",
"5soqdw4_PsJ",
"M3FcX_wcUD",
"Aiqc7oLxe7",
"Wenf_JGEvuo",
"o-L2Tgth0b4",
"LKx2UlXg2FI",
"HTrnFtLESRt",
"2RTDTDSyaAf",
"7XqyiWvaZ_J"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Sorry for confusing you with our ambiguous use of the term 'policy likelihood'.\nYour description and understanding of the method is accurate, indeed. \n\nWe value your opinion and we thank you again for your hard work and time for reviewing our work.",
" I don't understand the authors when they mention that SGBS \"does not rely on this policy likelihood at all.\" Algorithm 1 indicates that the most likely actions according to the trained policy $\\pi$ are selected as candidates before being evaluated using greedy rollouts. Hence my comment.\n\nI agree, however, with footnote 2, stating that classic Beam Search usually accumulates the likelihood of the samples from the root onwards which is not done in this work. It seems indeed natural to avoid infinitely penalising the generated trajectories based on the likelihood of the sequence of actions whereas a more informative signal is available thanks to the greedy rollouts.\n\nHopefully, my understanding of the method is aligned with what's described in the paper. If so, I maintain my original statement regarding the novelty of the method. I'm mindful that this is, to a degree, subjective. I consider the modifications to the Beam Search algorithm for CO problems to be below my expectations for acceptance, although the work was scientifically well conducted and presented.",
" \n\n**Transformer vs. GNN**\n\nTraining time (i.e., time to train a neural net to properly encode graphs) depends on many factors, the model architecture being just one of them. In our experience with Transformers, training times differ widely on different problems even if we use the exact same model (due to the different ranges of the target instances, the different aspects of the graph features to learn, etc.). As the Transformer and the GNN have been applied mostly on two non-overlapping subsets of the CO problems, direct comparison in their training times can be misleading. Even for cases like the TSP, where both models have been tried, the vast difference in their performances indicates that the two models are learning different features of the graphs. \n\nHowever, it is safe to say that training a GNN is much more time-efficient than training a Transformer in general, especially when the graphs that need to be encoded are sparse. The same can be said for the inference times as well, although in this case the inference time would mostly depend on the inference strategy itself rather than the model type.\n\nSpeaking of training times, an example of the training curves for our Transformer-type model is shown in Appendix I, Figure I(a). For the CVRP with 100 nodes, it takes a few weeks to (pre-)train our model, assuming that a single GPU is used.\n\n\n \n\n**Overhead of EAS at test time**\n\nEAS does not incur any meaningful overhead at test time, because it does not require any change to the (pre-trained) policy model for it to be applied. In other words, at the early stage of EAS, when the effect of EAS training is negligible, the policy model is simply in its pre-trained state. The solutions produced by the EAS method at this point have the same quality as those without EAS (i.e., those of the sampling method). The rate at which the solutions are produced goes down a little because EAS needs to perform extra computations to change the parameters of the model, but this makes only a marginal change in the overall speed. EAS affects only the outer layer of the model and the backpropagation is quick (compared to the original Active Search, which does the full-scale backpropagation). \n\n\n \n\n**Paper Revision**\n\nIn the first paragraph of the Related work section, we will add a few sentences to compare Transformer models and the GNN models based on the contents of our discussion here. \n\nIn the third paragraph, we will add references to the GNN approaches as the examples of the NCO techniques for the graph CO problem (as there should have been). And we will also explicitly mention that SGBS(+EAS) can be applied to some of these existing neural approaches to other types of CO problems.\n\nWe are not sure about updating the paper yet. We do not get the one extra page for the revision during this discussion period. We also plan to revise our paper reflecting comments from the other reviewers as well.\n",
" Most of the points in my review have received an excellent response - thank you for your clarifications and for addressing the concerns I had.\n\nOn the point of the model comparisons - I am aware that a transformer can be thought of as a fully connected GNN, but performing computations on a fully connected graph of course brings additional computational costs, which will presumably impede training and solving times to some extent. The authors have addressed the transformer vs. GNN differences in terms of optimality, but not training + inference time. \n\nI would also still like to know whether the authors think that the training overhead of EAS at test time may incur disadvantages at test time in terms of both inference time and solution efficacy when initially deployed (inference time because you must perform network updates at test time, and solution efficacy because initially no test time training will have taken place and therefore performance will presumably be poorer).\n\nWill the authors be updating their paper with the relevant ML4CO context, related work, further work/limitations with respect to GNNs for different problem types etc. before the end of the discussion period? ",
" Thank you for reviewing our paper. We find your review helpful and insightful. Let us answer to your questions below.\n\n \n\n**Q1. Context and related work**\n\n \n\n***- Neural Approaches on Graph CO problems***\n\nThank you for pointing out the important branch of neural combinatorial optimization, the graph CO problems using the neural ML approach. Honestly, we had them in our related work section while we were preparing our scripts, but somehow they had gone missing in the midst of frenzy process of putting everything together right before submission. We are grateful that you have noticed, and we will include them in the related work section in our revision. Graph neural net (GNN) based CO methods, such as those found in the references you have provided, are particularly effective on solving graph CO problems and thus have potential in making significant impacts in the real world. The Maximum Cut problem, for example, is a fundamental CO problem to which many NP-complete CO problems found in the industry can be reduced.\n\n \n\n***- Model Comparisons***\n\nIn order to position our work properly in the context of these popular GNN approaches, we must first point out that, technically speaking, the transformer-like models we use in our paper can be also considered as a type of GNN. These models, however, differ from regular GNN models in that they are highly specialized for fully-connected graphs. As such, they fit nicely to the types of CO problems we focus in our paper (TSP, CVRP, FFSP, etc.), but they cannot be directly implemented for many graph CO problems dealing with various topological graph structures.\n\nOrdinary GNN models, as they can be applied to graph problems of any topology, have been employed in various ways to solve the aforementioned fully-connected-type CO problems, such as TSP. These attempts, however, have not led to quite satisfactory results yet. For example, Dai et al. 2017 solves TSP with 100 nodes with roughly ***7%*** gap to the optimal solution, and Drori et al. 2020 demonstrated ***3%*** optimality gap for the same problem. This is to be contrasted with ***0.1%*** optimality gap of the POMO model we use for TSP100 in our paper. (And the POMO model achieves this level of solution quality in just single greedy rollouts. EAS-SGBS reduces this optimality gap down to ***0.02%***.) GNN models, developed for universal graph topology, seem to have hard time competing with specialized transformer-like models on these fully-connected-type CO problems. \n\nApplications of neural net models in the other way around, on the other hand, i.e. adapting transformer models to graph CO problems, have not been explored as extensively as should have been by the ML community. We believe this is a very interesting and possibly quite promising future research topic. \n\n \n\n***- Construction vs. Improvement***\n\nMost learning-based methods for CO problems in modern literature can be categorized into either the construction-type or the improvement-type. A construction-type method relies on a neural network to provide a policy for building a high quality solution from ground zero. Choices made during the construction are irreversible, making the method difficult to perform on complex and highly unpredictable environments (i.e. CO problems). An improvement-type method, on the other hand, usually starts with randomly generated, low quality solutions, but keeps on modifying them into better ones. 
This provides a more flexible and consistent approach that usually scales betters.\n\nBoth methods have been explored on solving graph CO problems. Dai et al. 2017, Li et al. 2018 and Drori et al. 2020 are construction-type approaches, whereas Barrett et al. 2020 and 2022 are improvement-types. SGBS(+EBS) algorithm we introduce in our paper is an improvement method, but it is very unique in that it is designed to be implemented on top of the pre-existing construction-type methods. This means that our method can be easily integrated with existing construction-type graph problem solvers, such as those in Dai et al. 2017 and Drori et al. 2020, and can improve their performances even more. ",
" \n\n**Q2. Reward function**\n\nWe understand your confusion, and we will refine the texts in the section in our final version of the paper. The confusion comes from the fact that we are using the term 'reward function' and 'problem instance' interchangeably. Why this is so is explained in line 104 and its proceeding lines. In our notation, $s_N$ is just a set of values assigned to decision variables, that exist independently from the details (parameters and constraints) of a specific CO problem. We first choose what problem instance to solve and only then we can evaluate how good a solution $s_N$ is by applying the reward function $\\mathcal R$ (defined by the parameters and constraints of the problem instance) on $s_N$. There is an one-to-one correspondence between $\\mathcal R$ and the problem instance in this notation, hence the interchangeability.\n\nThe use of the symbol $\\mathcal R$ to represent a problem instance was a way to make our formal mathematical formulations concise, but it is true that it can sometimes be misleading. \nWe hope that our explanations above have resolved your confusion. When we say that we sample different reward functions, we simply mean that we are sampling the training data (random CO problems instances) for our neural network.\n\n \n\n**Q3. Action sampling methodology**\n\n\nYou are right. The greedy rollouts simply follow the argmax of the policy neural net's outputs. \n\n \n\n**Minor issues**\n\nThank you for pointing out the typo (line 218) and the ambiguous use of the term 'reward' (line 139 and other places), and for your helpful comments regarding Fig 1. We will reflect them on the revised version of our paper. ",
" Thank you for such a thorough assessment of our paper. We'd like to clarify some of the points that might have been clearer. This reviewer's deeply insightful comments and questions are particularly helpful for the next revision. \n\n \n\n**Originality**\n\nYou have summarized our SGBS algorithm as the beam search procedure augmented by greedy rollout evaluation. The term 'beam search' used here could mean two different things as we have mentioned in Footnote 1 of the paper. If you mean beam search in the most general sense of the word as in https://en.wikipedia.org/wiki/Beam_search, then your description is correct. But then this will be a very weak ground for judging our algorithm to be of small originality. If this is the case, please see our answer to Reviewer 7y6f on the comparison of SGBS with Monte-Carlo Beam Search (MCBS).\n\nIf you mean beam search as the word is commonly used in the machine learning community, which we think is more likely, then we must point out that your summary is inaccurate. Previous work on construction-type RL methods has used beam search based on the policy likelihood, a common procedure used in many other areas of machine learning as well. This is certainly the most sensible and natural way of ranking a node. This policy likelihood, or the output probability of a partial solution, is calculated as the product of all probabilities encountered as the search moves from the root to a particular node. Our SGBS algorithm, however, *does not rely on this policy likelihood at all*. A variation of SGBS that does use it is described in Footnote 2 of our paper, but we have found it inferior and never have presented it formally in the paper. While we would not repeat the explanation of SGBS procedures here, we strongly argue that our SGBS algorithm is NOT just a simple extension to the common 'beam search' method, the one already being used actively by the community. Instead, it is a completely new approach toward executing beam search (in the general sense), specialized for policy networks trained to create solutions for complex CO problems.\n\n \n\n**Significance**\n\nWe are delighted to hear that you find the combination of SGBS with EAS an interesting technique, and we hope that you would also find the SGBS algorithm a meaningful invention as well. Please check our comments on 'Originality of SGBS' above, as well as our answers to your Q3&Q4 below. \n\nIn short, SGBS is not merely a simple extension to an existing technique. It is an unconventional tree search technique (to neural CO researchers), applied to the RL-trained policy networks for the very first time. As such, our work can lead to many follow-up papers, extended by other researchers interested in creating better decision-time planning methods using policy neural networks. \n\nWe have shown that SGBS outperforms existing tree search methods, including MCTS. While SGBS is similar to MCTS in many conceptual ways, it should be noted that MCTS is ill-suited for working with RL-trained policy neural networks in terms of its performance, at least in the current form commonly accepted by the deep learning community. \n\nAnd finally, while you did indicate that you appreciated the scalability and the ease of implementation of SGBS, we would still like to invite you to read our answer to Reviewer 7y6f on 'MCTS that can replace SGBS'. There, we have described just how difficult it is to design an efficient and practical tree search method like SGBS that can be used for neural combinatorial optimization.",
" \n\n**Q1. Assumption for the recourse mechanism of SGBS to work**\n\nWhen the network is in a state where it makes only a single mistake during a greedy rollout from producing a globally optimal solution, then SGBS will certainly correct it and return the optimal solution. In practice, of course, the network can make many mistakes along the way that SGBS rarely produces an optimal solution in a single run. (This is why we also have invented SGBS+EAS so that SGBS can be applied repeatedly.)\n\nWhat we find most fascinating about SGBS is not the fact that it can sometimes produce optimal solutions, but rather the fact that it consistently improves the existing solution closer to the optimal solution better than any other existing search methods, under a reasonable computational budget. This does not require an assumption such as there being just a single mistake of the model for it to work. SGBS is capable of finding an improved solution with fewer number of mistakes (or less severe mistakes) when the model makes so many more in its greedy rollout. \n\n \n\n**Q2. Evaluation of MCTS, Active Search, and EAS in Figure 2**\n\nThe detailed explanations you seek are described in Appendix C, under the paragraphs titled with 'Search details' and 'Adjustments made for POMO-trained policy network.' We provide a simplified answer here.\n\nTake, for example, Figure 2A, for which we limit the number of candidate solutions to be created to 1.2K for each problem instance. MCTS needs to run simulations each time before choosing a node definitively, which needs to happen at least 100 times for CVRP-100. Therefore, we let MCTS run 12 simulations at each depth level of the search tree (which is the same for SGBS). \n\nActive Search and EAS use POMO-training for their gradient descents, which is the RL method that has also been used to train the neural net model in the first place. More specifically, in order to execute Active Search (or EAS), four (4) sampling rollouts are drawn, and their results are averaged to make a baseline. Using this baseline, the policy gradient descents are applied upon every probability sampled during the rollouts. The procedure is repeated 300 times that make up a total of 1.2K rollouts being produced.\n",
" \n\n**Q3&Q4. Problem with MCTS applied on RL-trained models**\n\nAt the heart of MCTS is the 'selection policy,' which controls the balance between exploration and exploitation during its selection phase. (This is not to be confused with the rollout policy, the output from the policy neural network.) For example, one of the most well-known 'selection policies' is the UCT (Upper Confidence Bound 1 applied to trees) formula used for the classical random-move-based MCTS. To run MCTS not with random moves but with moves following a given policy model, one needs a 'selection policy' that can incorporate this prior knowledge. We use the most common 'selection policy' formula in the deep learning community, the one used by 'AlphaGo' [35] as well as the MCTS-TSP paper [33] that we have used as the model for our MCTS implementation. (For readers who are interested in more details of our MCTS implementation, we refer to [33] or the Python code we share in our repository.) This formula includes the term $U(s, a)$ (see Eq. C.1), which is proportional to the prior $P(s,a)$, the output from the policy neural network. \n\nOur paper focuses on policy models that construct high-quality CO solutions trained by the policy-gradient method (Eq. 1). Over the past few years, this new learning-based way of tackling complex CO problems has gained increasing popularity and progressively more successful results on various types of problems. One of the characteristics of this method is that the policy model it produces tends to show overconfident behavior, outputting disproportionately high probability values for its most favorable choices. This behavior is normal, and it is encouraged by the RL method itself because it results better outcomes. (When you are placing a bet on a game that you know you have a 60% chance to win, you want to play it 100% of the time, not just 60% of the time.) \n\nNow, for example, if one child node has a prior probability of 99% for being selected (which we see happening very frequently in our models) while all the others share the rest, the MCTS selection formula will almost never select other child nodes and run simulations for them. This is because the conventional 'selection policy' for deep learning applications we use does not have a strong enough drive for exploration that can offset such extremely imbalanced prior. This is to be contrasted with our SGBS algorithm, which will always check a pre-defined number of child nodes for their potentials with simulations regardless of how imbalanced the policy is.\n\nIt may be possible to mitigate this incompatibility issue between MCTS and the policy models trained by RL. One could try inserting a very strong entropy regularization term into the loss during the model training to never allow such strong preference from the model in the first place (at the expense of the decreased solver performance). Or, one could tweak the selection formula directly with a few tunable parameters to allow reasonable exploration even in the cases like the one described above. Solving this issue within MCTS is certainly an interesting topic of its own, but it is beyond the scope of our work. In our current settings, MCTS does not outperform SGBS for most problem instances, even if we allow a much larger time budget for it (such as 1200 simulations per step, instead of 12 as used in Figure 2). \n\n\nLastly, this incompatibility issue also explains the performance of MCTS shown in Figure 2B and Figure 2C. 
When the model faces unfamiliar problems (Figure 2B, low accuracy model) and is unsure what the good choices are, the problematic over-confidence disappears, allowing good explorations for MCTS to run. We find that MCTS performs almost as good as SGBS in this case. In Figure 2C, on the other hand, fine-tuning the model exacerbates the over-confidence problem, and the performance of MCTS becomes worse than a simple beam search. \n\n \n\n**Q5. Codes for MCTS and Beam Search**\n\nWe have uploaded MCTS and Beam Search codes to the github repository we provided. You also said you cannot locate the implementations of the SGBS codes, but they are written in '*Tester.py' files, and you can run them using 'test.py' files. Please forgive our bad filename choices, and sorry for the confusion. ",
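To illustrate why an overconfident prior starves exploration, here is a simplified PUCT-style selection rule of the kind discussed above (a sketch only; the exact constants and form of Eq. C.1 may differ):

```python
import math

def puct_select(children, c_puct=1.0):
    """Pick the child maximizing Q(s,a) + U(s,a), with U proportional to
    the prior P(s,a). A 0.99 prior on one child shrinks every other
    child's exploration term ~100x, so they are essentially never tried."""
    total_visits = sum(ch["N"] for ch in children)
    def score(ch):
        q = ch["W"] / ch["N"] if ch["N"] > 0 else 0.0
        u = c_puct * ch["P"] * math.sqrt(total_visits + 1) / (1 + ch["N"])
        return q + u
    return max(children, key=score)

# an overconfident policy output: the first child dominates selection
children = [{"P": 0.99, "N": 0, "W": 0.0},
            {"P": 0.005, "N": 0, "W": 0.0},
            {"P": 0.005, "N": 0, "W": 0.0}]
```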
" Thank you and we appreciate your nice review of our work. They are accurate and clearly summarized. ",
" Thank you reviewing our paper, and we are grateful for the comments and important questions raised. We answer to your questions below, highlighting our responses to key concerns.\n\n \n\n**Novelty of the SGBS algorithm**\n\nWe disagree with your statement that SGBS is very similar to Monte-Carlo Beam Search (MCBS*) [41] to the point that the novelty of SGBS is undermined. SGBS and MCBS are, in fact, built upon two totally different search strategies at their cores. Their superficial resemblance comes from the fact that they both rely on a single (or possibly the equal number of) rollouts from each candidate child node, the result of which determines whether it makes it to the next beam front or not. But the mechanism behind such uniform use of rollouts among the candidate nodes are vastly different between MCBS and SGBS. \n\nMCBS explicitly avoids the delicate mechanism for balancing the exploration and exploitation (compared with the use of UCT in MCTS) for simplicity's sake. It simply gives all child nodes of the beam front equal chances for playouts. (i.e., maximum exploration within the beam scope at the cost of search inefficiency.) MCBS may work well on simple abstract problems with *small branching factors*, but it is not suitable for more complicated problems (the TSP with 100 nodes, for example). \n\nSGBS, on the other hand, is more like MCTS than MCBS in carefully shaping its scope of exploration for better search efficiency. Unlike the classic MCTS that is based on random moves, however, we have equipped SGBS with a powerful policy neural network. The neural net limits the scope of exploration by making preliminary decisions on which child nodes get to have the chance for a playout, right on the spot, without having to go through many (MCTS-like) simulations. This is the expansion (pre-pruning) phase of SGBS. \n\nA tree-search method, neural or nonneural, is identified and categorized by how it deals with the problem of balancing exploration and exploitation, which is the fundamental challenge for any tree-search algorithm. A beam search strategy aided by policy neural network via pre-pruning process (not just on the rollouts) is thus novel and an extremely powerful concept that we are introducing to the community.\n\n-----\n[\\*] MCBS can be constructed with different settings of 'level'. We assume that the reviewer is referring to MCBS of level 1 only, because MCBS of higher levels has no resemblance to SGBS.\n\n \n\n**MCTS that can replace SGBS**\n\nEfficiency and scalability are serious issues for neural approachs to combinatorial optimization, and for a good reason. Because highly efficient heuristic methods already exist that are not learning-based for most of the popular benchmark CO problems, neural approaches are facing harsh challenges from the OR community for practicality on the basis of efficiency and scalability. \n\nWhen we deal with computationally expensive deep neural net models, MCTS is not too promising. Even with good parallelization techniques for MCTS, there are many levels of difficulties in order to make it work as efficiently as SGBS. Technically, first of all, it is not trivial to implement a parallel tree search running on GPUs. It is one thing to run a classical, CPU-based MCTS algorithm in parallel via multi-threading, but when it needs operations on GPUs, you would normally need multiple GPUs each matching with its partnered thread of CPU execution. 
For large scale projects, it is certainly doable, but economically it should not be a favored option. \n\nSecond, there is a GPU memory issue to cope with. Neural net models contain so many parameters, and the trend is having more. The policy model for FFSP in our experiments, for example, requires a memory size of more than 100 MB. \n\nWe have developed SGBS with meticulous attention to these efficiency issues. Our SGBS is optimized in its theory of operation and also in implementation as Python code. To traverse a node one step down in the search tree, SGBS needs just a single cycle of its Simulation phase. And during the Simulation phase, all rollouts are executed in parallel as one batch, from candidate child nodes to their termination nodes, in a single loop. MCTS can hardly match this level of parallelizability. \n\nFinally, let us briefly remind you of the difficulty of designing the MCTS just to have it perform properly with the policy models we use in our experiments. This problem is explained in our answers to Reviewer q7i4, under the heading 'Q3&Q4. Problem with MCTS applied on RL-trained models.' Despite the huge success with AlphaGo, MCTS with policy neural networks has not been a popular topic in the deep learning community, and there is a lack of study on its proper structure, the development of which may not be a trivial problem. ",
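A sketch of the batched Simulation phase described above, assuming a hypothetical batched environment interface (the released Python code is the authoritative implementation):

```python
import torch

def batched_greedy_rollouts(policy_net, env, states):
    """Roll out all candidate child nodes to termination in parallel as
    one GPU batch, in a single loop over the remaining construction steps."""
    done = torch.zeros(len(states), dtype=torch.bool)
    while not bool(done.all()):
        probs = policy_net(states)           # (batch, n_actions), one forward pass
        actions = probs.argmax(dim=-1)       # greedy action per rollout
        states, done = env.step_batch(states, actions)
    return env.rewards(states)               # one score per candidate node
```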
" This paper proposes two novel methods for neural-network-based combinatorial optimization.\nThe two methods, SGBS and SGBS+EAS show promising performance for solving three types of combinatorial optimization problems (TSP, CVRP, and FFSP).\n Strengths:\nThe authors proposed Simulation-Guided Beam Search (SGBS), combining Monte-Carlo Tree Search (MCTS) and beam search.\nSBGS is also combined with an interesting existing technique, Efficient Active Search (EAS), which updates a small part of the network during the search.\nThe experiments show that SGBS+EAS performs better than many existing approaches.\n\nWeaknesses:\nThe novelty is somewhat limited.\nThe key idea of EAS is already proposed in [1].\nSGBS is very similar to an existing work, Monte-Carlo Beam Search [41].\nThe contribution of this paper is to prove that EAS works well when combined with MCTS. In my understanding, in the EAS paper [1], the solutions are simply sampled from the model.\nWas there any theoretical/practical difficulty in extending the idea to MCTS?\n\nI agree that SGBS easily utilizes batch parallelism, but it is not the only way.\nHave you considered comparing the proposed method with existing parallel MCTS?\nIn my intuition, parallel MCTS equipped with progressive widening may also perform.\n\n---\nPost rebuttal comments.\n\nThank you for the detailed comment.\nHowever, I did not think I needed to change my score. There is a discussion about the two hyperparameters $\\beta$ and $\\gamma$.",
" The paper presents a Beam Search strategy augmented with Monte Carlo Rollouts evaluating the “return” potential of a sampled action, instead of the commonly used policy likelihood. Besides the immediate advantage of using a search mechanism on top of a network inference (as opposed to a single-shot inference), the paper highlights that the rollouts can act as a recourse mechanism to rectify incorrect decisions made by the network (when selecting the top-k most likely actions). To leverage the additional time allocated to the generation of a solution, the method is combined with the recently published Efficient Active Search (EAS) method. The paper shows interesting synergies between these two components. The approach and its variants are validated on a set of combinatorial optimization problems (TSP, CVRP, FFSP) as well as compared to Beam Search, EAS, MCTS and non-adaptive methods. **Originality**\n\nThe method presented is relatively simple, i.e. augmenting the beam search procedure with greedy rollouts. Although I'm not aware of the exact same scheme being published before, the originality of the method is limited. In addition, most of the performance is driven by EAS. Interesting synergies are highlighted and constitute a novel contribution. Overall, although the method is well described and investigated, the originality and contribution are slightly under expectations.\n\n**Significance**\n\nMachine Learning for Combinatorial optimization problems is an important area of research as this family of problem it underlies many real-life applications. I don't consider the search mechanism to be of significant importance but rather see an interest in the discussion around EAS + SGBS. The search is however a good suggestion for practitioners to easily implement a lookahead mechanism (easier and more scalable than MCTS). \n\n**Quality:**\n\nThe paper is technically strong. The results are well reported (tables and figs), documented and discussed. The balance in the level of details provided in the main text and the appendix is adequate for the reader.\n\n**Clarity:**\n\nThe quality of the writing meets the expected standards. The content is well structured and logically organized. The results are appropriately reported and discussed. - The rollout mechanism depends on the neural network as well — Could an argument be made against the fact that we assume that the network might make a mistake at step *t* but then takes a sensible sequence of greedy actions till the end of the episode (recourse mechanism of the SGBS)?\n- Caption of Fig 2: “All methods evaluate the same number of candidate solutions per problem instance.” Can you describe what it means in terms of MCTS simulations, Gradient descents applied to the network for Active Search, etc?\n- Fig 2 - What do these numbers vary as we grow or reduce the compute budget. Here the setup has been set to favour SGBS using (β = 4, γ = 4). Does the MCTS reach a higher asymptotic performance when given additional time? Given that the authors mentioned in the appendix that SGBS quickly plateaus, where (resources) do the methods cross in terms of performance? I'm surprised that a simple Beam Search with a random rollout to evaluate the nodes outperform the more structured MCTS search. Could the authors provide additional details on this?\n- Fig 2c - why does the MCTS’s performance not follow the same performance trend as the SGBS when fine-tuning the model? 
SGBS seems to benefit more than the MCTS when the model is fine-tuned.\n- I haven’t found the implementation of the MCTS or the Beam/SGBS search in the code linked to the paper. It seems that only the networks and environments are made available. Will that change in the future? -",
" This paper introduces a general approach for solving combinatorial optimization problems using a hybrid Beam-Search approach that can update it's policy on the fly. In contrast to work that focuses on improving policy towards getting solutions in a single shot, this work attempts to make better use of available computation. One trick here is replacing the compute-intensive MTCS with a simpler Beam-search to pick rollout candidates. Additionally, the authors use EAS to update the policy network to make more effective choices at test time. This paper contribution is in domains where queries to a ground truth simulator are cheap but the search space is combinatorial. ### Strengths\n- Well-motivated paper. \n- Clean execution. \n- Code is provided and readable. \n- Algorithm is tested on reasonable benchmarks.\n\n- I appreciated the section on the algorithm's strengths. \n\n\n### Weaknesses\nOverall I found it a good paper, no major flaws. . I'm limited in my ability to evaluate this paper, especially in the broader context. I read the paper, and understand the main points, but I cannot place it in the wider literature. \n\nI'm confident it is well executed, and sensible in it's approach. ",
" This paper considers the problem of solving NP-hard combinatorial optimisation (CO) problems efficiently and near-optimally using machine learning, specifically in the context of beam search. Recent works have sought to use neural networks to evaluate and select candidates with the beam search optimisation heuristic, iteratively building partial solutions with the use of a decision tree in a breadth-first search manner until a near-optimal solution is found. However, such approaches are vulnerable to poor action evaluations made by any of the neural network’s imperfect predictions, which can be detrimental to overall performance.\n\nTo address this, the authors propose a simple algorithm, simulation-guided beam search (SGBS), which combines neural beam search with simulated rollouts similar to those used in Monte Carlo Tree Search (MCTS) algorithms but without the need for potentially complex state-action value backpropagation. Specifically, at each step in constructing the partial CO solution and the next depth-layer of the search tree, SGBS selects a group of candidate actions using a neural network to predict their complete CO solution score. It then simulates a greedy rollout from each of these actions to their terminal states, and then selects a sub-group (with size equal to the beam width) of these initially marked candidate actions to save in the search tree as the beam front, thus pruning initially marked actions with ultimately poor outcomes from the final search tree/solution. This process is repeated until a complete CO solution is found from root to leaf in the search tree.\n\nAlthough SGBS alone performs well in time-restricted settings, the authors note that practitioners sometimes prefer allocating larger time budgets to find more optimal solutions. Since SGBS is a deterministic heuristic, it receives no benefit from larger time budgets other than adjustment of its hyperparameters, $\\beta$ and $\\gamma$, which the authors show have limited effect on performance beyond a certain point (which is quickly reached). To address this, the authors combine SGBS with the recently published ‘efficient active search’ (EAS) method of Hottung et al. 2022 to create a new algorithm, SGBS-EAS. The authors show that SGBS and EAS have a symbiotic relationship with one another whereby SGBS helps EAS to avoid local optimal while EAS provides an increasingly performant neural network model during test-time. Both SGBS and SGBS-EAS are shown to outperform some canonical CO heuristics and state-of-the-art ML solvers in both solution quality and solving time. Strong points:\n* All sections of the paper are excellently written and easy to understand.\n* As far as I am aware, the idea to combine neural beam search and simulated rollouts in this way is novel, even if it is simple.\n* The proposed method is easy to implement and integrate with existing neural beam search techniques.\n* The experimental results outperform baselines on standard NP-hard CO problems; an important and significant application area of ML.\n\nWeak points:\n* Lack of comparison to/discussion of some other state-of-the-art graph neural network techniques which do not require simulated rollouts or search trees (see below).\n* A few things which could be clarified (see below). 
* **Context and related work:** The proposed SGBS and SGBS-EAS algorithms, as well as the baseline agents compared to, rely on relatively expensive (in terms of both training time and inference) transformer models with additional training overhead at test time and/or simulated rollouts. Can the authors comment on how such techniques compare to other state-of-the-art ML-CO methods based on cheaper graph neural network models which do not require simulated rollouts (Dai et al. 2017, Abe et al. 2019, Li et al. 2018, Barrett et al. 2020 and 2022, Drori et al. 2020)? Would there be differences in optimality and training and inference time performance? Might the training overhead of EAS incur disadvantages at test time in terms of both inference time and solution efficacy when initially deployed? How does SGBS/SGBS-EAS fit in the context of these other literature contributions?\n\n* **Reward function:** In the SGBS Methodology section, the authors state that $P$ represents a set of target problem instances from which they sample different ‘reward functions’ $R’(S_N)$ during training, which enables the neural network to exploit the limited size of the distribution $P$ rather than having to train for the entire problem space. I’m not entirely sure what this means - is it that $S_N$ (i.e. the final solution found by the RL agent) is changing and therefore that the total return evaluated by the reward function is changing (since the RL agent explores during training), or are the authors actually using different reward functions to evaluate the same solution $S_N$ in different ways? If the latter, what reward functions are used? How can the RL agent learn to predict the values of states and actions when the reward function in the MDP is changing? I do not entirely follow this MDP formulation or which reward function(s) were used for each CO problem class.\n\n* **Action sampling methodology:** What policy is used for selecting actions in the SGBS greedy rollouts? Is it just the argmax of the $\pi_\theta(\cdot | s_{d})$ policy of the neural network, or some other heuristic?\n\n\n### Miscellaneous minor issues\n* Fig 1: Would be useful to have a key showing what the different colours, fills, and shapes of search tree edges and nodes mean. Also it may be helpful to add in the caption that Fig 1 is an SGBS step at $d=2$ for clarity\n\n* Pg 7 line 218: ‘the’ typo: ‘...are displayed with the respect to...’\n\n* Pg 4 line 129: Should ‘...the rewards are tagged to the...’ be ‘...the returns are tagged to the...’ given standard RL jargon which refers to per-step evaluation signals as rewards and total cumulative rewards across the episode as returns?\n\n\n### References\n\n* Andre Hottung, Yeong-Dae Kwon, and Kevin Tierney. Efficient active search for combinatorial optimization problems. International Conference on Learning Representations, 2022.\n* Hanjun Dai, Elias B. Khalil, Yuyu Zhang, Bistra Dilkina, and Le Song. Learning combinatorial optimization algorithms over graphs. In Advances in Neural Information Processing Systems, 2017\n* Kenshin Abe, Zijian Xu, Issei Sato, and Masashi Sugiyama. Solving NP-Hard Problems on Graphs by Reinforcement Learning without Domain Knowledge. arXiv:1905.11623, 2019\n* Zhuwen Li, Qifeng Chen, and Vladlen Koltun. Combinatorial Optimization with Graph Convolutional Networks and Guided Tree Search. In Advances in Neural Information Processing Systems, 2018\n* Thomas Barrett, William Clements, Jakob Foerster, and Alex Lvovsky.
Exploratory combinatorial optimization with reinforcement learning. In Proceedings of the AAAI Conference on Artificial Intelligence, 2020\n* Thomas D. Barrett, Christopher W. F. Parsonson, and Alexandre Laterre. Learning to solve combinatorial graph partitioning problems via efficient exploration. arXiv:2205.14105, 2022\n* Iddo Drori, Anant Kharkar, William R. Sickinger, Brandon Kates, Qiang Ma, Suwen Ge, Eden Dolev, Brenda Dietrich, David P. Williamson, and Madeleine Udell. Learning to solve combinatorial optimization problems on real-world graphs in linear time. arXiv:2006.03750, 2020 N/A"
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
2,
4
] | [
"OJ8pFRi2Lk",
"5soqdw4_PsJ",
"yBkwUQ-P2dv",
"2iUNEbOKTx1",
"7XqyiWvaZ_J",
"7XqyiWvaZ_J",
"HTrnFtLESRt",
"HTrnFtLESRt",
"HTrnFtLESRt",
"2RTDTDSyaAf",
"LKx2UlXg2FI",
"nips_2022_tYAS1Rpys5",
"nips_2022_tYAS1Rpys5",
"nips_2022_tYAS1Rpys5",
"nips_2022_tYAS1Rpys5"
] |
nips_2022_3r0yLLCo4fF | Quo Vadis: Is Trajectory Forecasting the Key Towards Long-Term Multi-Object Tracking? | Recent developments in monocular multi-object tracking have been very successful in tracking visible objects and bridging short occlusion gaps, mainly relying on data-driven appearance models.
While we have significantly advanced short-term tracking performance, bridging longer occlusion gaps remains elusive: state-of-the-art object trackers only bridge less than 10% of occlusions longer than three seconds.
We suggest that the missing key is reasoning about future trajectories over a longer time horizon. Intuitively, the longer the occlusion gap, the larger the search space for possible associations.
In this paper, we show that even a small yet diverse set of trajectory predictions for moving agents will significantly reduce this search space and thus improve long-term tracking robustness. Our experiments suggest that the crucial components of our approach are reasoning in a bird's-eye view space and generating a small yet diverse set of forecasts while accounting for their localization uncertainty. This way, we can advance state-of-the-art trackers on the MOTChallenge dataset and significantly improve their long-term tracking performance. This paper's source code and experimental data are available at https://github.com/dendorferpatrick/QuoVadis. | Accept | The paper initially had mixed reviews 4567. The main concerns of the reviewers were:
1. can better show the improvement on long-term occlusions (cbmW)
2. lack of results on autonomous driving datasets w/ camera parameters. (cbmW)
3. Questions about the evaluation metrics used (yuJE, Tgjz)
4. In Tab 1, most of the HOTA gain comes from linear prediction in 3D space, i.e., Kalman filters. (yuJE)
5. comparison on 3D MOT 2015 (yuJE)
6. missing ablation study on association threshold (yuJE)
7. what is the tracking / efficiency tradeoff for forecasting (XrjC)
8. how to deal with moving cameras (XrjC, Tgjz)
9. complex pipeline requires training separate sub-models (Tgjz)
10. ablation study on the different view projection methods (Tgjz)
The authors wrote a response to address these concerns. The reviewers were largely satisfied with the response. Reviewer yuJE still had a concern about the message of the paper (Point 4; Reviewer's point [A.1]), and responded:
> The authors replied by assessing that working in BEV is already trajectory forecasting. I do not agree with that; that is just 3D or metric tracking. And metric tracking + Kalman filter, which explains 90% of the contribution of the paper, should not be advertised as novelty, nor as trajectory forecasting. This view that I am suggesting here clearly helps the reader in understanding that trajectory forecasting is really of little help in MTT (~0.5% HOTA), which is the opposite of what the paper is claiming.
> As I see it, the paper has merits, e.g. ways to go from image to BEV in static as well as in moving sequences, but that is not the story told by this paper (the most interesting part being in the supplementary material).
Nonetheless, the final ratings were positive (5667), and the reviewers appreciated the problem solution to handle long-term occlusions, and brings a promising direction for future research. The AC agrees and recommends accept. The authors should revise the paper according to the reviewers' comments and the discussion.
| train | [
"o43vkBpKHFy",
"4NuEMlFeP7P",
"m5cGaXglJ6C",
"_aZ2fFZaSuu",
"pCQt-GvZwBY",
"MF8D486mQi",
"4H3BqEuPF-A",
"5qFN2s6VsZj",
"hMxvisqGIih",
"5tjqUxnV3e7",
"xlMOXJjrJ50",
"r-jJL5Ca4vd",
"r-nQzm78JhF",
"3Mxa-OCb_Xt",
"vbEJoxftums"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the rebuttal!Most of my concerns are adequately addressed. I keep my postive rating.",
" I have read the responses from the reviewers, and they addressed my concerns. I will increase my rating after the Reviewer-Meta Reviewer Discussion phase. \n\nI recommend the authors highlight these performance analyses of occlusions in their abstract/introduction to show the appealing benefits of the work. ",
" Dear Reviewer, we hope our previous comment clarified your main concerns. We are open to further discussion for the remaining time.",
" Dear Reviewer, \nwe hope our previous comment clarified your main concerns. We are open to further discussion for the remaining time. ",
" Dear Reviewer, \nwe hope our previous comment clarified your main concerns. We are open to further discussion for the remaining time. ",
" I appreciate the feedback from the authors and I have no further concerns. Look forward to the revised paper and released code on this work!",
" \nWe are happy that the Reviewer finds our paper well-written, our method sound, and reasonable. We thank the Reviewer for their feedback.\n\n### The Reviewer is asking for additional experiments to support the strength of our paper and questions the significance of our methods because we can only improve the HOTA performance by ~0.1.\n\nEven though long-term occlusions present the biggest challenges to modern trackers, they are statistically rarer than most fully visible short tracks or shortly-occluded tracks. \n\nThus, improving a smaller number of the most challenging cases has only a marginal impact on the overall performance. \n\nFor this reason, in Fig. 1 and Tab. 1, we highlight results obtained when only evaluating object tracks that undergo occlusions. As can be seen, our method successfully resolves over $17.4\\\\%$ of long-term occlusions, which are not solved by prior work. \n\nWe believe this increase is a significant and meaningful contribution to the community, in addition to our analysis of how current trends in trajectory prediction affect tracking performance.\n\n### The Reviewer suggests expanding experimental evaluation using an autonomous driving dataset.\nPer the Reviewer's recommendation, we analyzed one of the largest vision-based datasets for autonomous driving, BDD100K (https://www.bdd100k.com/), and widely-used KITTI (http://www.cvlibs.net/datasets/kitti/). Our analysis shows that less than $0.6\\\\%$ tracks in BDD100K and $4\\\\%$ tracks in KITTI contain occlusion gaps longer than $2s$. In contrast, in MOTChallenge, $19.4\\\\%$ contain long (over $2s$) occlusion gaps. This is likely due to the semi-automated annotations process in these large-scale datasets, where objects reappearing after occlusions are often assigned new identities.\n\nTherefore, autonomous driving datasets are at the moment, not well suited for studying long-term tracking. \n\nIn contrast, identity preservation is essential for video editing, safety camera analysis, or social robots interacting with humans. In these scenarios, we often only have access to a single RGB camera without additional 3D sensor data. In these cases, BEV reconstruction combined with trajectory forecasting greatly contributes to improving long-term tracking.\n\nThe Reviewer rates our soundness as excellent, presentation, and contribution as good; however, they recommend rejection. We would appreciate insights into the final rating.",
" ### The Reviewer asks about the 2m threshold in the experiment shown in Fig. 4.\nFor the experiment in Fig. 4, we empirically set the threshold to $2m$. \n\nOther choices for this threshold consistently led to the same conclusion: BEV consistently outperforms pixel and pixel (L2) motion. We will include an analysis of the sensitivity of this threshold in the final paper (a sketch of metric-space matching with such a threshold follows after this response). \n\n### The Reviewer asks for an explanation of the difference between the motion models named pixels and pixels 2D in Fig. 4b.\n\nIn the plot, both models apply a linear motion model to pixel positions. However, the pixel model uses only IoU matching, while the final position of the pixel 2D model is transformed to BEV and additionally matched with an L2 threshold of $2m$. \nThis demonstrates the advantage of transforming the prediction into a \"normalized\" metric BEV over staying in pixel space.\n\nInterestingly, the pixel 2D model performs worse for longer occlusions than the H model, which directly forecasts and matches in BEV. This experiment suggests the importance of forecasting in metric space and counteracting the non-linearity of the camera projection. \n\n\n### The Reviewer asks for clarification of line 322: \"tracking results suggest otherwise\".\n\nIn line 322, we discuss the single-generator GAN model, evaluated using $k = 20$ generated samples, as is the standard in the forecasting community.\n\nIn this configuration, we follow the standard evaluation protocol applied in trajectory forecasting and report optimal (lowest) ADE and FDE. \n\nWith the sentence, \"tracking results suggest otherwise,\" we refer to the tracking performance of the model above, which yields the lowest overall HOTA score among all evaluated motion models. \n\nWhile successfully reducing the number of lost tracks for short ($-18.03 \\\\%$) and long ($-15.63 \\\\%$) occlusions, this configuration yields the lowest association precision. \n\nAs the forecasting model produces $20$ samples for each lost track, several new detections are incorrectly associated with existing tracks, resulting in identity transfer errors. This result suggests a misalignment of the evaluation metrics used in forecasting and tracking -- a better forecaster in terms of ADE/FDE does not necessarily lead to a better tracker.\nThis is a known drawback of ADE/FDE metrics, which essentially measure only recall and not the precision of the forecasting output.\n\n",
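A minimal sketch of metric-space matching with a distance gate, as referenced in the answer above; the $2m$ value and Hungarian matching are used for illustration, and the released code may gate differently:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_in_bev(pred_xy, det_xy, max_dist=2.0):
    """Match forecasted BEV positions of lost tracks (M, 2) to new
    detections (K, 2) by Euclidean distance, discarding pairs farther
    apart than the threshold (2 m in Fig. 4)."""
    cost = np.linalg.norm(pred_xy[:, None, :] - det_xy[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]
```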
" We are happy that the Reviewer finds our paper well written and nicely structured, the analysis of our contributions to be rigorous, and the results to favor the proposed approach. We are grateful for the Reviewer's valuable feedback and are happy to respond to the Reviewer's questions. \n\n### Reviewer asks to report additional tracking metrics.\n\nPer the Reviewer's request, we report IDR metrics to Tab. 2 (below).\n\n| | MOT17 (static) | | | | | | | | MOT 20 | |\n|-----|----------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|\n| | BYTE | CenterTrack | CSTrack | FairMOT | JDE | TraDeS | TransTrack | QDTrack | BYTE | CenterTrack |\n| IDR | 78.61 (+0.39) | 65.25 (+6.25) | 67.53 (+0.87) | 66.23 (+0.53) | 56.08 (+1.09) | 67.12 (+1.06) | 61.39 (+0.01) | 62.17 (+0.68) | 66.44 (+0.34) | 35.87 (+3.23) |\n\nNext to the results presented in the paper, we can see that our tracking-by-forecasting method significantly improves the performance of many baseline trackers (e.g. $+6.25$ IDR for CenterTrack).\n\n### Reviewer asks to explain the choice of HOTA (AssRe) metric over IDF1 (IDR).\n\nWe decided to focus on the HOTA metric because it is the current metric that best balances both aspects of tracking task, detection, and association, especially compared to MOTA (measures mostly detection) and IDF1 (measures mainly association). Moreover, HOTA also allows us to perform a fine-grained analysis of the association performance by inspecting both association recall (AssRe) and precision (AssPr). \n\n### Reviewer asks for clarification and definition of ID-Recall metric presented in Fig. 1.\n\nThe ID-Recall metric (Fig. 1, defined in Suppl. F) differs from IDR, defined in [A]. As IDR depends on factors such as trajectory length, the occlusion window, and global optimization behavior, we construct a straightforward but effective metric called ID-Recall. We only focus on occluded tracks and compute the ratio between successfully re-identified after the occlusion divided by the total number of occlusion cases for different occlusion lengths. We thank the Reviewer for pointing out that this should be clarified in the paper.\n\n### Reviewer asks to report results on MOTChallenge 3D MOT'15 challenge.\nUnfortunately, we were unable to accommodate this request -- we were informed by MOTChallenge support team that submissions to this challenge are no longer allowed due to inaccuracies in the computation of the 3D groundtruth. \n\n### Reviewer asks if it is possible to show the impact of the contribution of Bird Eye View Reconstruction and Trajectory Forecasting independently. \n\nHistorically tracking methods have used simple linear models, eg., Kalman filter, to model motion in image (pixel) space. \nIn contrast, forecasting methods operate in a metric, BEV space. \n\nAs the Reviewer correctly observed, estimating a homography for a BEV transformation already results in a considerable performance boost of the linear model compared to the image-based counterpart (BEV linear > image linear), reducing IDSW in long-term occlusions by $16.1\\\\%$. However, as seen in Tab. 1, we can further improve performance ($17.4\\\\%$) using multimodal trajectory prediction models. This is not the case when operating in pixel space ($8.99\\\\%$).\n\nThe analysis of different forecasting methods is part of the contribution of our paper and is only possible thanks to the BEV reconstruction. 
Trajectory forecasting models trained in pixel space (without a geometric meaning) would forfeit their purpose.\n",
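A sketch of the ID-Recall computation described above, reconstructed from the stated definition in Suppl. F; the bin edges are assumptions for illustration:

```python
from collections import defaultdict

def id_recall(occlusion_cases, bin_edges=(0.5, 1.0, 2.0, 3.0)):
    """For each occlusion-length bin, the fraction of occluded tracks
    whose identity is preserved after reappearance. `occlusion_cases`
    is a list of (gap_seconds, recovered: bool) pairs."""
    totals, hits = defaultdict(int), defaultdict(int)
    for gap, recovered in occlusion_cases:
        b = sum(gap >= e for e in bin_edges)   # index of the length bin
        totals[b] += 1
        hits[b] += int(recovered)
    return {b: hits[b] / totals[b] for b in totals}
```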
" We thank the Reviewer for their valuable and positive feedback on our paper. We are thrilled that the Reviewer rates our paper to be accepted at the NeurIPS conference. \n\n### Reviewer asks for the trade-off between accuracy and efficiency.\n\nWe set the number of trajectory forecasts based on the performance/ accuracy of the validation set.\nThe computational cost for predicting forecasts and matching to new detections increases linearly with the number of trajectory forecasts. However, the prediction and matching are only small parts of the computation of the tracker. In practice, we only predict $3-5$ trajectory forecasts for each lost object which does not significantly affect speed performance.\n\nThe paper primarily focuses on building an entire pipeline from video to tracks studying different forecasting paradigms. Once the community starts to appreciate the benefit of BEV and trajectory forecasting, future work can focus on improving the end-to-end integration and efficiency of the algorithms.\n\n### Reviewer asks about the improvement of depth consistency with temporal depth estimator.\n\nHaving temporally stabilized depth estimates will most likely improve the robustness of the moving camera setup and the 3D point cloud for static scenes. Future work can try to use models such as [1].\n\nTo explain our good results on moving cameras against the bad performance of 3D reconstruction of pedestrians in Fig. 4(a), we measured the temporal inconsistency of the projected 3D points.\n\nThe magnitude of fluctuation is not the same for all pixels. The localization uncertainty of 3D points of smaller objects (like pedestrians) orthogonal to the scene plane is on avg. 6.5 times larger than inconsistencies on the ground plane. \nThe experiment in Fig. 4(a) supports this finding as the frame-wise positions obtained from depth estimates are not very performant for trajectory forecasting. \n\nWhile not perfectly consistent, we can use the homographies estimated for different timesteps to compute the egomotion and BEV of the moving scenes. Noise in the real-world positions is averaged in the homography estimation because we use all pixels of the ground segmentation mask (usually a larger part of the image), and small fluctuations in the depth values are averaged in the homography estimation.\n\n[1]: Tananaev et al.: Temporally Consistent Depth Estimation in Videos with Recurrent Architectures",
" We thank the Reviewer for their valuable feedback and for recognizing our work as beneficial to the community to overcome the challenges of long-term occlusions. We also thank the Reviewer for highlighting other work before the deep learning era, which we will gladly add to the Related Work discussion.\n\n### Reviewer comments that our method is complex, consisting of different components.\nWe agree with the Reviewer that our overall method is rather complex, primarily due to difficulties associated with localizing object trajectories in BEV space based on a monocular video. Each step of our model, which had to be built and studied in isolation, provides several insights and discusses alternatives in the experimental section. As usual, based on lessons learned, we hope we can simplify and streamline methods for joint tracking and forecasting in the future. \n\n### The Reviewer asks for an experiment comparing the projection approaches to represent objects in BEV presented in the MOTSynth experiment on the MOT dataset.\n\nIn Fig. 4, we compare trajectory prediction for four different projection methods on the MOTSynth dataset: 3D ground truth positions projected on the scene plane (obtained from the GT depth), pixel positions transformed by the estimated homography, 3D positions reconstructed from image depth estimates and projected to BEV, and directly predicting motion in the pixel space.\n\nThe comparison on real MOT data between the motion of the Kalman Filter in pixel space and BEV is demonstrated in the paper in Tab. 1. The results suggest a slight improvement of BEV motion in the global HOTA score of $54.08$ vs. $54.11$, but a significant increase of $7.08pp$ for long occlusions ($>2s$). \n\nWe did not report results for the ground-truth projection and 3D depth points in the paper. In the following, we explain the absence of these experiments but share some insights with the Reviewer.\n\nIn contrast to MOTSynth, we do not have ground-truth 3D positions for MOT sequences because the dataset only provides 2D object bounding boxes. \nAlso, we experimented with 3D points directly extracted from the depth estimates on the MOT sequences. Unfortunately, temporally inconsistent depth values of pedestrians result in an average 3D localization error of $0.76m$ between consecutive frames. These extracted positions form very noisy trajectories, which are not accurate enough for trajectory forecasting and matching. \n\nOur experiences with these trajectories showed significantly inferior tracking performance than with our BEV reconstruction using homography why we did not proceed with these experiments.\n\n### Reviewer reports broken hyper-references in the result tables.\nThanks for pointing this out; we will fix this. \n\n### The Reviewer asks for reporting of MOTA scores next to HOTA.\nWe provide MOTA scores for the ablation study and final submission in Tab. 3, 4 (main), and Tab. 1, 2 in Supplementary. \n\nAs per the Reviewer's request, we also report them here:\n\n| MOT17 (static) | | | | | | | | MOT 20 | |\n|-|----|---|--|-|-|-|-|-|-|\n| BYTE | CenterTrack | CSTrack | FairMOT| JDE | TraDeS | TransTrack | QDTrack | BYTE | CenterTrack |\n80.09 (+0.01) | 70.77 (+0.39) | 71.31 (+0.05) | 71.82 (+0.05) | 59.57 (+0.06) | 70.93 (+0.09) | 69.5 (+0.01) | 69.61 (+0.08) | 73.38 (+0.0) | 47.57 (+0.24) |\n\nMOTA has historically been the most prominent metric for multi-object tracking. 
However, the community has identified some drawbacks of the MOTA metric, and the public benchmarks like MOTChallenge and KITTI are slowly moving toward the HOTA evaluation metric. The main drawback for our studies is that MOTA dominantly focuses on detection performance and only little measures associations. As we do not modify the detected bounding boxes and only change their associations, the metric only marginally changes with improving associations.\n\n### Reviewer asks for clarification on how we compute the egomotion for moving cameras.\n\nIn the case of a moving camera, for each frame, we estimate a homography matrix $H_t$ and the optical flow of the image $O_t$. Given the homography $H_t$ we transform the pixel positions $x_t$ the corresponding BEV coordinates $X_t = H_t \\cdot x_t$. We also use the optical flow to translate the pixels to $x^\\prime_t = x_t + O_t$. Finally, we compute the translated points given the homography of the current timestep $X_t^\\prime = H_t \\cdot x_t^\\prime$. \n\nWe do this for all pixels of the semantic ground masks of two consecutive images. Hence we obtain two sets $\\{ X_t \\}$ and $\\{X_t^\\prime\\}$. Now we compute the translation $t_t$ for the timestep $t$ that optimizes the $L2$ error between the two point sets (we do not consider rotations). The translation corresponds to the egomotion of the camera. Then, the egomotion $t_t$ is added to $X_t$ to construct absolute positions in BEV. \n\nWe provide this explanation in the supplementary material C.2 -- we will clarify this in the revised paper.\n\n\n",
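A sketch of the homography projection and translation-only egomotion estimate described above (our NumPy illustration of the stated equations; variable names are assumptions):

```python
import numpy as np

def to_bev(H, px):
    """Project pixel coordinates (N, 2) to BEV with a 3x3 homography H."""
    pts = np.concatenate([px, np.ones((len(px), 1))], axis=1) @ H.T
    return pts[:, :2] / pts[:, 2:3]

def egomotion_translation(H_t, ground_px, flow):
    """Project ground pixels and their flow-shifted counterparts with the
    same homography; the L2-optimal translation between the paired point
    sets is the mean displacement (rotations are not considered)."""
    X = to_bev(H_t, ground_px)                # {X_t}
    X_prime = to_bev(H_t, ground_px + flow)   # {X'_t}
    return (X_prime - X).mean(axis=0)         # argmin_t sum ||X + t - X'||^2
```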
" This paper explores the long-term occlusion problem in multi-object tracking, and proposes to using trajectory forecasting methods to compensate the tracking losts. The forecasting module is conducted in bird-eye-view. The method can advance state-of-the-art trackers on the MOT Challenge dataset. Overall, I like the idea in this paper. I agree that most current multi-object trackers do not tackle long-term occlusions. The solution in this paper is sound and makes sense to me. However, I feel that the work of this paper hasn’t been finished yet. It doesn’t have supportive experiments. I hope the authors can improve the draft with the following comments. \n\n- Strengths\n1. The paper is well-written and easy to follow. \n2. The paper tackled an existing problem in multi-object tracking and the solution is reasonable. \n\n- Weaknesses and Suggestions \n1. It makes sense that long-term occlusion rarely occurs in MOT Challenge datasets. However, the paper should have supportive experimental results. Current results, boost tracker by ~0.1 HOTA cannot prove the effectiveness of the method. I suggest the authors try to test the oracle of forecasting matching and show the advanced percentage of the method in this part of errors. \n2. How about doing experiment in autonomous driving datasets? The dataset has camera parameters. \n See weaknesses. The authors discussed the limitations in the paper. ",
" The paper investigates the exploitability of trajectory forecasting in multi target tracking. First, unmatched trajectories and unmatched detections are projected in bird-eye view (BEV), trajectories are then extended according to some model and tested for association. Projection in BEV also introduces visibility constraints that further reduces the search space of true matchings. The author validate their projection method against the true 3D position of pedestrians (in MOTSynth), then study the applicability and benefits of different trajectory forecasting models. They show that the proper combination can improve the HOTA score and reduce ID switches of 7 SOTA methods on MOT17 validation set. In the supplementary, the authors detail how their projection method can be extended to moving sequences. - (+) paper is well written and nicely structured, the reading is fluent; figures and tables are also helpful in the understanding of the key steps\n- (+) the analysis of the contributions is carried out with an appropriate level of rigour\n- (+) results are in favor of the proposed approach \n\n- (-) HOTA is a complex and composed measure that tries to unify/project many aspects of tracking to a single scalar, so the authors have to compensate by also using AssA, AssPR as well as ID switches. Why did the authors thought these metrics to be more appropriate than ID recall [A] which measures the ability of a tracker to associate the same identity to a trajectory despite occlusions and interruptions? Isn't this exactly what the authors are trying to improve? Is the \"ID Recall\" in Fig.1 the one from [A]?\n\n[A] Performance measures and a data set for multi-target, multi-camera tracking\n\n- (-) From what I can see from Tab.1 most of the gain comes from a linear prediction method in 3D space. Other than that there is only a 0.5% HOTA left to gain from other trajectory forecasting methods. In this perspective I think the paper is overstating the importance of trajectory forecasting methods in multi target tracking and not helping the reader draw the correct conclusions. 3D tracking has been around since MOTChallenge 15 and kalman filtering even earlier and this seems to be the thing that explains 90% of the performance improvement (54.11-50.71)/(54.52-50.71).\n\n- (-) For the same reason as above, I would have liked to see a comparison on 3D MOT 2015\n\n- (-) In Fig.4 B we see a small difference between H and pixels L2, and a large difference between pixels L2 and pixels. The only way the reviewer can explain the difference between pixels L2 and pixels is through the use of a threshold in different domain, which seems to indicate that the threshold in BEV is less tight than the threshold in pixel space. As a matter of fact 2m seems like a very large margin to associate a trajectory and a detection, how did the authors choose this threshold? 
I think to remember that for both MOTA and IDF1 the threshold for 3D tracking was 1m instead of 2m (can be checked in the MOTChallenge evaluation kit).\n\n- (-) lines 127-129 \"simply applying trajectory prediction to MOT is not trivially possible\" are misleading, the community has been doing this for years and it's also reported in Tab.1 Kalman Filter (pixel) with decent results.\n\n\n - Can the authors add the ID recall from [A] to the ablation studies in Tab.1 (the MOTChallenge evaluation kit does provide the means to compute it as it is required to compute the final ID F1) or, in alternative, justify why they don't find the metric appropriate and instead use AssA, AssPR and IDS.\n- Can the authors help the reader putting in perspective the impact of the different contributions, i.e. working in bird eye view vs trajectory forecasting?\n- Can the authors motivate the choice of 2m as a threshold for BEV matching and try to justify the difference between pixels and pixels 2D in Fig.4b?\n- Can the authors help the reader understanding the end of line 322 \"tracking results suggest otherwise\"? How do the authors explain this unexpected result? nothing else to add",
" This paper investigates the problem of long-term multi-object tracking. This is a relatively under-explored problem that worths investigation since most existing tracking methods focus on short-term tracklets (usually shorter than 2s). The main contribution of this paper is that they project the scene into bird-eye views (BEV) using homography transformation, and then apply trajectory forecasting and tracking jointly in the BEV space. The original version described in the main text supports only videos captured from fixed views, and an extending version described in the supplementary material steps further to support videos captured from moving cameras. Results on standard test sets suggest the proposed method improves tracking accuracy when combined existing trackers, and refreshes state-of-the-art when combined with the best performant tracker ByteTrack. Originality:\n\nGood. Existing MOT datasets and evluation metics focus more on short-time tracking accuracy. While many methods already perform fairly well on these metrics, they always fail when targets lost for relatively long time. This paper introduces trajectory forecasting in the BEV space to handle long-term lost tracklets and shows promising results. To my knowledge, this is not well-explored in previous literature so the originality is good.\n\nQuality:\n\nGood. The entire pipeline is simple, easy to understand, and seems work well for long-term tracklets. \nMy biggest concern is about the forecasting module. From Table 1 it can be seen that applying Kalman Filter in the BEV space already performs well in terms of both prediction and tracking, and using advanced learning based methods seems do not bring too much gain. This is somehow below my expectation since Kalman Filter is such a simple linear model. Perhaps the problem lies in that most motion in the considered dataset is linear, and in this case, comparing different forecasting modules in a dataset with more complicated (non-linear) motion partterns may be helpful to validate the effectiveness of GAN. \n\nClarity:\n\nGood but can be improved. In general, the presentation is clear and easy to follow. However, it would be better if key experimental conclusions are highlighted more clearly, especially for Table 1. \n\nSignificance:\n\nGood. The long-term tracking problem is absolutely an imporant problem that is not well-explored due to limitation of existing datasets and metrics. This paper sheds new light on a promising direction towards solving this problem: combine forecasting and tracking in BEV space.\n 1. As mentioned in the paper, forecasting multiple possible trajectories for lost tracklets bring more computation cost when perform association. It would be good to see how this affect the tracking accuracy/efficiency and how to do the trade-off.\n\n2. When dealing with moving cameras, homography is estimated for each single frame. This seems contradict conclusions in Figure 4(a), where it is shown that per-frame depth estimation is not stable so that the esitmated homography is largely affected. Is it possible to stablize the depth estimation along the temporal dimension? \n\nTypo:\nL#309 0.43% -> 43.0% The authors carefully discussed the limitation and and potential negative societal impact of their work in the main text.",
" This paper discovers that trajectory predictions for moving agents will significantly reduce this search space and explores the trajectory prediction can improve long-term tracking robustness of MOT. Furthermore, they show that their proposed method reasons MOT in the bird-eye space and generates a small yet diverse set of forecasts while accounting for their localization uncertainty. Therefore, they manage to advance state-of-the-art trackers on the public benchmarks.\n\n ### Strengths\n- This idea is interesting and it will benefit the development of MOT community. Considering occlusions are one of the main challenges in tracking, this work proposes a novel way to overcome this concern. \n- This paper identifies several hurdles in the integration of trajectory prediction and MOT. Besides, there are several interesting and inspiring conclusions. \n- The presentation of this paper is fine and the organization is clear.\n- They achieve the state-of-the-art performance on public benchmarks. Maybe, it may inspire more following MOT methods on this path. \n\n### Weaknesses\n- The framework seems too complex. It includes five independent sub-models, e.g., depth estimation network and segmentation network. All of these sub-models require separate training, which may degrades the robustness of the framework.\n- Before the deep learning era, there are a few works that already attempted to incorporate crowd motion prediction or crowd motion models, e.g., social force, to MOT, some of which are listed below. It'd be better to refer to these works.\n> G Antonini,SV Martinez,M Bierlaire,JP Thiran: _Behavioral Priors for Detection and Tracking of Pedestrians in Video\nSequences_ IJCV 2006 \n> Stefano Pellegrini, Andreas Ess, Luc Van Gool: _You'll Never Walk Alone: Modeling Social Behavior for Multi-target Tracking_ ICCV 2009 \n> Kota Yamaguchi, Alexander C. Berg, Luis E. Ortiz, Tamara L. Berg: _Who are you with and Where are you going?_ CVPR 2011 \n> Wenxi Liu, Antoni B. Chan, Rynson W. H. Lau, Dinesh Manocha: _Leveraging Long-Term Predictions and Online Learning in Agent-Based Multiple Person Tracking_ TCSVT 2015 \n\n\n\n\n - The proposed model estimates the depth and segmentation masks of the video frame, and thus project to BEV space. The quality of BEV is the basis of trajectory prediction. This paper indeed investigated how good the view projection (in Fig. 4) in synthetic data. I wonder if is it possible to conduct experiments to assess the impact of different view projection methods on MOT? \n- The pointers to tables are missing, which makes reading a bit difficult.\n- MOTA seems to be a more popular metric in MOT challenge. In previous MOT works (e.g., Bytetrack), they provide MOTA metrics. For better reference, it would be better to provide MOTA as well.\n- The implementation details on how to cope with the moving camera are not clearly provided. - The major limitation of this work on the uncertainty of homography transformation has been discussed. Their proposed method partially account for these uncertainties via simple strategies. Considering this is a novel exploration along this direction, I think the limitations have been sufficiently addressed. \n\n- The potential negative societal impact of their work was not dicussed. "
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
4,
4
] | [
"5tjqUxnV3e7",
"4H3BqEuPF-A",
"3Mxa-OCb_Xt",
"r-nQzm78JhF",
"r-jJL5Ca4vd",
"xlMOXJjrJ50",
"r-jJL5Ca4vd",
"r-nQzm78JhF",
"r-nQzm78JhF",
"3Mxa-OCb_Xt",
"vbEJoxftums",
"nips_2022_3r0yLLCo4fF",
"nips_2022_3r0yLLCo4fF",
"nips_2022_3r0yLLCo4fF",
"nips_2022_3r0yLLCo4fF"
] |
nips_2022_cYeYzaP-5AF | Meta-Reinforcement Learning with Self-Modifying Networks | Deep Reinforcement Learning has demonstrated the potential of neural networks tuned with gradient descent for solving complex tasks in well-delimited environments. However, these neural systems are slow learners producing specialized agents with no mechanism to continue learning beyond their training curriculum. On the contrary, biological synaptic plasticity is persistent and manifold, and has been hypothesized to play a key role in executive functions such as working memory and cognitive flexibility, potentially supporting more efficient and generic learning abilities. Inspired by this, we propose to build networks with dynamic weights, able to continually perform self-reflexive modification as a function of their current synaptic state and action-reward feedback, rather than a fixed network configuration. The resulting model, MetODS (for Meta-Optimized Dynamical Synapses) is a broadly applicable meta-reinforcement learning system able to learn efficient and powerful control rules in the agent policy space. A single layer with dynamic synapses can perform one-shot learning, generalize navigation principles to unseen environments and demonstrates a strong ability to learn adaptive motor policies, comparing favorably with previous meta-reinforcement learning approaches. | Accept | This is exciting work that demonstrates the ability of self-modifying networks to solve meta-reinforcement learning problems. The reviewers all agree that this is strong work, and the authors have convincingly addressed most of the concerns the reviewers brought up during the reviewing phase. There are a few lingering questions about the applicability of the baselines, but these are quite minor. The authors have further promised to add analytical comparisons and additional details /motivation on the Hebbian update. Given this, I view this paper quite positively and encourage the authors to integrate the additional experiments and details they mentioned in the feedback stage. | test | [
"WDS5R5cT7h7",
"UQZxk0qbWgf",
"71YuwtONQnj",
"nZqGD4lrn0Q",
"QbhWR-qNREIl",
"Ed9J8gzNQFS",
"1tqcFQrkQjE",
"0JtnoR4Ukz5",
"F7XY0KWVDoB",
"GjVHFlvQgv_",
"ZWgkiaQcwQ",
"-c2alWfalf",
"3KVlDNW2XWv",
"J-yjRplRa8G"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank reviewers for their diligence in evaluating our modifications and willingness to increase their score. We are enthusiastic about our work forming a stronger contribution thanks to their feedback! \n\nRegarding last comments from reviewers, we are currently working on delivering an additional analytical comparison (rev. 7zkD) between MetODS and MAML synaptic updates in the Harlow task (section 5.1) that will connect better our perspective on policy transport (rev. aCER) and provide additional justification for our dynamic Hebbian update compared to gradient-based approaches (rev. dG2z), that we will integrate to a definitive version in the coming days.",
" Thank you for your clarifications. \n\nWhen I asked \"What is the (high level) intuition behind the \"read\" and \"write\" operations of the update rule ?\", I meant a higher level one. As you said, this rule isn't commonly used in the field yet. Maybe you could provide a concrete example of a situation (in which an agent is having a suboptimal behaviour), detailing the high level computations of these two operations. \n\nI think it could help readers that are not familiar with this rule to grasp how it works, and its difference to GD. ",
" Thank you for your clarifications! Most of my concerns are addressed and I am willing to improve my score accordingly.\n\nHowever, I still feel that the advantage of this approach is not explained clearly enough. The reply provides some intuitive explanations which are helpful but not convinced enough. Maybe providing some small analytical experiments would be more insightful.",
" Thanks to the authors for their thorough response, and apologies for failing to include the references list with my original review—I have attached it below, though it sounds like the authors figured out which papers I meant. I do feel like the paper has improved, and more clearly states what is conveyed by the experiments. I still feel that it would be better to run a full comparison to the MetaWorld baselines (and while the authors are right that their approach is on par with 20% test performance on ML10, where every method does relatively poorly, 33% is comparable to asymptotic performance for ML1-Push), but I understand that it may be infeasible to do so. I will update my score accordingly.\n\n\n\nReferences\n-------\n\nSarafian et al., 2021: http://proceedings.mlr.press/v139/sarafian21a.html\n\nSchaul et al., 2022: https://arxiv.org/abs/2206.00730\n\nWang et al., 2016: https://arxiv.org/abs/1611.05763\n",
" We thank all reviewers for their time and interest in reviewing this paper, as well as for the helpful comments that help us strengthening our submission. Along with detailed responses below, we uploaded an updated version addressing issues raised by reviewers and are actively working on a definitive submission proposal in line with all comments. \n\n - Specifically, **we focused on improving definitions and motivations of the computational mechanisms supporting MetODS learning** as we identified this comment to be shared by reviewer dG2z, aCER and 7zkD:\n\t- We improved model introduction in section 3 to better motivate the exploration of fast weight for meta-RL with respect to previous theoretical discussion (section 2) on policy transport. (aCER)\n\t- We detailed better the computational principles on which MetODS is based (local tuning, recursive updates, read-write mechanism…) emphasising their originality and grounding in neuroscience. (dG2z, 7zkD). We additionally reworked figure 1 to better serve our model introduction.\n\t- We justified the importance of each computational component by running an ablation study in the maze experiment (section 5.2) as well as proposing experimental variations of the writing mechanism. (aCER) \n - Additionally, we noted that some **clarifications were needed regarding baselines settings** and we proposed to add a detailed description in S.I. (aCER, 7zkD)\n\nLastly, we want to re-emphasize that one major contribution of our work is, in our opinion, to demonstrate that self-contained learning program can emerge spontaneously from sheer optimization of the right class of synaptic control models (here based on neuroscience principles) and can depart strongly from classical gradient-based algorithms. As machine learning is evolving more and more towards automatic discovery of computational components versus engineered ones, we believe that this works is a natural step in this direction and can inspire researchers towards automatic discovery of new learning algorithms. ",
" \"The authors do not necessarily have to run every experiment suggested above (which might be infeasible) for me to consider the paper publication worthy. However, the paper would need to would need to compare to stronger baselines (including the original MetaWorld results, if it is computationally infeasible to train their own strong baselines), would need to more clearly describe the experiments they performed, and would need to outline the limitations of the experiments and approach in more detail.\"\n\n> **We take good note of this synthesis and present along with this reply, a novel version of our submission. Specifically, we sum up the main modifications in relation to your comments:**\n\n**1 ) We augmented our section 5 (Experiments) with a more comprehensive description of our experiments as well as additional results on MetODS:**\n\n- We add a more thorough description of the baselines used in all experiments used to compare with our model.\n- We add full results reported in [1] for the Meta-World experiment for comparison with our restrained training budget.\n\n**2 ) We better motivated and clarified our synaptic meta-learned rule in section 3 (MetODS)**\n\n- We reworked our model introduction by better motivating the components of our weight update rule in relation with experimental results and ablations.\n- We integrated to the maze experiment a suggested ablation study as well as a discussion on variations of the update rule of MetODS. \n- We also add additional results for baselines, notably augmenting MAML with element-wise synaptic tuning parameters and on the policy transport perspective in the Harlow experiment.**\n\n\n **3 ) To accommodate for these changes and better connect our discussion with previous litterature, we reworked our theoretical discussion on the policy transport perspective in section 2 (Background)**\n\nWe connected our theoretical perspective with previous concepts of Reinforcement Learning (namely, cumulative regret, RL as bayesian task inference…) in section 2 and mention more recent work on fast weights for Meta-RL such as [5].\n\n**We thank you again for the many insightful comments as we think they contribute to strenghten overall our submission. As you seem supportive of this work, we hope that such changes could help you reconsider your grade and are happy to discuss further with you these different points.** \n\n[1] Garage: A toolkit for reproducible reinforcement learning research, 2019, The garage contributors\n\n[2] Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks\nChelsea Finn, Pieter Abbeel, Sergey Levine Proceedings of the 34th International Conference on Machine Learning\n\n[3] RL2: Fast Reinforcement Learning via Slow Reinforcement Learning Yan Duan, John Schulman, Xi Chen, Peter L. Bartlett, Ilya Sutskever, Pieter Abbeel\n\n[4] The Phenomenon of Policy Churn, Tom Schaul, André Barreto, John Quan, Georg Ostrovski, Preprint\n\n[5] Recomposing the Reinforcement Learning Building Blocks with Hypernetworks Elad Sarafian Shai Keynan Sarit Kraus \n",
" ## 3 - Theoretical introduction and connection to previous literature \n\n\"I found the connection to the policy transport perspective somewhat unsatisfying. A substantial portion of the paper (~1.5 pages) is spent presenting this perspective, but it seems like the efficiency, capacity, and generality can be (and have been) previously defined without adopting this perspective. I believe that the policy transport perspective could be removed or substantially reduced; indeed, I think that this would perhaps make the paper clearer by focusing more directly on the algorithmic contributions.\"\n\n> **Although we considered this a necessary discussion to introduce the different aspects tested for our meta-RL adaptation mechanism, we retrospectively agree with your remark and worked on condensing this expended interpretation, in order to leave room for an improved model and experiments description as well as ablations mentioned above.**\n\n\"The three concepts defined have prior precedents in the literature; for example the notion of cumulative regret has a long history in RL, has often been used in meta-RL (e.g. Wang et al., 2016), and is effectively a measure of efficiency. The paper should connect these concepts to the relevant prior ideas.\"\n\n> **Agreed. In accordance with your previous point, we propose to ground better our description of the considered aspects of meta-RL (efficiency, capacity and generality) in previous literature by condensing the policy transport section.**\n\n\"If the authors include the policy perspective in the paper, it would be ideal to compare their approach to the baselines with respect to policy transport. For example, if they visualized policy transport for other approaches (e.g. MAML) like they do for their own approach in Fig. 2C, and then quantified the advantage of their approach in terms of policy transport (e.g. showing that their method takes a more direct path in policy space than the baselines), that would be much more compelling.\"\n\n> **Indeed, we agree that this comparison would be beneficial to connect our perspective on policy transport with the different particularities of baselines and give a qualitative comparison of the benefit of the update rule. We are currently trying to align the policy space visualisation with respect to the different adaptation mechanisms at the moment and we will eventually try to add a full comparison with MAML in weight space as well as RL^2 in activation space in section 5.1**\n\n\"As one additional note of interest, the policy transport perspective reminded me a bit of a recent paper I saw on how fast policies update with a gradient update to the weights (Schaul et al., 2022); it seems potentially relevant to thinking about the policy transport perspective, although it is not primarily focused on optimal changes in the policy so much as noisy ones (to my understanding; I’ve not actually read Schaul et al. yet).\"\n\n> **We postulate that you refer to [2]. Thank you for raising this recent reference. Indeed, quantifying the effect of rapid weights updates on a neural network policy changes is crucial for RL at large and we believe that our original update rule can bring additional insight on the sensibility of a neural network policy to its weight state. For instance, in the Harlow one-shot learning experiment, we interpret weight updates as a drastic change in Hopfield energy of the fast weights, which in turn alter the network response to future stimuli. 
We plan to explore this perspective further in future work, as well as exploring the notion of stochasticity in fast weight updates as a substrate for curiosity during policy adaptation.**\n\n\"The discussion of the prior literature seemed a little skewed to me in places; while it is true that few prior approaches have pursued fast weights “as a function of the current synaptic state or external reward signals” per se, there has been work on using fast weights in Meta-RL (e.g. Sarafian et al., 2021) that probably should be discussed as relevant background.\"\n\n> **Thank you for sharing this very relevant work, of which we must admit that we were not aware. Hypernetworks for Meta- or Multi-task RL are an interesting proposal for learning a non-linear mapping from context to weight parametrisation that can improve over vanilla gradient updates, and we will definitely include this line of work in our discussion.**",
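As a small aside on the Hopfield-energy reading of fast-weight updates mentioned in the reply above, here is a sketch under the standard zero-bias definition (the paper's exact energy is not specified here); a single Hebbian write strictly lowers the energy of the written pattern:

```python
import numpy as np

def hopfield_energy(W, v):
    """Classical zero-bias Hopfield energy of pattern v under weights W."""
    return -0.5 * v @ W @ v

v = np.sign(np.random.randn(16))        # a binary pattern
W = np.zeros((16, 16))
e_before = hopfield_energy(W, v)        # 0.0 for empty weights
W += np.outer(v, v) / len(v)            # Hebbian write of the pattern
e_after = hopfield_energy(W, v)         # strictly lower for the stored pattern
```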
" ## 2 - Presentation of model, ablation and variations of the update rule\n\n\"MetODS consists of a complicated set of changes to the architecture; some ablation experiments should be performed to demonstrate the contributions of different changes. A few such changes are tested for Harlow (S=1 and no element-wise weights), but it would be ideal to see these on a few other tasks, as well as conditions like the following:\"\n\n> **We are not sure about the architecture that you are referring to in the first sentence of this paragraph. Could you help us clarify this point? Nevertheless, the different points below are well taken and converge with other reviewers comments. Hence, we retrospectively agree that we can better clarify the model definition, specifically stating more clearly the originality of the model and the role of its different components, which we propose to do in our revised version.**\n\n\"It wasn’t entirely clear whether beta and kappa are element-wise parameters or scalar; assuming the latter, switching to the scalar version would be a useful ablation.\"\n\n> **Beta and Kappa are scalar parameters and not matrices, but they are differentially tuned with respect to the iteration of the recursive read-write operations. (i.e K^(l)_s refers, at iteration s, to the contribution of the pattern in the previous iteration l.) We will add clarifications to the model description in section 3 line 160.) These parameters could be compared to temporary interacting neuro-chemical constants driving changes in signal transduction in biological synapses. In this sense, they were thought to be shared at the neuron population level. But indeed, there could be interest in defining synapse-wise parametrisation of such constants, at the expense of a substantial parameter increase in O(N^2) with N the numbers of neurons in the plastic layer.**\n\nWhat if the authors kept all aspects of MetODS but switched from the Hebbian update rule to an alternative rule. Ideally this would include something like a hypernetwork (see below), but if that is too computationally expensive there are a variety of changes that could be explored, e.g. instead of the updating based on the outer product of v with itself, what if learned linear projection weights from v to two new vectors q and p and then used a (weighted) outer product of q and p? This may not be the best experiment; the point is just that more exploration of which aspects of the update rule are important (e.g. whether the Hebbian update is somehow intrinsically useful) would help future researchers understand what to explore.\n\n> **This is a very interesting perspective for developing Meta-RL programs as we believe in the general idea of learning parametric learning rule, be they Hebbian or driven by another neural networks. MetODS is a demonstration of the former and has the advantage of a biological grounding and lean parametrisation, but we agree that the advantage of this framework is also the flexibility in the definition of the read-write functions that calls for more exploration.**\n\n> **In this sense, we actually tested several variations for the maze experiment, namely the outer-product linear key-query projections that you are hinting at and will gladly incorporate these results to better build insights on the inner working of the recursive scheme. 
We also find the hypernetworks idea interesting as we think that non-linearity could potentially update weights in a more complex way, although defining non-linear meta-networks did not yield improvements in our exploration and changed the nature of the update that we explore in this work.**\n\n\"Adding features to the baselines could be useful too in determining where the real benefits are; e.g. if the authors used MAML with S (smaller) updates per timestep rather than 1, how does it compare? What if they augmented MAML with parameter-specific meta-learned update weighting and smoothing constants alpha and beta? Etc.\"\n\n > **We agree that expanding the baseline experiments could help better assess the benefits of this particular update rule; however, we note that proposing S smaller updates of MAML per time-step is not a suitable experiment since the gradient estimate is a function of the current weight state: either it would require sampling the environment again to produce a new gradient estimate, which boils down to the single-update scheme, or it would require using the previous gradient estimate several times, which also amounts to the original single update. Instead, we propose to integrate the experiment on adding synapse-wise tuning of plasticity in MAML and note that, contrary to MetODS, this supplementary parametrization is detrimental to performance. Additionally, we note that MAML performance in this experiment is nowhere near that obtained through continual update mechanisms (MetODS and RL^2).**\n",
" **Thank you for the very thorough review that you conducted on our work. We appreciate your interest and your supportive comment despite the grade and would like to engage discussion on a few points that you raised. Please see below, we refer to your review augmented with our reply.**\n\n## 1- Unfair comparison to baselines\n\n>**We answer with additional clarifications for comment 1 regarding unfair training budget and we correct the misunderstanding regarding comment 2 about restriction of baselines.** \n\n\"[...] The authors limit to 10M steps for the MetaWorld ML1 and ML10 environments; but the original MetaWorld paper achieved much higher performance with RL^2 and MAML after much longer training (300M steps). [...]\"\n\n> **Indeed, for ML1 and ML10 in the MetaWorld experiment, we are testing all of our models on a limited but fair-to-all computation budget of 10M steps, using the official benchmark pipeline and parameters provided by the _Garage_ library [1]. The reason we set this particular experiment to 10M steps is because it represents an already large amount of compute and time (each run for any method takes approximately a week for reaching 10M steps despite parallelisation on a 10 cpu cluster) as well as a large amount of interaction with the environnement (20K episodes and ~1000 PPO iterations). While we agree that baselines have not converged to the final 300M steps performance reported in the Meta-World paper benchmark, nor has METODS, as we still witness increases in test performance at 10M steps. We add that success rate curves can be noisy (crossing curves) but test performance remains above what is reported for RL2 and MAML on ML10. To convince you more, we ran our model for an extra 5M steps on ML1-push and ML10 and found that MetODS is still ahead in terms of test performance (Push at 0.33% and ML-10 at 0.19% at 15M steps which is already the asymptotic level of test performance for MAML and RL2 at 300M steps).** \n \n> **In all other experiments, notably the Mujoco robot control and the maze navigation experiments), we trained models to full convergence (at 1e7 timesteps) and in these cases, MetODS consistently overperforms baselines early-on in training and achieves overall better performance. Hence, we considered that this budget, given computation constraints, was sufficient to demonstrate the potential of our synaptic reinforcement learning rule.**\n\n>**We agree with your commentary regarding the necessity of mentionning the official benchmark of MetaWorld and added these final baseline results to our experimental section for comparison purpose.**\n\n\"[...] The authors limit their approach to a single layer of adaptable weights [...] If the baselines are restricted compared to the original work this limitation would be a major caveat that should be stated much more explicitly in the text. [...]\"\n\n> **Regarding the baselines, we strictly follow the original implementation of authors and do not restrict their adaptation in any way. For MetaWorld, we run the official baselines from Garage with the exact provided setting and note that baselines performance closely follows reported performance metrics on this budget. For the Maze, Mujoco and Harlow experiments, we strictly follow [2] for MAML for model architecture with a three layer perceptron of 100 hidden units with ReLU non-linearities and use gradient descent over the whole network. 
We tried to tune the inner learning-rate in the Maze experiment with not much difference in overall performance (reported lr=0.05). For RL2 [3], we used a GRU cell with 100 hidden units and tanh non-linearity that has the same number of parameters as MetODS, hence matching model complexity. In both cases, performance closely follows metrics already presented in other works. We regret this misunderstanding and we clarify this by adding a specific section describing the baselines in the main text.** \n\n\"[...] Unless I’m misunderstanding both the above points, I find statements like “producing better agents than previous meta-RL approaches” [...] to be somewhat misleading.[...]\"\n\n> **While we reaffirm that we tried to give the fairest comparison possible to our model in every proposed experiment, with proven meta-RL frameworks unmodified from their original descriptions, we agree that our goal is not to propose a definitive model for Meta-RL, but rather to inspire researchers with original computational principles such as the presented meta-learned Hebbian plasticity and recursive updates.**\n\n>**To your point, we soften these claims in the second version to better emphasise our explanatory perspective instead. Again, all experiments apart from MetaWorld are trained to full convergence and we preferred running multiple experiments over diverse domains as we believe that the diversity of tasks in which the synaptic rule performed well is a stronger demonstration of the potential of exploring recursive Hebbian updates for meta-reinforcement learning.**\n\n[1/n]",
" **Thank you for your interest in this work as well as for your questions that notably helped us clarify our model presentation (section 3 - MetODS). Indeed, it seems that your main call, converging with other reviewers comments, is for gaining further insight into the original computational mechanisms explored by MetODS. Hence, we would like to clarify at the algorithmic level and justify at the theoretical level by:**\n - **Answering more specifically your questions below.**\n - **Proposing an updated version of our work where we explain more in depth the motivation and mechanisms for MetODS (notably in section 3).**\n\n[...] What is the critical difference between self-modifying networks and classical neural networks? [...]\n\n> **We agree with you on the fact that a “RL model using traditional neural networks optimized with SGD also update its weights through interactions with the environment, based on its current weights”, however, we believe that the learning rule presented in MetODS radically differs from gradient-based learning:** \n\n > - **In SGD, the analytical expression of the update rule is “rigid” in the sense that it depends on the weight state solely through the error signal coming from a predefined loss function (More specifically, through the chain rule, it will be an affine function of the error with respect to activations $\\Delta(W) = \\frac{\\partial a}{\\partial W}.\\frac{\\partial \\mathcal{L}}{\\partial a}$). Here, we take a different approach, by considering a local update rule that depends on the current weight state through the non-linear recursive read-write scheme. This allows to make the expression of the update rule a potentially much more sensitive function of the weight content itself. This self-reflexive property of MetODS update rule is furthermore strengthened by a synaptic-wise parametrisation, which allows synapses to be differentially sensitive to this rule, which is not the case in gradient descent. We show in additional results of our revised version that this synaptic-wise parametrisation is a crucial component of our found learning rule and that is interestingly, does not benefit gradient based Meta-RL methods such as MAML. (section 5 - Maze experiments)**\n\n> - **Moreover, while being principled in terms of convergence, SGD is not biologically plausible and separate inference from learning. Our adaptation is on the other hand, continuous with respect to the agent interactions with environment, which allows the agent to track closely state transitions structures, and articulate temporal strategies, similar to memory-based models such as RL2. This is one key feature that we believe crucial for performance in tasks were sequential planning matters such as in the maze experiment.** \n\n\"What neural network structures are used for baseline algorithms such as MAML and RL2?\"\n\n> **This is an important question also converging with other reviewers comments. We will dedicate a section to more clearly describe the baseline implementations in the updated submission: In all experiments, we followed original implementation: MAML is based on 3-layer deep fully-connected neural network with ReLU non linearities consistent with original implementation, while RL2 consist in the implementation of GRU cell. Particularly for the MetaWorld experiment, we use the official implementations from the _Garage_ library in order to ensure consistency with previously reported results in the literature.**\n\nSmall typos:\n\n> **Thanks for raising these typos. 
We correct them in the updated submission version. Regarding figure 1, we will adjust it to better reflect our explanation from the first question.** \n\n\"[...] I may misunderstand something here, but in equation 5, it seems that computing v(s) and W(s) requires whole trajectories of v(l) and W(l) to be stored, which leads to a large memory requirement.[...]\"\n\n> **At each time-step we apply the recursive scheme S times, starting from [v^(0), W_t] and gathering intermediary versions [v^(l), W^(l)]. After S iterations, we simply discard the intermediary tensors as they are no longer needed and store W^(S) as W_{t+1}. Hence, the memory cost of such a procedure is not that important as we only have to keep at most S matrices in memory: 1) S does not need to be very large (we set S=4 in our experiments), 2) the memory cost does not grow as a function of the episode length, and 3) we are dealing with a very lightweight model here as we only have a single hidden layer. Hence, our model's memory requirement remains largely manageable even with mini-batching. Additionally, note that the adjoint sensitivity method has at least the same memory cost with respect to dynamic variables in the backward pass as in the forward pass (as it integrates the same dynamics reversed in time), so there is no difference in this regard.** \n",
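To make the bounded-memory argument concrete, below is a loose sketch of an S-step read-write loop. It is not the paper's exact parametrisation (MetODS additionally mixes all intermediate patterns with the meta-learned per-iteration scalars kappa and beta, and the non-linearity need not be tanh); it only shows why the footprint stays independent of episode length:

```python
import numpy as np

def recursive_read_write(W, v0, beta=0.1, S=4):
    v = v0
    for _ in range(S):
        v = np.tanh(W @ v)               # "read": non-linear response to the pattern
        W = W + beta * np.outer(v, v)    # "write": local Hebbian outer product
    return W, v                          # only the final W is kept as W_{t+1}

N = 8
W_next, pattern = recursive_read_write(0.1 * np.random.randn(N, N), np.random.randn(N))
```

Intermediates are overwritten on each iteration, so at most a handful of tensors are ever held in memory, matching the argument in the reply above.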
" **Thank you for this positive review! This is sincerely motivating and we are glad that our work convinced you. We answer more specifically your comments below:**\n\n\"The paper is well motivated by biological and existing artificial neural methods. It clearly presents a novel meta RL approach and compares its performances to Meta RL baselines (namely RL2 and MAML). While the claim on efficiency and generality seems to me well-supported by the experimental evaluation, I have difficulties to understand how the evaluation on the randomly generated grid world helps to evaluate the capacity of the learner. Are different generation patters used between training and evaluation? Otherwise, I think that the authors could sometimes add high level motivational details on the design choices of the algorithm (notably on the multi-step scheme).\"\n\n > **Here we refer to capacity of a learning algorithm as the level of adaptation that an agent can achieve when exposed to a given task. In the maze experiment, we test our meta-learners on a batch of 1000 original generated mazes that the learners have not seen during training. Since the mazes are all different with a much higher algorithmic complexity than Harlow or motor control experiments, it directly tests the learner ability to tune its policy to the precise structure of these instances (here the maze configurations). We believe that this a good test of a learner capacity to adapt its policy because it requires to dynamically articulate diverse pieces of experience into a coherent and efficient policy that match the precise maze configuration (for instance “after encountering such corner, that follows such corridor, I know that it is where the target is…” etc)**\n\n\"[...] Otherwise, I think that the authors could sometimes add high level motivational details on the design choices of the algorithm (notably on the multi-step scheme).\nQuestion: What is the (high level) intuition behind the \"read\" and \"write\" operation of the update rule ?\"\n\n> **We duly note this comment and worked on expanding the explanatory section of the model in a second version of the submission that we push along with this reply. The read and write operations are motivated by biological synaptic computation: Writing consists in an outer product that emulates a local Hebbian rule between neurons, while reading correspond to the non-linear response of the neuron population to a specific activation pattern. 
(\\sigma(Av) = v’).** \n\n> **While a single iteration of these two operations can only add external information into weights (for writing) and retrieve a similar pattern (as reading consist in a Hopfield update), augmenting the system with recursive iterations offers a much more potent computational mechanism to filter external information with respect to the current weight state: The final activation pattern now becomes a non-linear mix of the incoming impulse v(0) and previous stored patterns which are presumably relevant to inform the agent policy, while for writing, it can also update or reinforce previous belief stored in the weights.**\n\n>**Additionally, differentially tuning the influence of previous activation patterns in the recursion through parameters kappas and betas allows to potentially emulates complex cascades of temporal modulation mechanisms found in biological synapses.**\n\n\"writing suggestion/typos: l 99 and l 296: broken citation l 139: Synapses (with capital S) l 174: s instead of S l 290: \"to\" is repeated\"\n\n> **Thanks for the raising typos, of course they will corrected in the updated submission version. However, note that we refer indeed to capital S in line 174.**\n\n\"I haven't seen any discussion on potential negative societal impact. I, however, cannot really find non-generic remarks on this topic for this work.\"\n\n> **Indeed, at this point, we thought that our work was too exploratory to be directly exposed to negative societal impact. However, we generally believe that the potential of meta RL to automatically reveal new learning strategies must come with precaution regarding the risks to introduce new uncontrolled biases in machine learning.**",
" The authors propose MetODS: Meta Optimized Dynamical Synapses, a meta-RL algorithm that learns by updating its weights through interaction with the environment and its own current weight state. The update rule is recursively applied and allow the algorithm to learn relations between stored patterns and incoming information. MetODS notably uses an element-wise weighting, that allows for different plasticity amplitudes at every connection. The authors then experimentally compare their algorithm to RL2 and MAML and analyze the performances of their algorithms against these baselines in terms of efficiency (One-shot learning and rapid motor control), capacity (achievable level of performance for a distribution of task) and generality (how well the policy transfers to tasks unseen during training). The paper is well motivated by biological and existing artificial neural methods. It clearly presents a novel meta RL approach and compares its performances to Meta RL baselines (namely RL2 and MAML). While the claim on efficiency and generality seems to me well-supported by the experimental evaluation, I have difficulties to understand how the evaluation on the randomly generated grid world helps to evaluate the capacity of the learner. Are different generation patters used between training and evaluation? \nOtherwise, I think that the authors could sometimes add high level motivational details on the design choices of the algorithm (notably on the multi-step scheme). What is the (high level) intuition behind the \"read\" and \"write\" operation of the update rule ?\n\nwriting suggestion/typos:\nl 99 and l 296: broken citation\nl 139: Synapses (with capital S)\nl 174: s instead of S \nl 290: \"to\" is repeated I haven't seen any discussion on potential negative societal impact. I, however, cannot really find non-generic remarks on this topic for this work. ",
" Post-response update\n----------------\n\nThe authors have improved the paper and addressed some of my concerns; while I still find the assessment to be problematic in some areas, I think the present results, together with the interesting idea, make for a paper that could be a useful contribution to the field. I have updated my score accordingly.\n\nOriginal review\n--------------\n\nThis work presents an interesting and fairly novel perspective on meta-RL: having an agent adapt to a task by modifying its weights through an iterated Hebbian update process. The paper demonstrates that this approach outperforms vanilla baselines (RL^2, MAML, and sometimes PEARL) across a variety of settings that stress different aspects of adaptation. This is overall an intriguing approach that I hope to see published either here or in the future.\n \nStrengths:\nThe idea is interesting—it’s nice to see a fresh perspective on adaptation drawing on dynamical systems perspectives from neuroscience.\nThe breadth of demonstrations is fairly compelling; it’s great that the authors articulate different aspects of the meta-RL problem and test each of them.\nThe results on impared robots and with continual adaptation (in the appendix) are particularly intriguing, because they suggest an increased robustness inherent in this approach; however, this difference would be more compelling if the comparison algorithms were trained to matching train performance (see below). \n\n\nWeaknesses (in order of importance):\n* Some of the evaluation choices seem like they could unfairly handicap the baselines; therefore I am not sure how much to trust the overall conclusions of the paper. \n - The authors limit to 10M steps for the MetaWorld ML1 and ML10 environments; but the original MetaWorld paper achieved much higher performance with RL^2 and MAML after much longer training (300M steps). So the evaluation seems to artificially handicap the baselines, by training for much less time than the baseline approaches require. Indeed, in ML1-reach and ML10 the curves for one of the baselines appear to be passing the proposed MetODS approach just where the plot ends. Would the baselines perform better than MetODS if the experiments were run for longer?\n -The authors limit their approach to a single layer of adaptable weights—did they make a similar limitation for MAML? I wasn’t entirely clear from either the main text or the supplement. If the baselines are restricted compared to the original work this limitation would be a major caveat that should be stated much more explicitly in the text. Algorithms like MAML are generally developed and tested with deep networks; it would not be appropriate to test them with a different architecture without doing full hyperparameter tuning; and even with hyperparameter tuning the conclusion should come with caveats if the authors are artificially limiting the baseline approaches. It’s possible that the authors are not doing this; I am happy to be corrected if so, but in either case I would suggest they be more explicit about precisely what the baseline approaches were in the paper when they revise it.\n - Unless I’m misunderstanding both the above points, I find statements like “producing better agents than previous meta-RL approaches” or “compares favorably with prior meta-RL algorithms” to be somewhat misleading. Ideally the baseline experiments should be run using architectures comparable to the original papers, for as long as the original paper in at least a subset of the domains. 
At the very least the plots should show e.g. the max performance figures achieved in the original MetaWorld paper as a baseline comparison in each case (e.g. a star or dashed line with label like “Fully trained MAML” etc.), and the statements about comparisons (e.g. in intro and conclusions) should qualify that MetODS performs better “[for adapting a single layer of weights] at the beginning of training.”\n* MetODS consists of a complicated set of changes to the architecture; some ablation experiments should be performed to demonstrate the contributions of different changes. A few such changes are tested for Harlow (S=1 and no element-wise weights), but it would be ideal to see these on a few other tasks, as well as conditions like the following: \n - It wasn’t entirely clear whether beta and kappa are element-wise parameters or scalar; assuming the latter, switching to the scalar version would be a useful ablation. \n - What if the authors kept all aspects of MetODS but switched from the Hebbian update rule to an alternative rule. Ideally this would include something like a hypernetwork (see below), but if that is too computationally expensive there are a variety of changes that could be explored, e.g. instead of updating based on the outer product of v with itself, what if they learned linear projection weights from v to two new vectors q and p and then used a (weighted) outer product of q and p? This may not be the best experiment; the point is just that more exploration of which aspects of the update rule are important (e.g. whether the Hebbian update is somehow intrinsically useful) would help future researchers understand what to explore.\n - Adding features to the baselines could be useful too in determining where the real benefits are; e.g. if the authors used MAML with S (smaller) updates per timestep rather than 1, how does it compare? What if they augmented MAML with parameter-specific meta-learned update weighting and smoothing constants alpha and beta? Etc.\n* I found the connection to the policy transport perspective somewhat unsatisfying. A substantial portion of the paper (~1.5 pages) is spent presenting this perspective, but it seems like the efficiency, capacity, and generality can be (and have been) previously defined without adopting this perspective.\n - I believe that the policy transport perspective could be removed or substantially reduced; indeed, I think that this would perhaps make the paper clearer by focusing more directly on the algorithmic contributions.\n - The three concepts defined have prior precedents in the literature; for example the notion of cumulative regret has a long history in RL, has often been used in meta-RL (e.g. Wang et al., 2016), and is effectively a measure of efficiency. The paper should connect these concepts to the relevant prior ideas.\n - If the authors include the policy perspective in the paper, it would be ideal to *compare* their approach to the baselines with respect to policy transport. For example, if they visualized policy transport for other approaches (e.g. MAML) like they do for their own approach in Fig. 2C, and then *quantified* the advantage of their approach in terms of policy transport (e.g. 
showing that their method takes a more direct path in policy space than the baselines), that would be much more compelling.\n - As one additional note of interest, the policy transport perspective reminded me a bit of a recent paper I saw on how fast policies update with a gradient update to the weights (Schaul et al., 2022); it seems potentially relevant to thinking about the policy transport perspective, although it is not primarily focused on optimal changes in the policy so much as noisy ones (to my understanding; I’ve not actually read Schaul et al. yet).\n* The discussion of the prior literature seemed a little skewed to me in places; while it is true that few prior approaches have pursued fast weights “as a function of the current synaptic state or external reward signals” per se, there has been work on using fast weights in Meta-RL (e.g. Sarafian et al., 2021) that probably should be discussed as relevant background.\n\nThe authors do not necessarily have to run every experiment suggested above (which might be infeasible) for me to consider the paper publication-worthy. However, the paper would need to compare to stronger baselines (including the original MetaWorld results, if it is computationally infeasible to train their own strong baselines), would need to more clearly describe the experiments they performed, and would need to outline the limitations of the experiments and approach in more detail.\n See above. Briefly:\n* Which aspects of each experiment might be unfair to the baselines? Would the baselines outperform the proposed algorithm if the experiments were run for longer?\n* Which aspects of the approach are necessary (justified with ablation experiments)?\n* Can the authors justify the policy transport perspective further, e.g. by comparing to the baselines with this perspective?\n\n See above. Briefly:\n* Unless I am misunderstanding something, the experiments are a limitation because the baselines are handicapped (e.g. by far less training).\n* The conclusions we can draw from the paper are limited by the lack of ablation experiments, to identify which aspects of the approach are essential. \n* The authors mention briefly that they leave extending their plasticity rule to multiple layers to future work. Of course, Hebbian rules don't tend to work well for training deep networks, and this point might deserve slightly more emphasis (e.g. a brief sentence in the discussion to highlight the challenge). ",
" This paper presents a new neural network that is able to modify its own weights, known as the self-modifying network.\nBy incorporating self-modifying networks into the meta reinforcement learning (RL) framework, the new model Meta-Optimized Dynamical Synapses (MetODS) shows better empirical performance in terms of efficiency, capacity, and generality, compared to several meta RL baseline algorithms. This work is well motivated. The proposed algorithm MetODS shows significantly better performance across different kinds of tasks, from maze navigation to motor control.\n\nNevertheless, It is hard for me to understand the reasons behind the success of self-modifying networks and MetODS.\nIn Section 3, it is claimed that \"Our model learns to train itself by updating its weights through interaction with the environment and its own current weight state. This mechanism enables MetODS to rapidly compress experience of a task $\\tau$ into a particular synaptic configuration ......\" Arguably, an RL model using traditional neural networks optimized with SGD also update its weights through interactions with the environment, based on its current weights. What is the critical difference between self-modifying networks and classical neural networks? More explanations and analytical experiments are needed to show the advantage of self-modifying networks.\n\nSmall typos:\n- Line 48, conjonction --> conjunction.\n- Line 11 in Algorithm 1.\n- Missing one citation in Line 296. \n- Missing x & y labels in Figure 6.\n\nFinally, I do not find Figure 1 very helpful. What neural network structures are used for baseline algorithms such as MAML and RL2? I may misunderstand something here, but in equation 5, it seems that computing $v^{(s)}$ and $W^{(s)}$ requires whole trajectories of $v^{(l)}$ and $W^{(l)}$ to be stored, which leads to a large memory requirement.\nThe discrete adjoint sensitivity method only helps with the backward pass but not the forward pass (i.e. computing $v^{(s)}$ and $W^{(s)}$). How about the forward pass?"
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
2
] | [
"nips_2022_cYeYzaP-5AF",
"ZWgkiaQcwQ",
"GjVHFlvQgv_",
"Ed9J8gzNQFS",
"nips_2022_cYeYzaP-5AF",
"1tqcFQrkQjE",
"0JtnoR4Ukz5",
"F7XY0KWVDoB",
"3KVlDNW2XWv",
"J-yjRplRa8G",
"-c2alWfalf",
"nips_2022_cYeYzaP-5AF",
"nips_2022_cYeYzaP-5AF",
"nips_2022_cYeYzaP-5AF"
] |
nips_2022_5K3uopkizS | Robust Models are less Over-Confident | Despite the success of convolutional neural networks (CNNs) in many academic benchmarks for computer vision tasks, their application in the real world is still facing fundamental challenges. One of these open problems is the inherent lack of robustness, unveiled by the striking effectiveness of adversarial attacks. Current attack methods are able to manipulate the network's prediction by adding specific but small amounts of noise to the input. In turn, adversarial training (AT) aims to achieve robustness against such attacks and ideally a better model generalization ability by including adversarial samples in the training set. However, an in-depth analysis of the resulting robust models beyond adversarial robustness is still pending. In this paper, we empirically analyze a variety of adversarially trained models that achieve high robust accuracies when facing state-of-the-art attacks and we show that AT has an interesting side-effect: it leads to models that are significantly less overconfident with their decisions, even on clean data, than non-robust models. Further, our analysis of robust models shows that not only AT but also the model's building blocks (like activation functions and pooling) have a strong influence on the models' prediction confidences. Data & Project website: https://github.com/GeJulia/robustness_confidences_evaluation | Accept | This paper empirically demonstrates that adversarially trained models are better calibrated than naturally trained counterparts. The reviewers found this paper interesting, and initial concerns were mainly about 1) missing discussions of prior works, and 2) requiring more ablations.
The rebuttal addresses most concerns well (especially regarding the novelty w.r.t. prior works). As a result, three (out of four) reviewers agree to accept this submission. The reviewer itb7 is the only one against accepting this paper; nonetheless, the original review from itb7 is rather vague and does not provide useful information to help the authors prepare a high-quality rebuttal. Also, as the AC, I cannot see any significant concerns/drawbacks raised in reviewer itb7's comments, and therefore decided to disregard it.
In the final version, the authors should include all the clarifications and the additional empirical results provided in the rebuttal.
| train | [
"VeZeWmwHqOC",
"SMO3-gOlQ7",
"dIR5wPW5SJ0",
"BxRybqJEJI",
"KHF63cv0ej3",
"i5aysvd7LR",
"VTIiNodMGN5",
"WNmQ1Rt1jXT",
"ros6J3vHhHb",
"kKx5aGs48SA",
"m7K7mEp-cQe",
"te85nc0R5M4",
"We6q89kvsWt",
"vfti1vnaRVx",
"PYahQ2KkK_Y",
"UQQ-egjPTIt",
"olr7zP9Lv2FQ",
"hjlbEh-KU6",
"p79-zaN84oN",
"FDSAQQqQCgA",
"nq_2pyo_YgA",
"r-tumqwhndq",
"N9YTfafZdb",
"70SIueqjUL",
"ousUEyFjbDM"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We would also like to point out our following comments regarding related work:\n\nhttps://openreview.net/forum?id=5K3uopkizS¬eId=nq_2pyo_YgA\n\nhttps://openreview.net/forum?id=5K3uopkizS¬eId=We6q89kvsWt\n\nhttps://openreview.net/forum?id=5K3uopkizS¬eId=p79-zaN84oN\n\nPlease also pay attention that the other reviewers were unable to provide further references",
" Thank you for the response. Two reviewers changed their score: pPVH, 92ac. Therefore, it is fair to say that reviewer*s* changed their score*s*. Either way, we would like to spend the limited remaining time focusing on our paper and not rhetorics. \n\nYou now have stated numerous times that you are having issues with \"novelty/contribution\" - Please provide references that back this statement and show a lack of novelty.",
" Please, don't get the comment wrong. I do not just follow the trend. I explicitly stated that the novelty/contribution is not enough. I said I also found similar concerns on this point from other reviewers. And just curious, I see that only one reviewer raised the score, but others kept the original score. What makes you say \"other reviewer**s** have raised their score\"?",
" Dear reviewer,\nWe are grateful that you raised this question. We are also not aware of any such study and we think that our paper makes an important contribution in this respect. Since we did not receive any answer from reviewer pPVH regarding this claim on missing novelty, we are depending on your final score. If you agree that our paper should be presented at NeurIPS 22, please consider to increase your rating to weak accept. ",
" Thank you for specifying your final justification. We understand that you want to follow the general trend of your fellow reviewers. In this context, we want to point out that the other reviewers have raised their score over the course of the discussion period and would ask you to do the same. Please also refer to their arguments in favor of our paper.",
" Dear reviewer 92ac, \nthank you for the discussion! To further improve our submission, we would also be very interested in the mentioned reference! Which paper are you referring to, that analyzes the decrease in confidence of adversarially robust models? We would really like to include such work into our analysis - but are not aware of such prior study. If you don't have any references in mind, please consider raising your score to an accept score. \nThank you again for helping us to improve our submission!",
" Dear authors,\n\nThanks for the response and modifications to answer some of the concerns. \n\nTable 3 is a subset of AUCs that I wanted to see (like figure 4). Some of these can be included in the plots with close results. For example, the AUCs can be put in legends or titles of Figure 4 for Square attacks. \n\nThanks for including non-robust results. Overall it seems that on unseen attacks the ROCs are close for robust vs non-robust models (Figure 4). Some of the CIFAR10-C results are mixed (saturate, contrast, brightness).\n\nI read the reviews from the other reviewers. I agree that other priori work hinted at this property of AT, but this paper's extensive experimentation methodology for establishing this fact is interesting and above borderline.\n\nOverall I don't change my score. The work is definitely above borderline, but the results on Squares and CIFAR10-C discourage me from a 7+ rating, given that robust models are marginally better than non-robust ones in detecting the attacks. The results on learnable activations and FLC are interesting, but still not a clear win.\n",
" I was not aware of the \"less overconfident\" results previously. What results are you thinking of when you write that the results in the paper are already known?",
" Thanks for your response and revision. I do have a misunderstanding before about the models. But I still think there is not enough novelty for acceptance. Most results are known empirically, this work just verifies these in a more statistically scientific way, e.g., robust models have lower confidence, and confidence can be used in adversarial examples detection. So I would like to keep a borderline score.",
" The end of the discussion phase is approaching and we still did not receive any feedback on our response. In our rebuttal, we clarified that we dit **not only evaluate off-the-shelf models** but rather provide **71 newly trained models** to allow for an in depth and statistically significant analysis of paired robust and non-robust models. \n\nBased on your review, we understand that the low score was assigned due to an honest misunderstanding with respect to this point. After our clarification, do you have any additional questions or concerns? If not, please consider updating our score accordingly.\n\nPlease don't hesitate to let us know if you have any further questions or remarks!\n",
" Thank you for adjusting your score! Given that you indicated that we have addressed all your concerns: would you consider accepting the paper or could you elaborate what you feel is missing to further increase your score?",
" Thank you for the quick response and for making these changes, my concerns are addressed. ",
" Thank you for your reply! Due to the page limitation in the revision, we have only added the references with a short discussion in the revised paper. Specifically, we have added the mentioned reference [0] in line 78 (as our reference [58]) and briefly discussed [1] in line 136ff as our reference [70]. We fully agree that a more in-depth discussion such as the one provided in our initial reply will be helpful and the final paper template also offers some additional space. We would therefore gladly add the following discussion after line 133 of our paper:\n\n“Yet, only few but notable prior works such as [44,58] have investigated adversarial training with respect to model calibration. Without providing a systematic overview, [44] show that AT can help to smooth the predictive distributions of CNN models. Qin et al. [58] investigate adversarial data points generated using [5] with respect to non-robust models and find that easily attackable data points are badly calibrated while adversarial models have better calibration properties. In contrast, we analyze the robustness of models using paired model samples rather than investigating individual data points. Importantly, our proposed large-scale study allows a more differentiated view onto the relationship between adversarial training and model calibration, as discussed in Section 3. In particular, we find that adversarially trained models are not always better calibrated than vanilla models especially on clean data, while they are consistently less over-confident.”\n\nWe hope that our response addresses all of your concerns. Thank you for your time and feedback on our submission! Please don't hesitate to let us know if you have any further suggestions!\n",
" Thank you for the response. I can't see this discussion of related work in the paper---can you add a discussion?",
" Dear Reviewer, \nWe did not receive any feedback on our response yet.\nPlease let us know whether our rebuttal and revision have addressed your concerns and whether you have any additional questions or concerns we should address.\nThanks!!",
" Dear Reviewer, \nCould we address your concerns in our rebuttal and revision? Please let us know in case we missed anything or in case you have any additional questions that we could address. \nThanks!",
" We would like to thank all reviewers for their reviews and valuable suggestions. To focus your attention on the changes we have uploaded a revised colour-coded (in orange) manuscript.",
" Thank you for your time and effort you put into the review of our paper. We address the points listed under weaknesses and questions one-by-one in the order they appear in the review.\n\nW1: [ROC of clean models in Figure 4] Thank you for the suggestion, we added non-robust models into Figure 4. We could observe that those models indeed fail to recognize PGD samples. They are able to distinguish clean from Squares samples quite well. \n\nW2: [More quantitative metrics] We report the density plots of all models in the appendix in Figures 9 and 10. There one can see that almost all models show similar calibrations except for two models which are described from line 199 to line 204 in the manuscript (202 to 205 in the revised manuscript). The ECE for the different models are reported in the appendix Figure 12 and Figure 13. Due to the amount of models we only reported the values without each specific name of the model. Figure 8 where we show the Precision-Recall Curve for ImageNet, the equivalent ROC curve is reported in the appendix Figure 17 (revised manuscript Figure 22). Further, we report the Precision-Recall curves for CIFAR10 and CIFAR100 in the appendix Figure 14 and Figure 15. We tried to restructure the appendix for better clarity.\nAdditionally in our revised manuscript, we added an evaluation on the improved downsampling and activation by inspecting the ROC curves and AUC values for these models and their comparable models in detail in Figure 20 and Table 3 in the appendix.\n\nW3: [Unseen attacks] Please note that the Squares attack is an unseen attack during training for both robust and non-robust models. To further strengthen our evaluation in this respect, we additionally evaluate CIFAR10-C as a generalization task on the robust model and their non-robust counterparts. CIFAR10-C is a dataset with common corruptions and therefore usually allows to make observations on model behavior in unseen scenarios. We observe a similar trend as in the adversarial samples. Robust models are less over-confident. The full evaluation is now included in our revised manuscript (Section C).\n\nQ1: [ROC AUC for attack detection in Figures 5 and 6] FLC, as well as the learned activation functions, use AT to achieve robustness, thus only swapping the building blocks leads not to an increase in robustness, however, the disentanglement of the confidence score is better calibrated for those two approaches. However, when comparing the AUC for each ROC curve we can see that the improved building blocks lead to higher AUC. We added the full results in the appendix of the revision of our paper (Table 3).\n\nQ2: [“learnable activation blocks and FLC generalize better to unseen attacks compared to AT”, Unseen attacks] \nFrom our results, it can not be concluded that learnable activation blocks or FLC generalize better than AT, because both models are additionally trained with AT. We can only conclude that FLC or learnable activations can have an additional positive impact.\nWe used the black-box attack Squares to evaluate against unseen attacks. Specifically, none of the models has seen Squares samples during training. Further, the FLC pooling is trained with simple FGSM thus the PGD samples are also unseen for this model. However, the model including learned activation functions is trained with PGD and thus has seen PGD samples already during training. 
Squares samples are out-of-domain.\n\nIn summary, we incorporate your suggestions into our revised manuscript as follows:\n- We included the non-robust models in Figure 4. \n- We restructured the appendix for more clarity.\n- To further strengthen our evaluation for unseen domain shifts, we additionally evaluate CIFAR10-C as a generalization task on the robust model and their non-robust counterparts and observe a similar trend as on the adversarial samples. Robust models are less over-confident. The full evaluation is included in our revised manuscript (Section C).\n- We evaluated the ROC curves (Figure 20) specifically for the improved downsampling and activation function and report the AUC values (Table 3) for the models in the appendix (Section E).\n- We fixed the Typo in the caption in Figure 7, according to the last point mentioned in the questions.\n",
" Thank you for your review of our paper. In the following, we will address the points mentioned under weaknesses and questions one-by-one.\n\nW1: [just using off-the-shelf checkpoints] Actually, we do not only provide statistics on off-the-shelf checkpoints (see lines 154ff, Experimental Setup in our paper). We access checkpoints of different adversarially trained models from RobustBench, which we call “robust”. Thus, we understand a model to be robust, if it shows robust accuracy on RobustBench of more than 41.44 % on CIFAR10, 18.95% on CIFAR100 and 25.32 on ImageNet (listed on RobustBench) accuracy. All non-robust models are self-trained and have 0% robust accuracy! To facilitate the presented analysis, we train all 71 architectures with the respective training schemes to high clean validation accuracies as seen for example in Figure 1. Thus, our paper facilitates the first solid analysis of the behavior of robust versus non-robust models by providing direct comparison of 71 models. As stated in line 157, we will publish all trained models upon acceptance to facilitate future research. \n\nW2: [distinction between robust and non-robust models] We understand a model to be robust if it shows robust accuracy on RobustBench of more than 41.33 % accuracy on Cifar10. All non-robust models are self-trained and have 0% robust accuracy. We specify this understanding of “robust models” versus “non-robust models” in the revision. \n\nW3: [observation is not surprising] While we agree with the reviewer’s intuition, we would like to point out that we are not aware of any prior theoretical proof or any other statistically significant empirical analysis which has actually shown this before. Our non-trivial contribution is to replace intuitions with a scientific analysis, providing a solid base for further works in model calibration and robustness. \n \nW4: [ablations of architecture influences] We have discussed this aspect already in the limitations section of our paper. We agree that our submission can only be understood as the starting point of analysis for exactly this reason. Yet, we want to point out that we show some more details on the behavior of FLC-pooling networks in the supplementary material in Figure 18. Since training high-quality adversarially robust models is equally expensive as it is technically non-trivial, especially for novel architecture designs, we understand this first dataset of paired “robust” and “non-robust” models as an important first step, even though the variance in the underlying architecture design is still limited. \n\nQ1: The over-confidence, as defined by Naeini et al, [54] and in equation (3) of our paper, takes into account the confidence of models for incorrect predictions while disregarding the confidence in correct predictions. Low overall confidence can indeed reduce the model over-confidence, but this would lead to an increased calibration error. Please refer to Equations (1) to (3) to clarify.\n\nQ2: We are not aware of any detailed study that would provide such empirical evidence. Yet, following the argumentation of Grabinski et al., ECCV 2022, models with traditional pooling operations can suffer from aliasing and thus focus on high frequency information that is not reliable. This concept is in line with the implications of the widely discussed texture bias [2], as well as the robust model by Saikia et al [3]. We add this discussion to the revision of our paper. \n\n[2] Geirhos, R., Rubisch, P., Michaelis, C., Bethge, M., Wichmann, F. A., & Brendel, W. 
(2018). ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. arXiv preprint arXiv:1811.12231.\n[3] Tonmoy Saikia, Cordelia Schmid, Thomas Brox, Improving robustness against common corruptions with frequency biased models, CVPR 2021.\n",
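As an editorial aside on Q1 above: the distinction between calibration error and over-confidence is easy to make concrete. Below is a minimal NumPy sketch, assuming the standard binned ECE of Naeini et al. and treating over-confidence as the mean confidence on misclassified inputs; the paper's exact Equations (1)-(3) are not reproduced in this thread, so function names and binning details here are illustrative only.

```python
import numpy as np

def ece(confidences, correct, n_bins=15):
    # Binned Expected Calibration Error: weighted mean of
    # |accuracy - confidence| over equal-width confidence bins.
    # confidences: float array in [0, 1]; correct: boolean array.
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    total, err = len(confidences), 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            err += (mask.sum() / total) * abs(correct[mask].mean() - confidences[mask].mean())
    return err

def over_confidence(confidences, correct):
    # Mean confidence on *incorrect* predictions only; a model can lower
    # this while its ECE worsens, which is exactly the point made in Q1.
    wrong = ~correct
    return confidences[wrong].mean() if wrong.any() else 0.0
```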
" With all due respect, this review hardly meets the minimum standards one should expect from NeurIPS: it does not provide any helpful or constructive criticism. Instead, it reads like a template rejection phrase where the reviewer even forgot to insert the correct venue (we are not at ICLR (!)). Beyond this, the summary of our paper is also incorrect: While one would, from the spurious results of previous methods [0] and [1] mentioned by Reviewer pPVH and our reference [44], expect robust models to be better calibrated, our study shows that this is not necessarily the case (see for example Figure 3). Robust models are less overconfident, but in their training domain (clean samples, Figure 3, left), the calibration is better for non-robust models than the calibration of robust models. Only when adversarial examples or out-of-domain samples are considered (Figure 3, center and right, respectively), they actually show improved calibration. Yet, robust models are always less over-confident. We will state this more clearly in the revision.",
" Thank you for your valuable suggestions: We will add references to [0] and [1].\nTheir initial experiments confirm the necessity of a solid empirical analysis of the relationship between model confidence, calibration and robustness.\nIn particular, [0] do not investigate adversarially trained models but instead only look at vanilla networks and construct adversarial samples using the Carlini&Wagner attack. This is a highly limited perspective. They find that easily attackable data points are badly calibrated and that adversarial models have better calibration properties. In contrast, we analyze the robustness of models rather than individual data points on many paired samples. \nImportantly, the findings of our large scale study are more differentiated (see Figure 3). While robust models are always less over-confident, they are not always better calibrated with respect to clean (in-domain) data (Figure 3). Their calibration is much improved on the adversarial data they are trained on (this is less surprising). Their calibration is also improved with respect to out-of-domain data (Squares adversarial samples). \n[1] limit their analysis to one baseline model for which they report overconfident behavior of a robust model on SVHN. Further, they only report evidence for low resolution data. Their aim is to motivate the use of calibration to perform improved adversarial training. \n\nInstead, our submission facilitates a large-scale analysis of both, low resolution (CIFAR10/100) and high resolution models (ImageNet), on a substantial number of model pairs of robust and non-robust models. Given these remarks, we are the first to offer such a study and accompanying dataset and hope that it will be of broader use to the community.",
" In this work the authors investigate differences in calibration-related properties between standardly trained models and those with robustness interventions. Starting from the well known result that standardly trained models are over confident (in that they show high confidence even on incorrectly classified examples), the authors find that models with adversarial robustness interventions generally are less overconfident and better calibrated in terms of empirical calibration score, a heuristic for measuring calibration. They also find that while one can predict whether or not a model will correctly classify a given example about the same for both categories of models on natural test set examples, models trained with adv. interventions can much better predict whether or not an example that has been adversarially perturbed will be predicted correctly (both in the original threat model, and in an $\\ell_0$ threat model). The authors claim these results as the basis for a newly discovered relationship between adversarial robustness and calibration. The paper presents results that uncover interesting properties of robustly trained models. However, there are some concerns with treatment of prior work / novelty.\n\nPrior work/novelty: there are two prior related papers connecting adversarial robustness and calibration:\n- [0]: https://arxiv.org/abs/2006.16375 (NeurIPS 2021): finds that adversarially robust models have much better calibration properties according to the ECE heuristic (Section 3).\n- [1]: https://arxiv.org/abs/1910.06259 (ICML 2021): Use calibration to perform adversarial training by reducing confidence on adv examples at train time.\n\n[0] in particular looks like it contain results that could be highly related to the results presented here (i.e. seems to show a result very close to one of the core contributions in this paper); it would be good to clarify this situation. Questions are in strengths and weaknesses. Yes",
" The paper studies the calibration abilities of robust models. The paper investigates 71 adversarially trained models and compares these with naturally trained counterparts. The paper observes that most non-robust models are over-confident but robust models are less confident so they are better calibrated. Additionally, the paper observes that specific layers, downsampling, and activation functions can lead to better calibration. The paper analyzes the calibration abilities thoroughly with many robust models and arrives at the conclusion. However, the observation is not very interesting as it is expected behavior. Additionally, technical contributions are limited. I do not believe the contribution of this paper is enough for top-tiered conferences including ICLR. \n \n The paper can be improved better if the authors can actually improve applications or existing methods based on the observation in this paper.\n The limitation is discussed.\n\n\n-- Post rebuttal\n\nThank the authors for their response. I am sorry for the reference, but this definitely does not affect my final score. Even though your claim is correct, I still think that the contribution of this paper is very marginal and not enough for acceptance. And after reading other reviews, it seems like other reviewers also agree on this point. Therefore, I kept my original score.",
" This paper collected 71 robust models and their counterpart non-robust models, do inference on CIFAR-10, CIFAR-100, and ImageNet. They find that generally robust models have less confidence in both clean data and attacked data. They also find that downsampling strategies and activation functions influence much on prediction confidence. Strength: \n1. Writing is easy to follow.\n2. Enough results to support their argument.\n\nWeakness:\n1. Just making statistics on off-the-shelf checkpoints and reporting the results. No novel designs of architectures or training strategies are proposed. \n2. Models are simply classified as 'robust' and 'non-robust'. For different 'non-robust' models, there is less analysis of the adversarial training strategies.\n3. The conclusion, that robust models are less overconfident, is not surprising. Robust models are trained on examples that maximize the loss function during adversarial training, so empirically they are less confident about their decisions. \n4. Lack of detailed ablations of architecture influences.\n\n======== Post-rebuttal Update =========\nAfter a more thorough investigation, I do find that this idea is not covered by prior works. This work shows the lower-confidence phenomenon of robust models through large-scale contrast experiments, and also gives applications including predicting erroneous decisions and detecting adversarial samples. Based on these reconsiderations, I would like to raise my score to acceptance. 1. Obviously, high confidence in correct predictions and low confidence in incorrect predictions is good. But why low confidence in both correct and incorrect predictions is better than high confidence in both correct and incorrect predictions?\n\n2. Are there any straight connections between downsampling strategies and prediction confidence? An extensive report, but lacks novelty.",
" This paper focuses on the problem of CNNs being overconfident with their predictions and the effect of adversarial training on this matter. It contains extensive empirical analyses of model confidence scores: \n- Adversarially trained (AT) robust models.\n- Model architectures with parametrized activation functions and downsampling layers (as explored in [13]). \n\nThe authors show:\n- AT results in more calibrated models. They do so by:\n1. Taking existing robust model checkpoints from [15].\n2. Train an identical model that's not trained with AT.\n3. Creating a validation set of clean and adversarial samples in white-box and black-box settings.\n4. They support their claims on CIFAR-10 and -100 and ImageNet datasets. They visualize correct and incorrect class confidence scores, predicted score distribution, and the expected calibration error metric.\n\n- The robust models are less confident on attacked samples. This has been shown by comparing the ROC of robust vs. non-robust models on clean and attacked samples. They also show that the robust model confidences can be used to detect adversarial examples directly.\n\n- They show that improved building blocks result in lower confidence scores on adversarially attacked samples. They visualize the distribution of confidence scores when these modifications are made.\n\n Strengths:\n- Highlighting an important property of adversarial training.\n- Extensive empirical analysis covering different aspects of their hypothesis.\n- Paper writing and organization.\n\nWeaknesses:\n- In Figure 4, I would expect to see the ROC of clean models used for the same purpose although potentially it's not great.\n- More quantitative metrics could be reported: (a) The ROC AUC for attack detection of different approaches for experiments in Figures 4, 5, and 6, 8 for robust and non-robust models and the different building blocks. ECE for the experiments is interesting to see. I couldn't find these values in the appendix either.\n- There could be more experiments focused on the generalization of claims to unseen attacks. - In Figures 5 and 6, it seems that the ROC AUC of the second row can be good for attack detection. Can you please report this value? It can be the case that AT and learnable activation functions or FLC pooling can result in similar AUC. This could mean that in case of calibration is not important and we just care about attack detection, one can rely on swapping the building blocks which have lower complexity compared to AT.\n\n- Is this a fair conclusion: learnable activation blocks and FLC generalize better to unseen attacks compared to AT for attack detection How do Figures 4, 5, and 6 look like on unseen attacks?\n\n- typo in Figure 7: rigth -> right. The authors have adequately addressed the limitations of their work."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
5
] | [
"dIR5wPW5SJ0",
"dIR5wPW5SJ0",
"KHF63cv0ej3",
"WNmQ1Rt1jXT",
"N9YTfafZdb",
"ros6J3vHhHb",
"hjlbEh-KU6",
"ros6J3vHhHb",
"kKx5aGs48SA",
"p79-zaN84oN",
"te85nc0R5M4",
"We6q89kvsWt",
"vfti1vnaRVx",
"nq_2pyo_YgA",
"r-tumqwhndq",
"70SIueqjUL",
"nips_2022_5K3uopkizS",
"ousUEyFjbDM",
"70SIueqjUL",
"N9YTfafZdb",
"r-tumqwhndq",
"nips_2022_5K3uopkizS",
"nips_2022_5K3uopkizS",
"nips_2022_5K3uopkizS",
"nips_2022_5K3uopkizS"
] |
nips_2022_tmUGnBjchSC | Generalizing Bayesian Optimization with Decision-theoretic Entropies | Bayesian optimization (BO) is a popular method for efficiently inferring optima of an expensive black-box function via a sequence of queries. Existing information-theoretic BO procedures aim to make queries that most reduce the uncertainty about optima, where the uncertainty is captured by Shannon entropy. However, an optimal measure of uncertainty would, ideally, factor in how we intend to use the inferred quantity in some downstream procedure. In this paper, we instead consider a generalization of Shannon entropy from work in statistical decision theory (DeGroot 1962, Rao 1984), which contains a broad class of uncertainty measures parameterized by a problem-specific loss function corresponding to a downstream task. We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures such as knowledge gradient, expected improvement, and entropy search. We then show how alternative choices for the loss yield a flexible family of acquisition functions that can be customized for use in novel optimization settings. Additionally, we develop gradient-based methods to efficiently optimize our proposed family of acquisition functions, and demonstrate strong empirical performance on a diverse set of sequential decision making tasks, including variants of top-$k$ optimization, multi-level set estimation, and sequence search. | Accept | The paper proposed a novel acquisition function for BO, based on a generalization of Shannon entropy that enables one to incorporate problem-specific loss functions corresponding to a downstream task. The authors show that the proposed acquisition criterion generalizes a number of well-known BO acquisition functions, including EI/KG/ES/PES. A detailed training procedure for optimizing the acquisition function was discussed in the paper, and experimental results show that the proposed acquisition function with the optimization procedure performs well over a diverse set of tasks.
All reviewers agree that this paper is well written, and the idea of unifying a collection of “classical” BO acquisition functions is interesting. There were a few concerns about the sufficiency/significance of the experiments, mainly due to the (lack of) baselines considered in the tasks. The authors clarified the concerns by including preliminary runs of several new experiments, and highlighting that the proposed approaches were targeting novel tasks that went beyond the vanilla optimization tasks. There were no other critical concerns in the reviews. The authors are strongly encouraged to address the questions raised in the reviews when preparing a revision of this paper.
| train | [
"-qm1397_Ir",
"N967pn6zYC",
"htPUt2TbtE",
"khI9ukrSZ-f",
"4huYRc6HHSv",
"H4bFFn1fhLa",
"QpIUsZst4ad",
"uWIYyuuDO81"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The authors have adequately addressed my concern and questions.\nI am glad they have also shown \"Probability of Improvement\" to be a special case of their approach in response to another reviewer.\nMy rating remains unchanged after considering the discussion between authors and reviewers thus far.\n",
" Thank you for your helpful review! We appreciate the positive feedback, and address your comments and questions below.\n\n### **Experiments on traditional optimization settings**\n\nNote that the main reason we strayed away from experiments on traditional settings (like vanilla optimization) is that, for the typical losses in this setting, our EHIG acquisition function is *equivalent* to existing acquisition functions, such as EI/KG/ES/PES. Therefore, for these traditional settings, we are not aiming to show improved performance of the EHIG over existing acquisition functions — as they should achieve roughly the same performance!\n\nWe do, however, think that we make a contribution for these more-traditional settings, though not in terms of performance. Instead, for traditional optimization settings, our EHIG framework sheds light on when it is more suitable to choose one of the existing acquisition functions over the others (e.g. when to use EI vs KG vs ES vs PES), depending on the details of the optimization setting and the final error metric. This is because the EHIG sheds light on which acquisition function is optimal, depending on a problem-specific loss and action set to which the terminal action belongs, and thus gives guidance on which acquisition function to choose given the details of the optimization problem.\n\nWhile our HES method enjoys this unified perspective and provides new insights on the selection of existing acquisition functions according to different use cases, in our empirical study we thought it was more exciting to focus on the ability to easily adapt to new/customized optimization settings, which we think represents the next important step of applying (generalized) BO to broader applications. Thus we structured our experiments from this perspective.\n\n### **Questions**\n1. Although the random search (RS) baseline draws samples randomly from the domain, this does indeed optimize the negative loss, albeit quite slowly. For example, in Fig 2, these random samples do give some information about the top-$k$ points (with diversity) in the space, they just give far less information than the points chosen by HES (or the other baselines). If you consider an extreme case, where there are no limits on the sampling budget and we can draw an infinite number of random queries, we could then recover the black-box function up to any given accuracy — thus the curve will rise, but slowly compared to the other baselines.\n\n2. Thanks for the suggestion here. In Figure 4, we primarily included the visual result to illustrate the sequence search task, and since the space was quite small, only included this single HES result so that the figure was not too crowded. However, we will definitely include a visualization of all methods in the appendix for a clearer comparison.\n\n3. We will increase the font on both of these!\n\n\n**Thanks!** We hope that we have addressed each of your questions and comments. If there is anything else that we could do to increase your score, please let us know!\n",
" Thank you for your helpful review! We appreciate the positive feedback and aim to address each of your questions and comments below.\n\n### **Baselines for custom tasks**\n\n> It is unclear if any Bayesian Optimization approaches already exist for the custom tasks.\n\nFor our experiment results, we intentionally tried to focus on useful tasks where there does not already exist Bayesian optimization approaches that are specifically designed for these tasks (see our discussion on this in Section 1: Introduction, paragraph 4, and our note below the list of comparison methods in Section 7: Experiments).\n\nWe feel these types of custom/novel settings are some of the best motivators for our EHIG framework, which can be customized to incorporate a problem-specific loss. Therefore, for these tasks in our experiments, we used—as far as we could determine—the best set of comparison acquisition functions that we could find as baselines for each task.\n\n### **Question about distance function**\n\nFor our experiments in Section 7, for the distance function in the diversity term of Equation (4), we indeed used the Euclidean distance. In this formulation, one could easily incorporate a parameter into the loss which controls the strength of the diversity penalty (and this seems like a nice idea to add!) — in general, we intend for the loss to be defined based on domain knowledge of a given problem at hand. We will add both details to the revised version of our paper.\n\n### **Question about multi-level set estimation experiments**\n\nFor our experiments in Section 7, we only went up to two levels. Our code, however, is written generally and can easily extend up to larger numbers of levels, though we didn’t focus on experiments on a higher number of levels in the paper.\n\n**Thank you**, and we hope that we have addressed each of your questions and concerns.",
" **(Continued from comment above)**\n\n### **Lacks some classic BO baselines, such as PI or GP-UCB:**\n\nWe are very happy to include these standard BO baselines (such as PI and GP-UCB) and have added them to the revised paper for a few initial experiments (Appendix B.1, Figure 7).\n\nWhen writing the paper, we originally thought to disclude these baselines — the reason was because we chose to focus our experiments on tasks such as top-$k$ optimization and other custom settings (rather than vanilla optimization), and since these classic acquisition functions like PI/GP-UCB are not designed for these custom settings we thought it didn’t make sense to include more than one “vanilla optimization” baseline (for which we chose knowledge gradient).\n\nThat being said, we are happy to include PI/GP-UCB as suggested! We’ve added plots that show the comparison among these acquisition functions in the first experiment (Figure 7) so far, and will add them to the further experiments as well if it is desired.\n\n### **EI in experiments**\n\nSimilar to the discussion above, based on our experiments on non-vanilla optimization settings, we wanted to focus on losses under our HES framework that were tailored to these tasks. This is in contrast with EI, which we prove (in Section 4) is an instance of our framework that is well-suited for vanilla optimization.\n\nHowever, we are happy to include EI experimental comparisons if these are desired — for a start, we’ve implemented/run this baseline, and added the results to the comparison plots for the initial experiments in our revised paper (Appendix B.1, Figure 7), and will add them to the further experiments if desired.\n\n### **Probability of improvement as an example of proposed framework**\n\nThanks for bringing up this discussion. When we originally worked on our submission, we had trouble coming up with a good way to fit the probability of improvement (PI) acquisition function within our EHIG framework, and did not pursue it further. However, we made another attempt based on your suggestion and believe that we’ve found a way to incorporate this acquisition function. We’ve added a Theorem and Proof to Appendix Section A.4 that shows how (PI) is a special case of our EHIG framework under a particular loss and action set—specifically that EHIG can be made equivalent to PI up to an additive constant. Note that this proof is similar to our proof for the expected improvement (EI) acquisition function.\n\n### **Coin flip example to motivate a reasonable measure of uncertainty**\n\nThe coin flip example was simply intended to help explain the definition of a concave uncertainty measure, and also give intuition of why a concavity is a desirable property of an uncertainty measure, as it is a property of both $H_{\\ell, \\mathcal{A}}$-entropy as well as Shannon entropy. Specifically, this concavity property means that the average of uncertainties of two distributions should be less than the uncertainty of the average (mixture) distribution. In this coin flip example (where we “have two distributions p1 and p2, and flip a coin to sample from p1 or p2,”), this concavity property is equivalent to saying that we should have less uncertainty about the final sample if we are allowed to observe the outcome of the coin flip than if we are not allowed to observe it — which makes intuitive sense as a property that we want!\n\n**Thanks!** We hope that we have addressed your concerns — If there is anything else that we could do to increase your score, please let us know!",
" Thank you for your helpful review. We appreciate the positive feedback on our paper (“well written”, “technically sound”, \"particularly interesting\", “unifying the large zoo of BO acquisition functions under the same umbrella is neat”), and aim to address each of your comments and questions below in order to improve the quality of our submission.\n\n### **Motivation behind custom tasks:**\n\nTo provide some additional motivation for the custom tasks in our experiments, we wanted to describe a few more concrete instances where these tasks are useful in practice.\n\nWe see the task of *top-k optimization with diversity* whenever we have an expensive black-box function, and want to estimate multiple optimal designs (or locations, etc) and don’t want redundancy in the optimal designs. One example motivation is in tasks such as active monitoring of pollution [1], if a user wishes to efficiently estimate a set of locations that have the highest levels of pollution, in order to allocate sensors for aid/regulation or resources for cleanup. Another example is in the space of materials design, such as in computational catalyst screening [2], where the goal is to perform a sequence of expensive simulations in order to efficiently determine the top-k catalysts with highest simulated adsorption energies, as a recommended set for follow-up experiments.\n\nAnother recent application that we are familiar with from the work of our colleagues, which is currently underway, is in the space of materials/mixture characterization. Here, the goal is to guide temperature and pressure controls in Small Angle X-ray Scattering measurements, in order to efficiently characterize properties of a class of supercritical fluids (SCFs) [3]. Notably, in this application, a set of two peaks in the measurement space must be found in order to characterize the SCF properties of interest. This is precisely a top-two optimization problem as described in our paper, and a top-k-with-diversity acquisition function is currently being developed for this task in practice.\n\nFor the *sequence search* task, we also find concrete applications as well. For example, applications of this appear in materials design, in the task of synthesizing a library of nanoparticle sizes [4] — i.e. where the goal is to find a set of inputs that yield a set of nanoparticles of different pre-defined sizes. Finally, *multi-level-set estimation* is useful any time one needs to estimate more than two partitions of a design space. This is useful in various applications, such as when estimating phase boundaries for materials design [5], or when health policy makers must estimate multiple disease prevalence level sets (i.e. regions where COVID prevalence exceeds 1%, 2%, etc.) for graded reopening decisions [6, 7].\n\nNote that our Bayesian-model-based methods have particular benefits over classic optimization techniques in cases where we can only get zeroth order information via function evaluations (i.e. no gradients), and where the function is particularly expensive — and thus we need to be as sample (iteration) efficient as possible. These Bayesian techniques allow us to leverage our probabilistic surrogate model to intelligently choose a sequence of function queries for increased sample efficiency.\n\nWe will include some of these concrete applications, more specific citations, and above discussion in the updated version of our paper.\n\n[1] S. P. Hellan, C. G. Lucas, N. H. Goddard. Bayesian Optimisation for Active Monitoring of Air Pollution. 
In 36th AAAI Conference on Artificial Intelligence, 2021.\n\n[2] K. Tran, W. Neiswanger, K. Broderick, E. Xing, J. Schneider, Z. Ulissi. Computational catalyst discovery: Active classification through myopic multiscale sampling. The Journal of Chemical Physics, 2021. https://doi.org/10.1063/5.0044989\n\n[3] K. Nishikawa, I. Tanaka. Correlation lengths and density fluctuations in supercritical states of carbon dioxide. Chemical physics letters, 1995.\n\n[4] A. Fong, L. Pellouchoud, M. Davidson, R. Walroth, C. Church, E. Tcareva, L. Wu, K. Peterson, B. Meredig, C. Tassone. Utilization of machine learning to accelerate colloidal synthesis and discovery. J. Chem. Phys, 2021. https://doi.org/10.1063/5.0047385\n\n[5] D. Pradhan, S. Kumari, E. Strelcov, D. Pradhan, R. Katiyar, S. Kalinin, N. Laanait, R. Vasudevan. Reconstructing phase diagrams from local measurements via Gaussian processes: mapping the temperature-composition space to confidence. Nature Computational Materials. 2018.\n\n[6] E. Oh, A. Mikytuck, V. Lancaster, J. Goldstein, S. Keller. Design and Estimation for the Population Prevalence of Infectious Diseases. medRxiv, 2021.\n\n[7] C. Yiannoutsos, P. Halverson, N. Menachemi. Bayesian estimation of SARS-CoV-2 prevalence in Indiana by random testing. Proceedings of the National Academy of Sciences, 2021.\n\n**(Response continued in comment below)**",
" The paper generalises Shannon entropy-based acquisition functions (AF) to a broader class of uncertainty measures. Doing so allows the authors to frame several popular AFs as special cases of the proposed entropy. The proposed AF is also able to provide customised solutions for a number of modified versions of BO setting. The authors study efficient optimization of the AF under certain smoothness conditions. Experimental evaluation compares the proposed method with several baselines. Post-rebuttal:\nI would like to thank the authors for answering my questions. Most of my concerns are addressed, so I will update my score to 6.\n__________\n\nThe paper is well written and appears to be technically sound. The idea of unifying the large zoo of BO acquisition functions under the same umbrella is neat (even though it was previously mentioned in the literature). It is particularly interesting to see both information-based and decision-theoretic AFs to be reached as special cases of proposed entropy.\n\nThe authors mention a diverse set of AFs for custom tasks. However, some of these applications seem somewhat artificially crafted. While the authors do provide a number of references, it’s not always clear on how such problems are derived from these references (e.g., line 221, reference 30 leads to a whole PhD thesis). Furthermore, with such limited description it's not always obvious why these tasks still require BO as an expensive black box optimization, and not some classic optimization techniques. It would be helpful if the authors could shed some more light on the motivation behind these custom tasks.\n\nThe experiments are one of the weaker points of the paper. While they do cover a diverse set of tasks, the set of methods is surprisingly small and lacks some “must-have” classic BO baselines, such as PI or GP-UCB. Even more surprisingly, EI, which is mentioned in the main paper as a special case of proposed entropy, is missing in the experimental section.\n A few questions in addition to the concerns raised in the previous section:\n\nLine 115: How does the coin example help to motivate that this is a reasonable measure of uncertainty?\n\nExperiments section: How many trials did the authors average results over for the experiments?\n\nHow realistic is Eq 7 in general?\n\nHave the authors considered whether probability of improvement (the simplest BO AF one could imagine) can be obtained under the proposed framework?\n -",
" The authors derive a generic acquisition function design strategy (EHIG) based upon a user-specified loss functions and action set applicable to a variety of custom tasks involving Bayesian optimization, viz., Top-k optimization with diversity, Multi-level set estimation and Sequence search.\nEHIG includes as special cases both information-based acquisition schemes such as entropy search (ES) and decision theoretic acquisition schemes such as knowledge gradient (KG) and expected improvement (EI).\nEmpirical results on multiple datasets indicate the superiority of EHIG on custom tasks considered. Strengths\n1. The approach is clearly presented and well-motivated.\n\n2. The proofs in the appendix seem to convincingly demonstrate how EHIG yields multiple well-known special cases such as ES, KG and EI.\n\n3. Experimental validation is performed on a wide range of datasets.\n\nWeaknesses\n\n1. In the experimental comparison, only EHIG is tuned to the custom task and competing approaches are tuned to the conventional task of black box optimization (POM may be an exception in that it may be considered tuned for single level set estimation.)\nSo it appears to be a foregone conclusion that EHIG would outperform competitors.\nIt is unclear if any Bayesian Optimization approaches already exist for the custom tasks.\n 1. What distance function was considered for the diversity term in the experiments in Sec. 7 ? Was it Euclidean distance ? Did a parameter control the strength of the diversity penalty in eq. (4) ?\n\n2. Did the authors only consider two levels (Multihills) and one level (Pennsylvania Night Light) for the multi-level set estimation experiments ?\nOr were larger number of levels also considered ?\n The authors discuss avenues for future work addressing implicit limitations in Sec 8.\n\n",
" This work introduces a family of acquisition functions (AFs) for Bayesian Optimization (BO) based on the decision-theoretic entropies, $H_{l,A}$-$\\textit{entropy}$, which is a generalized version of Shannon entropy. The AFs can be tailed to select queries that maximize the reduction of uncertainty in $H_{l,A}$-$\\textit{entropy}$, which is defined as $\\textit{expected}$ $H_{l,A}$-$\\textit{information gain}$ (EHIG). The EHIG is a general form of AFs and can be reduced to information-based AFs or decision-theoretic AFs by choosing the parameters, $l$ and $A$. A framework is provided for the applications of EHIG on several categories of problems. Moreover, a gradient-based acquisition optimization method is proposed. Finally, evaluations of the method are made on examples datasets. Strengths:\n1. This paper is well-organized and easy to read. \n2. This article proposes a brand new general AF which unifies two branches, information-based and decision-theoretic AFs, for BO. Following the framework to carefully construct the the EHIG AF for each task and using the proposed gradient-based optimization method, the proposed $H_{l,A}$-$\\textit{Entropy Search}$ (HES) procedure has advantage over other baseline methods.\n\nWeaknesses:\n1. The experiments are carried out on tasks which no AF has been developed in other work. However, to make this paper more significant, there should be examples that compare the result of the proposed HES method to other traditional BO method with AF on well-studied tasks. In this way the readers can have a better idea on the performance of HES.\n\n 1. Does the Random Search (RS) method optimize any function? In top row of Fig.2 the negative loss of RS slowly increases comparing to HES, but from the description of RS (line 305) the samples are drawn randomly from the full domain, therefore, it is hard to understand why the negative loss increases. Is there an explanation?\n\n2. In Sequence Search task only the route of HES is shown and discussed. Bases on the left of the Fig. 4 at least the Uncertainty Sampling has comparable performance on the negative loss to HES. It would be better if all routes can be shown on the right of Fig. 4 for a thorough and clearer comparison. \n\n3. Some of the plots are too small to see the details and axes, eg. bottom rows of Fig. 2 and Fig. 3. There is no societal impact from this work. The authors should state and summarize the limitations clearly to the readers.\n"
] | [
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"htPUt2TbtE",
"uWIYyuuDO81",
"QpIUsZst4ad",
"4huYRc6HHSv",
"H4bFFn1fhLa",
"nips_2022_tmUGnBjchSC",
"nips_2022_tmUGnBjchSC",
"nips_2022_tmUGnBjchSC"
] |
nips_2022_wZk69kjy9_d | Deep Hierarchical Planning from Pixels | Intelligent agents need to select long sequences of actions to solve complex tasks. While humans easily break down tasks into subgoals and reach them through millions of muscle commands, current artificial intelligence is limited to tasks with horizons of a few hundred decisions, despite large compute budgets. Research on hierarchical reinforcement learning aims to overcome this limitation but has proven to be challenging, current methods rely on manually specified goal spaces or subtasks, and no general solution exists. We introduce Director, a practical method for learning hierarchical behaviors directly from pixels by planning inside the latent space of a learned world model. The high-level policy maximizes task and exploration rewards by selecting latent goals and the low-level policy learns to achieve the goals. Despite operating in latent space, the decisions are interpretable because the world model can decode goals into images for visualization. Director learns successful behaviors across a wide range of environments, including visual control, Atari games, and DMLab levels and outperforms exploration methods on tasks with very sparse rewards, including 3D maze traversal with a quadruped robot from an egocentric camera and proprioception, without access to the global position or top-down view used by prior work. | Accept | This paper studies an interesting problem, and overall the reviewers agreed the exposition and validation are sufficient. We encourage the authors to consider the issues raised by the reviewers and further improve the work in the final version. | train | [
"Zja5XQBsMe",
"fKk69TDTwy3p",
"E8P69PAE1b4",
"-Za9xX4q_Od",
"V2C9VbtOHKl",
"eVLn1BG1Yrj",
"piP4zTVwwB",
"oFZKKNeCET",
"PZc4mYYIE_Z",
"bdyObEErLWZ",
"rxjnLkCVb38"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer q27r,\n\nThe discussion period is coming to an end soon and we haven't received a response from you yet. Could we please ask you to confirm whether our response has resolved your concerns or whether you see any remaining issues that motivate your current rating? If there are remaining issues, we would be more than happy to address them or further clarify where necessary.\n\nThank you!",
" Dear Reviewer BArk,\n\nThe discussion period is coming to an end soon and we haven't received a response from you yet. Could we please ask you to confirm whether our response has resolved your concerns or whether you see any remaining issues that motivate your current rating? If there are remaining issues, we would be more than happy to address them or further clarify where necessary.\n\nThank you!",
" Dear Reviewer uwa5,\n\nThe discussion period is coming to an end soon and we haven't received a response from you yet. Could we please ask you to confirm whether our response has resolved your concerns or whether you see any remaining issues that motivate your current rating? If there are remaining issues, we would be more than happy to address them or further clarify where necessary.\n\nThank you!",
" Thank you for your thoughtful review! We added clarification of the manager training and the way rewards are combined, added insights about the performance on Atari games, and responded to your remaining questions. Please let us know whether this fully addresses your concerns or whether there are any points remaining, which we would be happy to address.\n\n> Q1: The authors may want to explain why the \"manager\" policy can be well trained with the async-trained world model decoder [...] how to guarantee that the \"z\" in world encoder share the same space with the \"z\" in the manager? The paper will be much stronger if there is a theoretical justification.\n\nThere is probably a small misunderstanding here. If the explanation below does not answer your question, please let us know and we'll be happy to clarify further. All components in Dreamer and Director are optimized concurrently to improve throughout the agent's lifetime --- by \"separately\" we just mean that gradients are stopped between components.\n\nAs we understand your question, it is more a question about Dreamer than Director, namely why we can continuously optimize both (1) the world model and (2) the policy that takes world model representations as inputs. The answer is that both components are optimized throughout training, so as the world model representations change the policy is trained to adapt to the changes. In Dreamer (and Director), the policy is optimized on batches of size 16K time steps through massively parallel rollouts, so it can adjust quickly to changes in the world model.\n\nWe also point out that \"z\" in our paper refers not to the world model representations but to the latent space of the goal autoencoder that learns on top of the world model representations. The manager's actions stay aligned with the latent space of the goal autoencoder because the manager treats both the goal autoencoder and the worker as a \"black box\" --- it simply chooses 8x8 discrete actions that maximize future task rewards and exploration rewards. As such, it adapts it's strategy as the goal autoencoder and worker policy change over time.\n\n> Q2: There are two rewards to stimulate the agent. One is for exploration and the other one is for reaching the goals. Do these two rewards compete with each other?\n\nIn Director, the manager maximizes both task reward and exploration reward, whereas the worker maximizes only the goal reward. Thus, it is the task and exploration rewards that are being combined, not the exploration and goal rewards. The only exception is \"Director (worker task reward)\" in Appendix A.\n\nTo combine multiple reward signals, we use the following mechanism described in Section 2.3. We use the imagined rollout to compute two separate returns for the two reward signals, divide each by its exponential moving standard deviation, and then sum them with weights w^extr = 1.0 and w^expl = 0.1. This ensures that the extrinsic/task reward contributes stronger to the policy gradient than the exploration bonus. As a result, the agent explores in the absence of rewards but once rewards are found, the agent pays more attention to them than to the exploration signal.\n\n(continued below...)",
" > Q3: What is the magic behind changing the goal very k=8 step?\n\nThe imagination rollouts of the world model have a length of 16 steps. We a goal twice during the rollout, at steps t=0 and t=8. This results in a goal duration of K=8. We also experimented with K=4 and K=16 and found that they result in similar performance, with shorter durations benefitting fast-paced environments and longer durations benefitting long-horizon tasks with sparse rewards. We will add the experimental results for this ablation to the appendix of the final paper. We see researching mechanisms for dynamic goal durations as an interesting future direction, for example based on a heuristic of whether a goal has been achieved.\n\n> Q4: Why Dreamer performs so bad in Ant Maze XL and Pin Pad Six?\n\nThese environments feature sparse rewards that require exploration techniques beyond a stochastic policy to discover within the provided interaction budget. Dreamer performs poorly on these tasks because it fails to discover the reward.\n\n- The only reward in Ant Maze is given when the ant touches the goal object, so from the initialized position the ant has to discover how to locomote and navigate all the way to the other side of the maze before receiving its first reward.\n- In the Pin Pad environments, the only reward is given once the agent steps on all pads in exactly the correct sequence.\n\n> Q5: Any insights for the reason that the proposed method is weaker in Atari games?\n\nThank you for this question, we will add insights on this to the paper. \"Pure\" Director underperforms Dreamer on fast-paced Atari games because of their need for precise, fast movement. The worker receives no task reward to learn these fast movements, instead it has to be steered by the manager through goals. However, the goals only change every K=8 steps, which is too slow for quickly moving back and forth in state-space. A simple remedy is presented in the same figure, where we give task reward to the worker policy, so that it can learn to perform fast motions near the current goal that serve the task, without the manager having to communicate those detailed motions. An interesting future direction is to find heuristics to switch goals when the previous goal is reached, which could be an elegant solution here.",
" Thank you for your detailed review! In summary, we added our findings of investigating different goal autoencoders, point out the goal visualization videos on our anonymous website (linked in the abstract), and clarified on your questions. Please let us know if our response resolves your concerns or whether there are remaining points, in which case we would be happy to address them.\n\n> The qualitative subgoal results are somewhat weak [...] For the most part, it looks like the manager just chooses one goal, and all of the actual work is carried out by the low level agent.\n\nWhat you are describing is the fallback behavior of Director for dense reward tasks that don't require long horizons and hierarchy. In those cases, the worker alone can solve the task and thus the manager only has to communicate what the task is. On tasks that require long horizons (Visual Pin Pad, Egocentric Ant Maze), the subgoals correspond to intermediate steps along the task --- such as walls of different colors or different pads to step on --- that substantially simplify the task for the worker and enable Director to find rewards and repeatedly seek them out again in these very sparse tasks where other baselines fail.\n\n> the PIN pad domain [...] is probably the best place to show this, since having subgoals that lie on each of the PIN pads would pretty much demonstrate the result\n\nWe agree that Visual Pin Pad is a good environment to show this. If you haven't yet, please take a look at the video visualizations of subgoals along the episode (on the anonymous website linked in the abstract). The high-level policy indeed chooses subgoals that correspond to which pad to activate next by adding the pad to the history display at the bottom of the screen. If you have further questions about the subgoal decomposition, please let us know and we'll be happy to go into more detail. The videos on the website also show meaningful subgoal decompositions on many other tasks.\n\n> the skill encoder [...] more information about some of the tradeoffs and design choices there. The ablation in the appendix only shows that it is necessary in some cases, not how the size of the goal autoencoder or other components scale.\n\nThank you for this suggestion. We experimented with different latent spaces and we will include these findings in the final version of the paper:\n\n- Gaussian latents of different dimensions (32, 128). We found that the resulting unbounded latent space fails to constrain the manager to goals that correspond to valid states, similar to having no goal autoencoder at all.\n- A single categorical with a varying number of classes (8, 32, 128, 512). We found that single categoricals have not enough capacity to reasonably model the state space and result in poor performance except in very simple environments. Very large categoricals result in better modeling performance but create a large exploration challenge for the high-level policy, which also results in poor performance.\n- Vectors of categoricals of different sizes (4x4, 8x8, 16x16, 32x32). These factorized spaces have high capacity (8^8=16M) and also allow the high-level policy to explore multiple dimensions in parallel, simplifying its exploration problem. The different sizes all worked quite well with 8x8 performing best, which we used in the final agent.\n\n(continued below...)",
" > more a feat of engineering than providing any significant insights into the way in which hierarchical RL can be run.\n\nThe paper shows that goal-conditioned hierarchical RL can benefit greatly from compressing all the possible goals into a compact discrete space (goal autoencoder) and from employing an exploration bonus at the high level. Our corresponding ablations in Appendix B and D demonstrate the significant effect of these two ideas.\n\nWe hypothesize that discretizing the goals simplifies learning for the high level by constraining it to choose among goals that correspond to valid states in the environment. While we see implementing these components into a widely successful hierarchical RL agent by itself as a substantial effort and contribution, we hypothesize that future hierarchical RL agents will likely benefit from the same two ideas.\n\n> feudal networks (Vezhnevets et. al. 2017), performed many of the same ideas with different design choices for generating the latent space and training the goal-based RL algorithm\n\nWhile the idea of using goal-conditioned policies for hierarchical RL has been around for a long time, there are many distinct differences between FeUdal Networks (FuN) and Director, in addition to the world model that you mentioned. Most important for the present discussion, FuN is not using any exploration bonus and FuN does not use a goal autoencoder to restrict its goals to realistic states. Both components are critical for the success of Director and our ablations that remove these components are closer to FuN. FuN also gives task rewards for the worker, so it is unclear whether the communication between the manager and the worker works reliably.\n\nWe would have also liked to compare to FuN experimentally but unfortunately the authors did not release their code and previous attempts at a reimplementation at our lab were not promising. Combined with the results in the FuN paper showing only small improvements over an LSTM agent and that we are not aware of any follow-ups to FuN from DeepMind itself, we decided not to pursue this direction further.\n\n> 45M time steps is long compared to modern methods for atari games [...] Rainbow and PPO reach 400 reward (higher than the values given by Director and Dreamer), in 2M time steps (https://wandb.ai/tianshou [...]\n\nThe aim of Appendix A is not to show benefits of Director over Dreamer. Instead, the point is that Director --- a method designed specifically for RL tasks with very sparse rewards --- can achieve competitive performance out of the box also on a wide range of tasks with dense rewards where the hierarchy isn't needed. Together with the sparse reward tasks in the main paper text, this shows that Director extends the set of environments solved by Dreamer, especially when giving some task reward to the worker.\n\nWhile it is possible to achieve faster learning on some tasks by increasing the rate of gradient steps or tuning specifically for easier tasks, this is orthogonal to the investigations in our paper. As mentioned in the experiments section, we fix the training frequency for all methods to perform a gradient step every 16 environment steps to allow for a fair comparison and fast experimentation.\n\n> Ant-maze domains often struggle with a sort of low-variance issue, which means that a policy that explores well enough to reach the goal eventually can reach the goal again. This is not usually an issue [...] 
this brings up the question of whether the performance benefit is just because of exploration [...] The Plan2Explore results don't really alleviate this because they never reach the goal either\n\nWe don't fully understand this point; please clarify if our response doesn't resolve your question around this. We agree that the main challenge in environments with very sparse rewards is often exploration. In the larger Ant Mazes, 10M random actions receive not a single reward, so some exploration objective is needed. In Ant Maze S, Plan2Explore reaches the goal a few times early on (see the non-zero return) but then becomes too explorative in its low-level actions and starts flipping over. In Pin Pad 3/4/5, Plan2Explore also clearly finds rewards (even when Dreamer does not). Sekar et al. (2020) showed that Plan2Explore outperforms ICM.\n",
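As a rough illustration of the reconstruction-error exploration bonus that the review below attributes to the high level, consider the sketch here; the agent's exact reward definition may differ, so treat this as an assumption-laden simplification.

```python
import torch

def exploration_bonus(autoencoder, state_feat):
    """Reward states that the goal autoencoder reconstructs poorly (novelty)."""
    recon = autoencoder(state_feat)
    return ((recon - state_feat) ** 2).mean(dim=-1)

# Toy check with an identity "autoencoder": perfectly modeled states score 0.
print(exploration_bonus(lambda x: x, torch.ones(2, 4)))  # tensor([0., 0.])
```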
" Thank you for your feedback! Your summary is accurate. Below, we respond to your concern about the time steps mentioned in the intro and we emphasize the significance of our paper. Could you please let us know whether this addresses your concerns or whether there are still remaining issues we could address?\n\n> The evaluation falls short of the initial promise in the intro. To the knowledge of the reviewer, none of the tasks studied have the length of time horizon promised in the intro. Thus it remains unclear if this method indeed works better in that promised setting. The proposed method is clearly helpful in settings with sparse rewards, so perhaps that should be the motivation and promise.\n\nThank you for pointing out this potential misunderstanding. We wrote \"complex control problems can require millions of time steps\" just to motivate hierarchical RL. The sparse reward tasks in our paper require several hundreds of time steps (although discovering the rewards in the first place takes longer). We'll add a sentence to the intro that explicitly says this.\n\n> Adding summary statistics for the main paper on the standard benchmarks would be very helpful. There is likely room for something that small. And part of Fig 6 could be moved if not.\n\nThat's a great idea! The camera-ready version allows for an extra page, so we will move the full results of the standard benchmarks into the main text.\n\nWe would like to emphasize that previous hierarchical RL methods have not been able to solve a wide range of pure RL tasks (no pretraining tasks, demonstrations, semantic goal space, etc) while ensuring that the hierarchy was used (no task reward given to the low level). Director not only learns successful hierarchical behaviors across a wide range of environments, it further learns them directly from pixels. We thus think that Director constitutes a significant step forward for hierarchical RL research.",
" In this paper, the authors introduce a learnable RL-based planner. It is specifically design for long horizon sparse reward environment. The proposed planner consists of two major components. A world model uses an off-the-shelf encoder to represent the raw pixel input and additional goal autoencoder is trained to obtain a more sparse world representation. On the other hand, a manager policy and a worker policy are trained to predict the next goal and the next atomic action. The authors also designs exploration reward and goal reward to stimulate the agent. Experiments are conducted on public benchmarks including long horizon navigation and atari games. Strength:\nThe overall presentation for the proposed method and experiments is friendly for audience to understand although there are some minor grammar issues. It is a challenging problem for RL agent to plan in long horizon and reward-sparse environment. According to some of the experiment results, the proposed method works very well (Ant Maze). The idea of using a more sparse representation to represent abstract action is interesting and the overall design of the proposed method is reasonable.\n\nWeaknesses:\nMost of the components used in the proposed method are off-the-shelf, but I don't think it is a big concern regarding the originality. The design of the \"manager\" mechanism is not very solid. See question below. Although the proposed method show promising results in some tasks, in general RL benchmarks it is slightly weaker than the popular baselines. It will be better to discuss more about it. Q1: The authors may want to explain why the \"manager\" policy can be well trained with the async-trained world model decoder from a more theoretical perspective. To the best of my understanding, the autoencoder is trained by memory replay but the \"manager\" is on-policy. Given the fact the the world model is trained separately, how to guarantee that the \"z\" in world encoder share the same space with the \"z\" in the manager? The paper will be much stronger if there is a theoretical justification.\n\nQ2: There are two rewards to stimulate the agent. One is for exploration and the other one is for reaching the goals. Do these two rewards compete with each other? For example, the reward of exploring a goal state is larger than reaching it. How to avoid it?\n\nQ3: What is the magic behind changing the goal very k=8 step?\n\nQ4: Why Dreamer performs so bad in Ant Maze XL and Pin Pad Six?\n\nQ5: Any insights for the reason that the proposed method is weaker in Atari games?\n\nI've been away from this community for a while, therefore I will also consider other reviewers comments. I will rise my rating if the authors can provide insightful responses. Based on the submitted manuscript, I lean to accept. See weakness.",
" This paper proposed a method for learning a hierarchical policy that operates from pixels, without pre-defined high level actions, and using model based RL\n\nThe policy is broken up into 4 key components. A world model that models environment dynamics, representation, and reward. A manger policy that selects goals in a discrete latent space. A goal autoencoder that decodes the select goal into representation space, and a worker policy that selects low level actions to take to achieve the manager's goal.\n\nThe proposed method is evaluated in two environments that stress sparsity of rewards and on standard benchmarks.\n\nIn the sparse reward benchmarks, the proposed method outperforms baselines. The gap to baselines increases as the environment complexity increases.\n\nOn the standard benchmarks, the proposed method matches Dreamer when the worker is trained with the task-specific reward in addition to the reward that encourages it to reach the manager's goal. ### Strengths\n\nThe proposed method is able to learn a hierarchical policy with MBRL that does not require pre-specified high level actions. These high-level actions are simply environment states to reach, making them easily interpretable.\n\nThe proposed method matches state-of-the-art on standard benchmarks.\n\nThe proposed method works well with sparse rewards.\n\nThere are extensive ablations in the supplement.\n\nThe paper is well-written.\n\n### Weaknesses\n\nThe evaluation falls short of the initial promise in the intro. To the knowledge of the reviewer, none of the tasks studied have the length of time horizon promised in the intro. Thus it remains unclear if this method indeed works better in that promised setting. The proposed method is clearly helpful in settings with sparse rewards, so perhaps that should be the motivation and promise.\n\n### Suggestions for improvement\n\nAdding summary statistics for the main paper on the standard benchmarks would be very helpful. There is likely room for something that small. And part of Fig 6 could be moved if not. See above Yes",
" Summary: Learn sub-policies by planning in the latent space of a learned dynamics model, where the low level policy performs goal-based RL, and the high level policy uses exploration and task rewards. It takes pixels as input, learns a latent world model, and then performs planning in the latent space of the world model. The world model is learned using a network dynamics modeler RSSM, from PlaNet, which is a variational pixel and rewards reconstruction algorithm. In order to reduce the dimensionality of the action space for the high-level controller, the algortihm uses a second encoder to encode into action space, which is the same as a vector of categoricals method used in Dreamer V2, which uses a variational reconstruction of the world model space. The exploration reward is based on reconstruction error (which mirrors work in curiosity rather than count-based methods), and combined with the extrinsic reward through magic values. The worker policy uses a goal conditioned policy with a shaped max-cosine reward. The overall algorithm is evaluated on a large suite of common RL tasks including Atari, control suite, etc. against flat baselnes. Strengths: \nThis work describes a straightforward concept well, utilizing prior work in model-based RL, goal-based RL and exploration to contruct a hierarchical RL algorithm. The stated goal of creating a simple algorithm is executed well, and evidenced by the ability to test the algorithm on a wide variety of different domains. On top of that, the design choices are generally very reasonable, with the model-based being used both for exploration and to generate a pre-latent space for the mutual-information like latent skills, and the low level policy being goal-following.\nConsidering the difficulty of running hierarchical RL algorithms, which incorporate the combination of a large number of different ideas functioning simultaniously, this work is an impressive achievement engineering, with finding the right combination of methods to get generally improved performance on a wide variety of tasks. The fact that the latent space can also be visualized because of the world model is a nice perk, and helps alleviate some questions related to whether the method actually utilizes an efficient division of labor.\nWeaknesses: \nThe greatest disappointment with this paper is that it is more a feat of engineering than providing any significant insights into the way in which hierarchical RL can be run. Buried in the related work is the fact that feudal networks (Vezhnevets et. al. 2017), performed many of the same ideas with different design choices for generating the latent space and training the goal-based RL algorithm. While the design choices in this work appear to be superior, it is hard to tell directly from the experiments since there appear to be some gaps in terms of which tasks were tested on. As a result, the core insight of this paper appears to be that under these conditions we can combine feudal networks with dreamer-style model based RL. \nThis work utilizes a double-encoding which would have been interesting to investigate, at least in the appendix. While it makes sense to use a different embedding space for the world-model latent space (which needs all the information for reconstruction), and while the skill encoder finds a space that is a good size for learning, it would have been useful to get more information about some of the tradeoffs and design choices there. 
The ablation in the appendix only shows that it is necessary in some cases, not how the size of the goal autoencoder or other components scale. \nHyperparameters are always a significant question for hierarchical RL methods, and it isn't clear that all the (non-dreamer) hyperparameters are actually described in Appendix F, and more importantly, what the sensitivity is to those hyperparameters. It would also be nice to get the dreamer hyperparameters.\nThe Standard benchmark results (appendix A) is somewhat inconclusive. For one thing, 45M time steps is long compared to modern methods for atari games. As an example, Rainbow and PPO reach 400 reward (higher than the values given by Director and Dreamer), in 2M time steps (https://wandb.ai/tianshou/atari.benchmark/reports/Atari-Benchmark--VmlldzoxOTA1NzA5), which would equate to about 10x faster than the values given. Admittedly, Dreamer also has these issues, but as Breakout is the only atari task where Director has clear performance improvement over Dreamer, this puts into question performance in the whole domain. \nAnt-maze domains often struggle with a sort of low-variance issue, which means that a policy that explores well enough to reach the goal eventually can reach the goal again. This is not usually an issue, but because the atari and other benchmark results are somewhat inconclusive, this brings up the question of whether the performance benefit is just because of exploration (dreamer never reaches the goal to send return back from because it fails to perform good exploration). The Plan2Explore results don't really alleviate this because they never reach the goal either, and comparison with a semantic baseline, or a count-based exploration baseline, would have been interesting. It does not need to outperform, just demonstrate that it did well.\nThe qualitative subgoal results are somewhat weak, because it isn't clear in the researcher is inferring meaning onto the subgoals, or the subgoals are actually chosen such that there is significant division of labor between the low and high level controllers. For the most part, it looks like the manager just chooses one goal, and all of the actual work is carried out by the low level agent. This hypothesis is further supported by the fact that when extrinsic reward is provided to the low level agent, performance often improves. This is not always the cas , but it appears to be the case often enough to call into question whether the work is actually utilizing hierarchy. Ironically, the PIN pad domain, the toy domain that is somewhat underdescribed, is probably the best place to show this, since having subgoals that lie on each of the PIN pads would pretty much demonstrate the result. \nThis work does not compare with other hierarchical baselines, in particular feudal networks which implements largely the same algorithm. Normally, this would be a huge problem, but historically hierarchical methods are a huge pain to implement, so it's somewhat understandable. However, at least an attempt to use feudal networks would have been insightful.\n What fundamental insights did implementing this give towards hierarchical RL? \nHow would semantic hierarchical methods compare against this as an upper bound, or does working from pixels provide an advantage.\nWhat hindered the standard baselines from matching current flat methods?\nHow difficult was this to tune? 
What percentage of the development time was finding a good way to hook up the components?\n This work does not describe the societal impact of an algorithm like this, though that description would be a philosophical exercise in whether better AI is a good thing. They also do not state clearly how the different components of Director could be interchanged, specifically the model-learning component, the model-based RL component, the exploration component and the low level goal reaching component, which would give more insight into the limitations and future work than describing a few edge cases.\nI think the biggest limitations were in having a clear message of what the contribution of this work is, and that it is not clear whether this method is truly hierarchical, or just a complex way to learn flat policies with model-based RL."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"rxjnLkCVb38",
"bdyObEErLWZ",
"PZc4mYYIE_Z",
"PZc4mYYIE_Z",
"PZc4mYYIE_Z",
"rxjnLkCVb38",
"rxjnLkCVb38",
"bdyObEErLWZ",
"nips_2022_wZk69kjy9_d",
"nips_2022_wZk69kjy9_d",
"nips_2022_wZk69kjy9_d"
] |
nips_2022_HFm7AxNa9Wo | Multi-Scale Adaptive Network for Single Image Denoising | Multi-scale architectures have shown effectiveness in a variety of tasks thanks to appealing cross-scale complementarity. However, existing architectures treat different scale features equally without considering the scale-specific characteristics, \textit{i.e.}, the within-scale characteristics are ignored in the architecture design. In this paper, we reveal this missing piece for multi-scale architecture design and accordingly propose a novel Multi-Scale Adaptive Network (MSANet) for single image denoising. Specifically, MSANet simultaneously embraces the within-scale characteristics and the cross-scale complementarity thanks to three novel neural blocks, \textit{i.e.}, the adaptive feature block (AFeB), adaptive multi-scale block (AMB), and adaptive fusion block (AFuB). In brief, AFeB is designed to adaptively preserve image details and filter noises, which is highly expected for features with mixed details and noises. AMB could enlarge the receptive field and aggregate the multi-scale information, which meets the need for contextually informative features. AFuB is devoted to adaptively sampling and transferring the features from one scale to another, which fuses the multi-scale features with varying characteristics from coarse to fine. Extensive experiments on three real and six synthetic noisy image datasets show the superiority of MSANet compared with 12 methods. The code can be accessed from https://github.com/XLearning-SCU/2022-NeurIPS-MSANet. | Accept | All reviewers are positive about this paper. Although this paper does not achieve the best performance, it reveals some insights about the scale characteristics of features, which are model-agnostic and have the potential to guide the design of more powerful networks. Also, the proposed method noticeably reduces FLOPs. | train | [
"LOoD0pQTXa",
"6NmhGfcIDOC",
"tY8czY6bCeH",
"b5UDX1HkJaR",
"A8LBDrs5V6",
"t7Y6OZxmt",
"0EiuKiyC2r6",
"kYnPY4QH2F",
"5kl4ErYU1Kv",
"nuaqdwgEDqZp",
"U_P2om0ZPGg",
"tRFHErSv_Lx",
"4-cP4E9coJW",
"ZTM_3cFG08Y",
"mi7p-eA6DGo",
"JG2Bld8q-90",
"JVAdFYnzNF",
"AKxLioyYjrt",
"AmJAp4s1Use",
"64WafatBSlw",
"uoEYQWc2jOY",
"_PbaoqR0LGz",
"rPrT6V62Qq4"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your positive comments and suggestions. We would improve our manuscript for a clearer presentation in the next version.",
" Thanks for your positive comments and suggestions. We would accordingly revise the problems and include some discussions about the concerns in the next version for a clearer presentation.",
" Thanks for your positive comments and suggestions. We would accordingly revise the problems and include some discussions about the concerns for a clearer presentation in the next version.",
" I would like to thank the authors for their response to my review, which have addressed my concerns. In consideration of the response to me and other reviewers, I think the revealed issue is novel, and the proposed solution is effective, which has the potential of bringing insights and inspirations to multi-scale architecture design. Therefore, I would keep my rating and recommend the paper be accepted.",
" The authors have addressed all my concerns. I increase my score to Accept.",
" Thanks to the authors for the detailed responses. My concerns have been well solved. \nI keep my initial rating to the submission.",
" Dear reviewer H79u,\n\nThanks a lot for reviewing our paper and giving us the valuable suggestions.\n\nWe have tried our best to answer all the questions according to the comments. We sincerely hope that our responses could address all your concerns. Is there anything that needs us to further clarify for the given concerns?\n\nThanks again for your hard work.",
" Dear reviewer 9KYY,\n\nThanks a lot for reviewing our paper and giving us the valuable suggestions.\n\nWe have tried our best to answer all the questions according to the comments. We sincerely hope that our responses could address all your concerns. Is there anything that needs us to further clarify for the given concerns?\n\nThanks again for your hard work.",
" Dear reviewer D7k7,\n\nThanks a lot for reviewing our paper and giving us the valuable suggestions.\n\nWe have tried our best to answer all the questions according to the comments. We sincerely hope that our responses could address all your concerns. Is there anything that needs us to further clarify for the given concerns?\n\nThanks again for your hard work.",
" Dear reviewer 9gQL,\n\nThanks a lot for reviewing our paper and giving us the valuable suggestions.\n\nWe have tried our best to answer all the questions according to the comments. We sincerely hope that our responses could address all your concerns. Is there anything that needs us to further clarify for the given concerns?\n\nThanks again for your hard work.",
" Dear reviewer rLGu,\n\nThanks a lot for reviewing our paper and giving us the valuable suggestions.\n\nWe have tried our best to answer all the questions according to the comments. We sincerely hope that our responses could address all your concerns. Is there anything that needs us to further clarify for the given concerns?\n\nThanks again for your hard work.",
" **Q1: The idea of taking advantage of within-scale characteristics and cross-scale complementarity is not limited to denoising task, but is a general idea of multi-scale architecture design. Therefore, exploring and verifying this idea in more tasks and areas would make this work be more significant.**\n\n**A1:** By analyzing the characteristics of multi-scale features w.r.t. noisy images, we reveal the **within-scale characteristics (WSC)** and naturally verify its effectiveness in denoising. Although our idea is general to other tasks, with limited time and resources, it is unnecessary to extend it to handle other corruptions in a paper because denoising is a severely ill-posed problem and one of the most important low-level vision tasks. We would highlight that there are two parallel research paradigms: i) highlighting the generality of the method w.r.t. different tasks; ii) diving into a given task and accordingly developing a general solution. Clearly, this study belongs to the latter and we believe it could provide sufficient insight to the community.\n\n**Q2: MSANet contains more parameters than most of the baselines due to multiple subnetworks for multi-scale features.**\n\n**A2:** Although MSANet is less attractive in parameters, its FLOPs are obviously lower than most baselines even with more parameters. Moreover, we would remind that one should pay more attention to the novelty and insight of this work to the community, i.e., the WSC of features are varying instead of fixed with the scales, which is our motivation for multi-scale architecture design and such a property is not reported so far as we known.\n\n**Q3: In the paper, “adaptive” is frequently used to describe the proposed network as well as the three neural blocks, but itself is not explicitly discussed.**\n\n**A3:** In the network level, the “adaptive” mainly refers to the capability of extracting scale-specific features and fuse them based on their characteristics, i.e., exploiting the WSC and the **cross-scale complementarity (CSC)**. To achieve the “adaptive”, the network should consider the characteristics of multi-scale features, and properly design and use the modules to adapt the scales’ characteristics. In the module level, the “adaptive” mainly refers to the capability of sampling and weighting the features based on themselves. With “adaptive”, the modules have the ability to automatically discriminate the indispensable input features from those unpleasant ones, and determine their contributions to output features. AFeB and AFuB employ deformable convolution while AMB uses dilated convolution and channel-spatial attention to achieve “adaptive”. Their differences of design are from the characteristics of multi-scale features they adapt to.\n\n**Q4: AFeB is designed to adaptively sampling and weighting the input features, which is highly expected for fine-grained features. However, why adaptively sampling and weighting the fine-grained features is good for the denoising task to preserve the image details and filter unpleasant noise?**\n\n**A4:** AFeB could learn the sampling locations to indicate where are important for recovery, while assigning different weights to show how important the locations are, based on the input features. As a result, AFeB could preserve the image details and filter unpleasant noise from the input features for better recovery performance.\n\n**Q5: As Table 6 suggests, the performance gains of using either AFeB or AMS alone are slight. 
However, why does using them together bring significant performance improvements?**\n\n**A5:** As one WSC of high-resolution features is the mixture of details and noises, AFeB is designed to exploit this characteristic by adaptively preserving the indispensable details and filtering unpleasant noises. As one WSC of low-resolution features is rich contextual information, while an overly low resolution destroys the image contents, AMB is designed to enrich the contextual information while keeping the resolution unchanged. Therefore, using them together could fully exploit the WSC of multi-scale features, and thus achieve better performance.",
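As a hedged illustration of how a block can enlarge the receptive field while keeping the resolution unchanged, as A5 describes for AMB, here is a simplified PyTorch sketch using parallel dilated convolutions; it is an assumption-based stand-in, not the paper's exact AMB.

```python
import torch
import torch.nn as nn

class MultiDilationBlock(nn.Module):
    """Aggregate parallel dilated 3x3 convs; spatial size is preserved."""
    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in dilations
        )
        self.fuse = nn.Conv2d(channels * len(dilations), channels, 1)

    def forward(self, x):
        feats = torch.cat([branch(x) for branch in self.branches], dim=1)
        return x + self.fuse(feats)  # residual; resolution unchanged

y = MultiDilationBlock(16)(torch.randn(1, 16, 32, 32))  # shape preserved
```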
" **Q1: Since the cross-scale complementarity and the within-scale characteristics are not only restricted to single image denoising task. Therefore, the designs will be more persuasive if they could also achieve improvements in other image restoration tasks.**\n\n**A1:** We would clarify that **denoising is one of the most important restoration tasks and a severely ill-posed problem.** By analyzing the characteristics of multi-scale features w.r.t. noisy images, we reveal the **within-scale characteristics (WSC)** and naturally verify its effectiveness in denoising. Although the idea is general, we do not think it is necessary to verify its effectiveness to other applications in a conference paper. In fact, we would highlight that there are two equally important research paradigms: i) highlighting the generality of the method w.r.t. different tasks; ii) diving into a given task and accordingly developing a general solution. Clearly, this study belongs to the latter and we believe it could provide sufficient insight to the community. \n\n**Q2: Compared with the three real and the three synthetic noisy image datasets, the performance improvements over the baselines on the three synthetic grayscale noisy image datasets are less significant.**\n\n**A2:** First, the focus of the community has gradually shifted to real-world noise, on which our method achieves considerable improvements (See Table 1-3 in manuscript). Second, even on the synthetic noise, it is inaccurate to say the improvement is limited. By referring to the performance gaps between these SOTAs (SADNet, RNAN, DeamNet), one could see that our improvements are reasonable, e.g., the first row in Table 4, the PSNR/SSIM gap between the best two baselines is 0.01dB/0.0005, while the improvement of our method over the best baseline is 0.06dB/0.0014. Similar observations could also be obtained in other rows.\n\n**Q3: Although different-scale features are processed by different subnetworks, and the subnetworks in the finest and coarsest scale are clearly scale-specific. However, it’s difficult to understand the scale-specific designs in the two middle subnetworks.**\n\n**A3:** As the characteristics of multi-scale features gradually changes from high- to low-resolution, for simplicity, we take the two bottom and two top resolutions in the Fig.2 of the paper as the high- and low-resolution, respectively. For high-resolution features, AFeB and AMB are alternately used to exploit their WSC. For low-resolution features, AMB is used to exploit their WSC. Meanwhile, except for the lowest resolution, the first and the last blocks in each subnetwork are AFeB for adaptively selecting the input and output features of each subnetwork. Following the above architecture principles, the two middle subnetworks are designed as shown in the Fig.2.\n\n**Q4: As shown in Table 6, using AFeB and AMS together could significantly improve the performance. However, using either AFeB or AMB alone slightly gains the performance over ResB. Why is that? Some clear explanations are needed for a better understanding.**\n\n**A4:** AFeB and AMB together exploit the WSC. As one WSC of high-resolution features is the mixture of details and noises, AFeB is designed to exploit this characteristic for adaptively preserving the indispensable details and filtering unpleasant noises. 
As one WSC of low-resolution features is rich contextual information, while an overly low resolution destroys the image contents, AMB is designed to enrich the contextual information while keeping the resolution unchanged. Therefore, suboptimal results will be obtained if either AFeB or AMB is used alone, i.e., the WSC of multi-scale features is partially neglected.\n\n**Q5: As shown in Table 7, although MSANet contains more parameters, the running time and FLOPs are not high. Why does this happen? Please give some explanations.**\n\n**A5:** Modern multi-scale architectures usually consist of multiple stages. At the end of each stage, the feature resolution is halved while the feature channels are doubled. As a result, the parameters increase due to the doubled channels, while the FLOPs and running time decrease due to the halved resolution (height & width). \n\n**Q6: Some typos, e.g., “disorderly” -> “disordered” in the paragraph of Adaptive Fusion Block (AFuB).**\n\n**A6:** We will revise the typos and carefully reinspect the writing in the next version.",
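A back-of-the-envelope check of the halved-resolution/doubled-channels argument in A5: per-layer multiply-adds stay roughly constant across stages while the weights quadruple (assuming plain 3x3 convolutions; the exact MSANet layers may differ).

```python
def conv_cost(h, w, c_in, c_out, k=3):
    params = k * k * c_in * c_out
    flops = h * w * params  # one multiply-add per weight per output position
    return params, flops

print(conv_cost(256, 256, 64, 64))    # (36864, 2415919104)
print(conv_cost(128, 128, 128, 128))  # (147456, 2415919104): 4x params, same FLOPs
```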
" **Q8: What worries me is that the methods of CVPR'22 have been released, so why are there no relevant experiments to compare these methods?**\n\n**A8:** We would remind the reviewer that **the papers of CVPR'22 were released in June, while the Paper Submission Deadline of NeurIPS'22 is in May**. Besides, we mainly select the baselines that are comparable to our methods in parameters and FLOPs. Taking an image of 256$\\times$256 as example, the parameters/FLOPs of Restormer[1], Uformer[2], DGUNet[3] respectively are 26M/282G, 51M/179G, 17M/1729G, which are far more than our MSANet which is 8M/71G.\n\n[1] Zamir, Syed Waqas, et al. Restormer: Efficient transformer for high-resolution image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.\n\n[2] Wang, Zhendong, Xiaodong Cun, Jianmin Bao, Wengang Zhou, Jianzhuang Liu, and Houqiang Li. Uformer: A general u-shaped transformer for image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.\n\n[3] Mou, Chong, Qian Wang, and Jian Zhang. Deep Generalized Unfolding Networks for Image Restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.\n\n**Q10: My problem is mainly how can the authors prove that the existing approach does not make use of the information within the scale?**\n\n**A10:** We did not claim that existing methods are “not exploring the internal information of each scale in the network” and hence accordingly we do not need to prove that ``the existing approach does not make use of the information within the scale’’. In this paper, the revealed drawback of multi-scale networks is that they ignore the within-scale characteristics (WSC) in architecture design, i.e., different scale features show varying characteristics but existing multi-scale networks use homologous architectures to deal with them. To the best of our knowledge, such a drawback is revealed for the first time, and if the reviewer sees the same works, kindly let us to know please.\n\n**Q11: The authors' proposed approach does not seem to be innovative.**\n\n**A11:** Novelty is the quality of being new, or following from that, of being striking, original or unusual. In this paper, we reveal a new problem in modern multi-scale architecture designs, and accordingly proposes new design principles, adaptive modules and network. Besides, the effectiveness has also been verified in experiments. I don't understand why the reviewer thinks they are not innovative. Reviewer should provide materials to support his statements, such as the previous works that are same to our work in observations/motivations, design principles, adaptive modules as well as network (see our questions in A6). ",
" **Q1: First of all, the title does not show the author's intention, which is a bad case (No expressed motivation).**\n\n**A1:** The authors think the title is good, since it clearly conveys “the task” and “the solution” of this paper. Moreover, the “Adaptive” encapsulates the motivations and the characteristic of our solution.\n\n**Q2: The abstract description starts by depicting a big problem, namely the general weakness of multi-scale networks, but focuses on a simple application.**\n\n**A2:** First, we sincerely thank for the recognition on one major contribution of this work, i.e., pointed out a general weakness of multi-scale networks. Second, we would remind that **denoising is one of the most important restoration tasks and not a simple application.** By analyzing the characteristics of multi-scale features w.r.t. noisy images, we reveal the **within-scale characteristics (WSC)** and naturally verify its effectiveness in denoising. Although the idea is general, we do not think it is necessary to verify its effectiveness to other applications in a conference paper. In fact, we would remind that there are two equally important research paradigms: i) highlighting the generality of the method w.r.t. different tasks; ii) diving into a given task and accordingly developing a general solution. Clearly, this study belongs to the latter and we believe it could provide sufficient insight to the community. \n\n**Q3: The introduction and related works ignore Transformer-based models.**\n\n**Q5: Do not insert ‘so on’ to follow ‘such as’.**\n\n**Q9: relu---> ReLU, pytorch--->PyTorch.**\n\n**A3, A5 and A9:** We will revise the problems and carefully reinspect the writing in the next version.\n\n**Q4: Figure 1 does not convey the author's intention, which is confusing. Why not use the feature map presentation to illustrate that there is a problem with information extraction within the scale?**\n\n**A4:** For a more concise and clearer illustration on our motivation, Fig.1 takes a more abstract fashion which is more like a NeurIPS paper. Compared to the raw multi-scale feature maps, such a summarized and abstracted illustration is more conducive to readers to understand. In the new Supplementary Material, we show the qualitative results of intermediate features before (i.e., without) and after (i.e., with) all MSANet subnetworks to demonstrate the significance of exploiting WSC.\n\n**Q6: The biggest problem is that the motivation is not valid and I don't see cases to illustrate the problems with multi-scale networks nowadays.**\n\n**A6:** We believe that the quantitative and qualitative results have sufficiently verified the effectiveness of the motivation and our design criterion. To further improve the work, the new supplementary material includes the qualitative results of intermediate features before (i.e., without) and after (i.e., with) our subnetworks, and we believe it could be another evidence to qualitatively demonstrate the significance of exploiting WSC.\n\nMoreover, we sincerely hope the reviewer could take the following questions into consideration and answer them if possible, i) are there any publications revealed the same phenomenon / weakness of multi-scale network? i.e., different scale features show varying characteristics and should be processed by scale-specific structures rather than homologous architectures. ii) do our ablation studies and related experimental results not verify the effectiveness of our solution? 
(See A3 to Reviewer rLGu for a more detailed explanation); iii) does this work not provide novel insight to the community?\n\n**Q7: In addition, the solution is essentially adding convolutional layers (adaptive) between scales, does this solve the problem?**\n\n**A7:** It does solve the problem. Consider deep neural networks: although what they essentially do is linear and nonlinear transformations, there are many brilliant designs such as Transformers, CNNs, RNNs, and LSTMs. Namely, what matters most is not which operations are used, but how they are used and which problems they solve, and identifying problems is the most important part of scientific research. Clearly, this paper first reveals a problem in modern multi-scale architecture design, then proposes novel designs to solve this problem, and finally evaluates their effectiveness on denoising, which is one of the most important restoration tasks and a severely ill-posed problem.",
" **Q1: The proposed MSANet is not competitive in parameter numbers as there are different subnetworks corresponding to different scale features. This limits the use of the model on memory-constrained devices.**\n\n**A1:** Like other multi-scale networks, MSANet is less attractive in parameters, but its FLOPs are obviously lower than most baselines even with more parameters. Moreover, we would remind that one should pay more attention to the novelty and insight of this work to the community, i.e., the **within-scale characteristics (WSC)** of features are varying instead of fixed with the scales, which is our motivation for multi-scale architecture design and such a property is not reported so far as we known.\n\n**Q2: Using AFeB and AMS together could significantly improve the performance, while using either AFeB or AMB alone slightly improve the performance, i.e., they are bound together for using. This limits the separate use of the two blocks.**\n\n**Q4: Why using AFeB and AMS together could significantly improve the performance while only using one of them slightly improve the performance?**\n\n**A2 and A4:** AFeB and AMB together exploit the WSC. To exploit one WSC of high-resolution features, i.e., the mixture of details and noises, AFeB is designed for adaptively preserving the indispensable details and filtering unpleasant noises. Meanwhile, to improve another WSC of high-resolution features, i.e., the limited contextual information, AMB is used to enrich the contextual information and provide contextually informative features. For the low-resolution features, although contextual information is rich, it will destroy the image contents due to the limited resolution. To solve this problem, AMB is designed to enrich the contextual information while keeping the resolution unchanged. Therefore, suboptimal results will be obtained if using either AFeB or AMB alone, i.e., the WSC of multi-scale features is partially neglected.\n\n**Q3: Both AFeB and AFuB are based on deformable convolution to adaptively sample and weight the features, but one is used to adaptively select details and filter noises, the other is used to fuse the multi-scale features with varying characteristics. What caused this crucial difference? Why adaptively sampling and weighting the feature is important for exploiting the within-scale characteristics and the cross-scale complementarity?**\n\n**A3: First**, the roles of them played in network, and the information they need to achieve their missions. In brief, AFeB aims to exploit the WSC by preserving the image details and filtering unpleasant noises from their mixture. Therefore, AFeB needs the information from the features to distinguish the details and noises. AFuB aims to exploit the **cross-scale complementarity (CSC)** of multi-scale features with varying characteristics by transferring the fine-grained image details into the coarse-grained image contexts. Thus, AFuB simultaneously needs the contexts and details information to match the contexts with the corresponding details. **Second**, adaptively sampling and weighting the features could endow the modules with the capability of learning the sampling locations from features to indicate where are important for recovery, while assigning different weights based on features to show how important the locations are. 
As a result, AFeB could preserve the image details and filter unpleasant noises, and AFuB could transfer the fine-grained image details into the coarse-grained image contexts for better recovery performance.\n\n**Q5: More baselines should be investigated w.r.t. the model complexity in Table 7.**\n\n**A5:** Due to the time limitation, we will compare with more baselines in the next version.",
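The "sampling locations plus per-location weights" mechanism described in A3 can be sketched with modulated deformable convolution, as below; this assumes torchvision's `deform_conv2d`, and the layer names and sizes are illustrative, not the actual AFeB.

```python
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d

class AdaptiveFeatureBlockSketch(nn.Module):
    def __init__(self, channels, k=3):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(channels, channels, k, k) * 0.01)
        # Offsets predict *where* to sample; masks predict *how much* each
        # sampled location contributes.
        self.offset = nn.Conv2d(channels, 2 * k * k, k, padding=k // 2)
        self.mask = nn.Conv2d(channels, k * k, k, padding=k // 2)

    def forward(self, x):
        offset = self.offset(x)
        mask = torch.sigmoid(self.mask(x))
        return x + deform_conv2d(x, offset, self.weight, padding=1, mask=mask)

y = AdaptiveFeatureBlockSketch(8)(torch.randn(1, 8, 16, 16))
```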
" **Q4: In Fig.1, the authors claim that the robustness of high-level and low-level scales is different. What is the definition of robustness there? Is there any experiment or reference to support this statement?**\n\n**A4:** It refers to the robustness against noises. Specifically, compared with the high-resolution features, the low-resolution features contain less noises from the noisy input. To support this statement, we qualitatively show the features of different scales and visually illustrate them as Fig.1 in the new Supplementary Material.\n\n**Q5.2: The performance improvements may come from the additional parameters rather than the designed architecture.**\n\n**A5.2:** We would remind that, on the one hand, our ablation study has well investigated the effects of the proposed modules and the designed architecture with comparable parameters. On the other hand, some baselines such as CLEARER, RNAN take comparable even more parameters while their performance is obviously worse than our method. Besides, the FLOPs of our method are obviously lower than most baselines even with more parameters.\n\n**Q6: How do we combine the features weighted by channel attention and the features weighted by spatial attention in AMB?**\n\n**A6:** We perform the channel attention at first, and then perform the spatial attention.",
" **Q1.1: The relationship between the motivation of exploring the within-scale characteristics and the design of the three proposed modules is farfetched.**\n\n**Q5.1: How does the motivation of exploring within-scale characteristics cause the design of the network architecture? The proposed three modules seem to be only simple combinations of residual blocks and deformable/dilated convolution.**\n\n**A1.1 and A5.1:** We would clarify that AFeB and AMB blocks are designed to exploit the **within-scale characteristics (WSC)**, and AFuB is designed to exploit the **cross-scale complementarity (CSC)**. In other words, not all these three blocks are developed for WSC. To exploit one WSC of high-resolution features, i.e., the mixture of details and noises, AFeB is designed for adaptively preserving the indispensable details and filtering unpleasant noises. Meanwhile, to improve another WSC of high-resolution features, i.e., the limited contextual information, AMB is used to enrich the contextual information and provide contextually informative features. For the low-resolution features, although contextual information is rich, it will destroy the image contents due to the limited resolution. To solve this problem, AMB is designed to enrich the contextual information while keeping the resolution unchanged.\n\nHence, one could see that the proposed three modules are not SIMPLE combinations of residual blocks and deformable/dilated convolution. Instead, they are designed to achieve and implement our idea, i.e., different scale features show varying characteristics and should be processed by scale-specific structures rather than homologous architectures. Clearly, such an architecture design fashion is highly expected thanks to such a highly interpretable criterion.\n\n**Q1.2: There is still a lack of related experiments or visualization results that prove that the presented modules can utilize the within-scale characteristics well.**\n\n**A1.2:** We assume that the ablation studies (i.e., “AFeB+AMB”) and qualitative results could help to address this concern. In addition, in the new Supplementary Material, we add the qualitative results of intermediate features before and after all MSANet subnetworks at different scales to demonstrate that our modules can utilize the within-scale characteristics well.\n\n**Q2: Comparison with the latest SOTAs such as MPRNet, Uformer, and DGUNet is missing. The performances reported in this paper seem to be much worse than these previous denoising methods.**\n\n**A2:** We would highlight that one major contribution of this work is provide a novel insight to the community, i.e., different scale features show varying characteristics and should be processed by scale-specific structures rather than homologous architectures. This insight is model-agnostic which has the potential to design more powerful networks including Transformer and beyond. Due to limited time and resources, we have to conduct this work in the future. For the selection of baselines, we use the baselines that are comparable to our methods in parameters and FLOPs. Taking an image of 256$\\times$256 as example, the parameters/FLOPs of MPRNet, Uformer, DGUNet respectively are 16M/1148G, 51M/179G, 17M/1729G, which are far more than our MSANet which is 8M/71G. Therefore, it is reasonable that they achieve better performance. 
In short, and once again, we believe that such novel insights are more valuable (or at least equally important) to the community, and we will add related discussions in the next version.\n\n**Q3: The ablation study in Section 4.4 is limited to demonstrating the different characteristics of each module. Adding the proposed modules seems only to gain some improvements over the baseline network, but the in-depth analysis is missing. I cannot get any intuitive conclusions from this part.**\n\n**A3:** The ablation study is conducted to demonstrate the effectiveness of utilizing WSC and CSC. Specifically, “ED” and “ResB” use homologous architectures, which employ identity mappings and residual blocks, respectively, to build the subnetworks for different scale features. As mentioned in **A1**, AFeB and AMB together are used to exploit WSC, and AFuB is used to achieve CSC. As using AFeB or AMB alone cannot exploit WSC well, “AFeB”/“AMB” and “AFeB+AFuB”/“AMB+AFuB” only slightly improve the performance over “ResB” and “AFuB”, respectively. When using AFeB and AMB together, “AFeB+AMB” and MSANet (i.e., “AFeB+AMB+AFuB”) significantly improve the performance over “ResB” and “AFuB”, respectively, verifying our claim on the role of AFeB+AMB w.r.t. WSC. Furthermore, thanks to the CSC from AFuB, “AFuB” and MSANet (“AFeB+AMB+AFuB”) are significantly better than “ResB” and “AFeB+AMB”, respectively. Therefore, the ablation study not only demonstrates the significance of properly utilizing WSC and CSC, but also shows the effectiveness of the proposed solution.",
" This paper proposes Multi-Scale Adaptive Network (MSANet) for single image denoising with three blocks: Adaptive Feature Block (AFeB), Adaptive Multi-scale Block (AMB) and Adaptive Fusion Block (AFuB). The network architecture design explores the within-scale and cross-scale characteristics of multi-scale networks. Extensive experiments demonstrate that these presented modules gain better performance than the baseline network and some other denoising methods on several real or synthetic datasets.\n Strengths:\n\n+This paper is well-written, and the proposed architecture is straightforward to follow.\n\n+The summary of the difference between low-resolution and high-resolution scales in Fig.1 is valuable, and the motivation to explore the within-scale characteristics in architecture design is reasonable.\n\nWeaknesses:\n\n-The relationship between the motivation of exploring the within-scale characteristics and the design of the three proposed modules is farfetched. There is still a lack of related experiments or visualization results that prove that the presented modules can utilize the within-scale characteristics well.\n\n-Comparison with the latest SOTAs such as MPRNet [1], Uformer [2], and DGUNet [3] is missing. The performances reported in this paper seem to be much worse than these previous denoising methods.\n\n-The ablation study in Section 4.4 is limited to demonstrating the different characteristics of each module. Adding the proposed modules seems only to gain some improvements over the baseline network, but the in-depth analysis is missing. I can not get any intuitive conclusions from this part.\n\n[1] Zamir, Syed Waqas, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Ming-Hsuan Yang, and Ling Shao. Multi-stage progressive image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2021)\n\n[2] Wang, Zhendong, Xiaodong Cun, Jianmin Bao, Wengang Zhou, Jianzhuang Liu, and Houqiang Li. Uformer: A general u-shaped transformer for image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2022)\n\n[3] Mou, Chong, Qian Wang, and Jian Zhang. Deep Generalized Unfolding Networks for Image Restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2022)\n 1) In Fig.1, the authors claim that the robustness of high-level and low-level scales is different. What is the definition of robustness there? Is there any experiment or reference to support this statement?\n\n2) How does the motivation of exploring within-scale characteristics cause the design of the network architecture? The proposed three modules seem to be only simple combinations of residual blocks and deformable/dilated convolution. The performance improvements may come from the additional parameters rather than the designed architecture.\n\n3) How do we combine the features weighted by channel attention and the features weighted by spatial attention in AMB? The detailed process seems to be omitted.\n None\n",
" The paper investigates existing multi-scale methods, and discovers the within-scale characteristics of multi-scale features are ignored. Therefore, the paper reveals this missing piece for multi-scale architecture design, and accordingly proposes a novel Multi-Scale Adaptive Network (MSANet) by simultaneously exploiting the within-scale characteristics and the cross-scale complementarity. MSANet uses AFeB, AMB to build different subnetworks corresponding to different scales for exploiting the within-scale characteristics, and AFuB to fuse the multi-scale features with varying characteristics from coarse to fine for exploiting the cross-scale complementarity. Extensive experiments on both three real and six synthetic noisy image datasets show the effectiveness of the proposed designs and the advantages over the previous methods. Moreover, supplementary material shows more results together with the main paper. Strengths\n\n(1) Simultaneously taking the within-scale characteristics and the cross-scale complementarity into multi-scale architecture design increases the performance of single image denoising task. The reviewer agrees that the within-scale characteristics is significative for multi-scale architecture and single image denoising.\n\n(2) The motivations of the proposed designs, i.e., MSANet and the three neural blocks, are clear and convincing. The experimental results show the effectiveness and robustness of the proposed designs.\n\n(3) The authors conduct adequate experiments on both synthetic and real noise image datasets, and investigate the effectiveness of the proposed three blocks as well as the model complexity. Overall, the experiments are complete.\n\nWeaknesses\n\n(1) The proposed MSANet is not competitive in parameter numbers as there are different subnetworks corresponding to different scale features. This limits the use of the model on memory-constrained devices. In contrast, the running time and FLOPs are competitive due to multi-resolution features.\n\n(2) Using AFeB and AMS together could significantly improve the performance, while using either AFeB or AMB alone slightly improve the performance, i.e., they are bound together for using. This limits the separate use of the two blocks. (1) Both AFeB and AFuB are based on deformable convolution to adaptively sample and weight the features, but one is used to adaptively select details and filter noises, the other is used to fuse the multi-scale features with varying characteristics. What caused this crucial difference? Why adaptively sampling and weighting the feature is important for exploiting the within-scale characteristics and the cross-scale complementarity?\n\n(2) Why using AFeB and AMS together could significantly improve the performance while only using one of them slightly improve the performance?\n\n(3) In Table 7, only three baselines are investigated w.r.t. the model complexity, which is inadequate. More baselines should be investigated and compared. yes",
" Overall, this paper aims to tackle the task of image denoising based on the drawback (not exploring the internal information of each scale in the network) of multi-scale networks. Specifically, the authors propose three modules to extract feature maps, multi-scale construction and feature aggregation to run a noisy image. Extensive experimental results are shown in the paper. Overall, the architecture of the paper is clear and the writing is good, but the theoretical and methodological parts are not enough to meet me.\n\nStrengths\n\n[+] The authors seem to propose an effective image denoising solution, described in the motivation, related work, methodology and experiments.\n\nWeaknesses\n\n[-] First of all, the title does not show the author's intention, which is a bad case (No expressed motivation).\n\n[-] The abstract description starts by depicting a big problem, namely the general weakness of multi-scale networks, but focuses on a simple application.\n\n[-] The introduction and related works ignore Transformer-based models.\n\n[-] Figure 1 does not convey the author's intention, which is confusing. Why not use the feature map presentation to illustrate that there is a problem with information extraction within the scale?\n\n[-] Do not insert ‘so on’ to follow ‘such as’.\n\n[-] The biggest problem is that the motivation is not valid and I don't see cases to illustrate the problems with multi-scale networks nowadays.\n\n[-] In addition, the solution is essentially adding convolutional layers (adaptive) between scales, does this solve the problem?\n\n[-] What worries me is that the methods of CVPR'22 have been released, so why are there no relevant experiments to compare these methods?\n\n[-] relu---> ReLU, pytorch--->PyTorch. [-] My problem is mainly how can the authors prove that the existing approach does not make use of the information within the scale?\n\n[-] The authors' proposed approach does not seem to be innovative. Limitations have been illustrated.",
" To exploit the potential of the within-scale characteristics and cross-scale complementarity of multi-scale features, three elaborate neural blocks and a novel Multi-Scale Adaptive Network (MSANet) are proposed for single image denoising. Specifically, AFeB and AMB are designed by taking the within-scale characteristics of multi-scale features into consideration, AFuB is designed to exploit the cross-scale complementarity of multi-scale features. Three neural blocks are combined to be the scale-wise subnetwork in MSANet by adapting to the feature characteristics of the corresponding scale. Ablation studies demonstrate the effectiveness of the proposed neural blocks and network. Sufficient comparison experiments on three real and six synthetic noisy image datasets compared with 12 baselines show the advantages of the proposed method. A. Strengths\n1. The idea of simultaneously exploiting the within-scale characteristics and cross-scale complementarity of multi-scale features is interesting and reasonable. The accordingly proposed MSANet incorporates them into multi-scale architecture design, and shows impressive performance on single image denoising task.\n2. The design of AFuB is novel and ingenious. Taking the disordered fine-grained image details in high-resolution features and the coarse-grained image context in low-resolution features as input to adaptively sample, weight and transfer the fine-grained image details into the coarse-grained image context.\n3. Sufficient experiments are conducted to demonstrate the effectiveness of the proposed designs and the advantages over the previous works. Moreover, the supplementary material also shows more details and results together with the main paper.\n\nB. Weaknesses\n1. Since the cross-scale complementarity and the within-scale characteristics are not only restricted to single image denoising task. Therefore, the designs will be more persuasive if they could also achieve improvements in other image restoration tasks.\n2. Compared with the three real and the three synthetic color noisy image datasets, the performance improvements over the baselines on the three synthetic grayscale noisy image datasets are less significant.\n 1. Although different-scale features are processed by different subnetworks, and the subnetworks in the finest and coarsest scale are clearly scale-specific. However, it’s difficult to understand the scale-specific designs in the two middle subnetworks. More clear interpretations are needed for a better understanding.\n2. As shown in Table 6, using AFeB and AMS together could significantly improve the performance. However, using either AFeB or AMB alone slightly gains the performance over ResB. Why is that? Some clear explanations are needed for a better understanding.\n3. As shown in Table 7, although MSANet contains more parameters, the running time and the FLOPs are not much. Why does this happen? Please give some explanations.\n4. Some typos, e.g., “disorderly” -> “disordered” in the paragraph of Adaptive Fusion Block (AFuB).\n Yes",
" Different from existing works treat different scale features equally, in this paper, the authors reveal the different scale features show varying characteristics and should be processed by scale-specific rather than homologous structures. To simultaneously embrace the within-scale characteristics and cross-scale complementarity of multi-scale features, the authors propose a novel Multi-Scale Adaptive Network (MSANet) for single image denoising. Specifically, three neural blocks are designed, i.e., adaptive feature block (AFeB) for adaptively sampling and filtering features; adaptive multi-scale block (AMB) for expanding the receptive field and adaptively aggregate multi-scale features; adaptive fusion block (AFuB) for adaptively fusing the multi-scale features with varying characteristics. Ablation studies indicate the blocks are effective, and comparison experiments show MSANet achieves better performance than alternatives.\n Strengths:\n1. The authors reveal the different scale features show varying characteristics and should be processed by scale-specific structures for this first time. Based on the observation, MSANet is proposed to utilize the within-scale characteristics and cross-scale complementarity of multi-scale features. Extensive experiments are conducted on both three real and six synthetic noisy image datasets to show its superiority.\n2. Three neural blocks are designed by considering the within-scale characteristics and cross-scale complementarity of multi-scale features. Ablation studies indicate these blocks are effective.\n3. In general, the paper is well written. The related works are clearly illustrated, the shortcomings of existing multi-scale networks are analyzed, and accordingly MSANet is proposed by solving the shortcomings. The framework is clear and easy to understand, and the experimental results are explained in detail.\n\nWeaknesses:\n1. The idea of taking advantage of within-scale characteristics and cross-scale complementarity is not limited to denoising task, but is a general idea of multi-scale architecture design. Therefore, exploring and verifying this idea in more tasks and areas would make this work be more significant.\n2. MSANet contains more parameters than most of the baselines due to multiple subnetworks for multi-scale features.\n\n 1. In the paper, “adaptive” is frequently used to describe the proposed network as well as the three neural blocks, but itself is not explicitly discussed. Some discussions are recommended for a better understanding to this work.\n2. AFeB is designed to adaptively sampling and weighting the input features, which is highly expected for fine-grained features. However, why adaptively sampling and weighting the fine-grained features is good for the denoising task to preserve the image details and filter unpleasant noise?\n3. As Table 6 suggests, the performance gains of using either AFeB or AMS alone are slight. However, why using them together could bring significant performance improvements? Please give some interpretations.\n Yes, the authors have discussed the related problems."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
4,
4,
4
] | [
"b5UDX1HkJaR",
"t7Y6OZxmt",
"A8LBDrs5V6",
"rPrT6V62Qq4",
"U_P2om0ZPGg",
"kYnPY4QH2F",
"rPrT6V62Qq4",
"_PbaoqR0LGz",
"uoEYQWc2jOY",
"64WafatBSlw",
"AmJAp4s1Use",
"rPrT6V62Qq4",
"_PbaoqR0LGz",
"mi7p-eA6DGo",
"uoEYQWc2jOY",
"64WafatBSlw",
"AKxLioyYjrt",
"AmJAp4s1Use",
"nips_2022_HFm7AxNa9Wo",
"nips_2022_HFm7AxNa9Wo",
"nips_2022_HFm7AxNa9Wo",
"nips_2022_HFm7AxNa9Wo",
"nips_2022_HFm7AxNa9Wo"
] |
nips_2022_NyAJzgHLAr | Intermediate Prototype Mining Transformer for Few-Shot Semantic Segmentation | Few-shot semantic segmentation aims to segment the target objects in query under the condition of a few annotated support images. Most previous works strive to mine more effective category information from the support to match with the corresponding objects in query. However, they all ignored the category information gap between query and support images. If the objects in them show large intra-class diversity, forcibly migrating the category information from the support to the query is ineffective. To solve this problem, we are the first to introduce an intermediate prototype for mining both deterministic category information from the support and adaptive category knowledge from the query. Specifically, we design an Intermediate Prototype Mining Transformer (IPMT) to learn the prototype in an iterative way. In each IPMT layer, we propagate the object information in both support and query features to the prototype and then use it to activate the query feature map. By conducting this process iteratively, both the intermediate prototype and the query feature can be progressively improved. At last, the final query feature is used to yield precise segmentation prediction. Extensive experiments on both PASCAL-5i and COCO-20i datasets clearly verify the effectiveness of our IPMT and show that it outperforms previous state-of-the-art methods by a large margin. Code is available at https://github.com/LIUYUANWEI98/IPMT | Accept | All reviewers lean to accept this paper and this is a clear acceptance. | train | [
"l9iNyM_akA-",
"xQ27gqnj75P",
"LoeViNXYP06",
"L2C4ZmUIlOT",
"5-XAIYd1OBnD",
"NjTjDg5bXza",
"AhIwdHidZC3",
"oeJxNPIPYq"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you so much for your continued interest and positive responses to our work. Here are further responses to your concerns.\n\n**Q1. Threshold of diverse samples**\n\nIn the previous rebuttal, we set 1.5 as a threshold to define diverse support and 8.2\\% samples are categorized as \"diverse support\". To further address the reviewer concerns about diverse samples, we provide more average results of all categories about $D_{qs}^{div}$, $D_{qi}^{div}$ and $D_{is}^{div}$ under different thresholds and count the number of categories ($N_q^{div}$) where $D_{qi}^{div}$ is smaller than $D_{is}^{div}$ in the below table. In addition, the mean proportion of diverse samples among all categories is also reported ($Rate$). All the experiments are conducted on PASCAL datasets under 1-shot setting.\n\n| Threshold | 1.1 | 1.2 | 1.3 | 1.4 | 1.5 |\n|:---------------:|:-------:|:-------:|:-------:|:-------:|:-------:|\n| $D_{qs}^{div}$ | 11.123 | 11.775 | 12.491 | 13.292 | 14.320 |\n| $D_{qi}^{div}$ | 9.050 | 9.319 | 9.675 | 10.015 | 10.331 |\n| $D_{is}^{div}$ | 6.374 | 6.854 | 7.361 | 7.967 | 8.770 |\n| $Rate$ | 34.01 | 24.85 | 17.775 | 12.085 | 8.2 |\n| $N_q^{div}$ | 0 | 0 | 3 | 3 | 6 |\n\nFrom the table, we can see that, with the increment of the threshold from ‘1’ to ‘1.5’, fewer support samples are categorized as \"diverse support\" according to the $Rate$. However, on the contrary, the number of categories where $D_{qi}^{div}<D_{is}^{div}$ is increasing from 0 to 6 with the threshold increasing. This again proves that our method has significant advantages in dealing with the intra-class diversity problem, especially in diverse support.\n\n**Q2. Distance for different values of $\\alpha$**\n\nTo further illustrate the influence of $\\alpha$ for learning of the intermediate prototype, we compare the mean distance $D_{qs}$, $D_{qi}$ and $D_{is}$ of all categories under different $\\alpha$ values (i.e. 0.3, 0.5, 0.7). The results are reported in the below table on PASCAL datasets under 1-shot setting.\n\n| $\\alpha$ | 0.3 | 0.5 | 0.7 |\n|:---:|:---:|:---:|:---:|\n| $D_{qs}$ | 7.928 | 7.164 | 7.582 |\n| $D_{qi}$ | 6.968 | 6.502 | 5.262 |\n| $D_{is}$ | 4.333 | 5.721 | 6.538 |\n| $D_{qi}$ - $D_{is}$ | 2.635 | 0.781 | -1.275 |\n| mIoU | 66.8 | 65.3 | 64.2 |\n\nIn table, we can see that $D_{qi}$ is much larger than $D_{is}$ with the best mIoU score of 66.8 under $\\alpha=0.3$. The difference between $D_{qi}$ and $D_{is}$ is also larger under $\\alpha=0.3$, which means the intermediate prototype is much closer to support rather than that to query. When $\\alpha$ increases to $0.5$, $D_{qi}$ is similar to $D_{is}$ and the difference between $D_{qi}$ and $D_{is}$ is narrowed to $0.782$ with 65.3 mIoU under $\\alpha=0.5$. This indicates that the intermediate prototype is close to the middle between query and support. Finally, as $\\alpha$ increases to $0.7$, $D_{qi}$ is smaller than $D_{is}$ and the mIoU score decreased to 64.2. The difference between $D_{qi}$ and $D_{is}$ becomes a negative value (i.e. -1.275), which means the intermediate prototype is much closer to query rather than that to support. In conclusion, with the increment of $\\alpha$, the difference between $D_{qi}$ and $D_{is}$ becomes smaller and smaller, which means that the intermediate prototype would gradually close to query and move away from support. Simultaneously, this movement causes the intermediate prototype fails to obtain more deterministic category information from the support, and results in performance degradation. 
Thus, we argue that $\\alpha$ influences whether the intermediate prototype is biased toward the query or the support, and hence the performance of the model.",
" I thank the authors for their response. \nThe provided analysis of the intermediate prototypes largely resolves my concern about Fig. 1 and Fig. 5. According to the authors' response, the intermediate prototypes work better than the support prototypes because $D_{qi}$ is smaller than $D_{qs}$, although most intermediate prototypes are still close to the support prototypes. This makes sense to me and would greatly avoid misleading if the author included these explanations in the paper. In addition, I have some questions about this point after reading the rebuttal. \n* First, the authors define the \"diverse samples\" as those whose $D_{qs}$ is 1.5 times larger than the mean value in their response. Why chose the value \"1.5\" and how many samples are categorized as \"diverse samples\" based on the given criteria? The authors find that the intermediate prototypes are biased towards the query for some categories with \"diverse samples\". Would the conclusion differ when varying this threshold? \n* Besides, according to the response, it seems that $\\alpha$ in Eq. (10) is essential for learning the intermediate prototypes. Have the authors tried to vary the value? For instance, set $\\alpha$ 0.5 or 0.7 to see how $D_{qi}$ and $D_{si}$ change?\n\nBTW, just a minor reminder, for Q1, according to Fig. 2 and the code in supplementary, the baseline in Table 5 seems CyCTR without CyCTransformer rather than PFENet without Feature Enrichment Module since PA is applied to both support and query. Correct me if I am wrong. \n",
" Thank you for your constructive review. Here are our responses.\n\n**Q1. The explanation for Tab.4 and Tab.5**\n\nFor Table 4, all the ablation studies are conducted with both DSL and QA. In our model, DSL is designed to facilitate the learning of the mined prototype by making sure it can obtain good segmentation performance on both query and support. QA is used to update the query feature. Thus, in Tab.4, for a fair comparison, we use both DSL and QA for all experiments to guide the learning of the prototype and the query feature.\n\nFor Table 5, we clarify that the baseline is not the PFENet. As stated in Section 6, our model uses the prior mask and the support prototype to guide the coarse localization. This is the same with PFENet and shown as prototype activation (PA) in Figure 2. However, we did not use PFENet's Feature Enrichment Module in our model. In Line 290, we have stated that we remove IPM, DSL and QA from our model as the baseline. Hence, our baseline only has the PA part and PFENet additionally has the Feature Enrichment Module. We will make this clear in the final version.\n\n**Q2. Bias towards the query**\n\nWe argue that our mined intermediate prototype is not biased towards the query. On the contrary, due to the optimization of Eq.(10), it is more biased towards the support since we set $\\alpha=0.3$ by considering that the support information is more reliable. Furthermore, to provide quantitative proof, we adopt the Euclidean distance as a metric to measure the distances $D_{qs}$ between the query prototype and the support prototype, $D_{qi}$ between the query prototype and the intermediate prototype, and $D_{is}$ between the intermediate prototype and the support. The average distances of all the categories on the PASCAL dataset are shown below.\n\n| Distance| mean |\n|:------------:|:--------:|\n| $D_{qs}$ | 7.928 |\n| $D_{qi}$ | 6.968 |\n| $D_{is}$ | 4.333 |\n\nFrom the table, we can clearly see that $D_{qi}$ is smaller than $D_{qs}$ on the whole dataset, which means that our method effectively reduces the distance between the mined prototype and the query. We can also see that $D_{qi}$ is larger than $D_{is}$, which means that our mined intermediate prototype is still biased to the support. \n\nIn our paper, Fig.1 and Fig.4 are mainly used to highlight the good performance of our proposed method when facing diverse support that is very dissimilar with the query. To objectively prove this, we measure the corresponding distance on only diverse samples, i.e., $D_{qs}^{div}$, $D_{qi}^{div}$, $D_{is}^{div}$, and report the results below. The diverse samples are defined as those whose $D_{qs}$ is 1.5 times larger than the mean value. 
The average distances of each category (c1 to c20) and the mean over the whole dataset are shown in the table below.\n\n| Category | c1 | c2 | c3 | c4 | c5 | c6 | c7 | c8 | c9 | c10 | c11 | c12 | c13 | c14 | c15 | c16 | c17 | c18 | c19 | c20 | mean |\n|:------------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:--------:|\n| $D_{qs}^{div}$ | 14.789 | 13.807 | 11.443 | 15.459 | 13.936 | 11.745 | 13.948 | 13.062 | 15.697 | 11.445 | 13.702 | 10.702 | 10.562 | 12.125 | 14.312 | 21.232 | 15.306 | 19.513 | 17.434 | 16.183 | 14.320 |\n| $D_{qi}^{div}$ | 12.008 | 12.215 | 9.651 | 11.371 | 12.198 | 8.277 | 10.149 | **8.369** | 10.417 | 9.915 | 9.751 | 7.956 | 7.419 | 9.762 | **8.642** | **12.606** | **10.062** | **12.683** | 12.244 | **10.925** | 10.331 |\n| $D_{is}^{div}$ | 6.356 | 6.030 | 5.606 | 10.707 | 6.380 | 7.864 | 8.497 | 9.809 | 10.414 | 5.032 | 7.535 | 6.756 | 6.892 | 6.176 | 9.759 | 14.823 | 11.359 | 12.805 | 11.521 | 11.077 | 8.770 |\n\nWe can observe that $D_{qi}^{div}$ is significantly smaller than $D_{qs}^{div}$, which demonstrates that our method achieves remarkable performance on diverse samples. In most categories, $D_{qi}^{div}$ is still larger than $D_{is}^{div}$. However, in six categories (in bold), we found that $D_{qi}^{div}$ is even smaller than $D_{is}^{div}$, which means the intermediate prototype is biased towards the query. \n\nIn conclusion, the intermediate prototype is generally more biased towards the support. However, for diverse samples, it strives more to approach the query. We are sorry for this misunderstanding and will clarify it. \n\n**Q3. Extension to the K-shot setting**\n\nFor the K-shot setting, we use the mean operation on the $\\mathbf{MaskAttn}$ results from the K support images. Eq.(6) will be modified as:\n\\begin{equation*}\n \\mathbf{IPM}(\\mathbf{G},\\mathbf{F^s},\\mathbf{F^q},\\mathbf{M^s},\\mathbf{P^q}) = \\mathbf{MLP}(\\frac{1}{K} \\sum_{j = 1}^{K} \\mathbf{MaskAttn}(\\mathbf{G},\\mathbf{F^s_j},\\mathbf{M^s_j}) + \\mathbf{MaskAttn}(\\mathbf{G},\\mathbf{F^q},\\mathbf{P^q})+\\mathbf{G}).\n\\end{equation*}\nWe will make this clear.\n",
" Thank you so much for acknowledging the strength of our method. We have carefully considered your constructive and insightful comments and here are our responses to your concerns.\n\n**Q1. Comparison of the number of parameters**\n\n**First**, we follow your advice and compare the number of parameters between our method and other state-of-the-art models. When using ResNet50 as the backbone for all models, the results on the PASCAL dataset are shown in the below table. As for our IPMT model, we report all the results using different IPMT layers, i.e., $L=1$ to $L=5$.\n\n| Method | RPMMS | PFENet | RePRI | HSNet | CWT | CyCTR | NERTNet | DCP | IPMT(L=1) | IPMT(L=2) | IPMT(L=3) | IPMT(L=4) | IPMT(L=5) |\n|---------|-------|--------|-------|-------|-------|-------|---------|-------|-----------|-----------|-----------|-----------|-----------|\n| mIoU | 56.3 | 60.8 | 59.7 | 64.0 | 56.4 | 64.0 | 64.2 | 62.8 | 64.1 | 64.7 | 65.2 | 65.6 | 66.8 |\n| Params. | 19.6M | 34.3M | - | 45.2M | 47.3M | 31.9M | 44.5M | 34.8M | 39.9M | 46.2M | 52.4M | 58.6M | 64.8M |\n\nFrom the table, we can see that our model has more parameters than previous methods when using more than three layers. However, we argue that our model can also achieve better performance than the previous SOTA method NERTNet when using the similar number of parameters, i.e., when using $L=2$, our IPMT has 46.2M parameters and achieves 64.7 mIoU, while NERTNet has 44.5M parameters and achieves 64.2 mIoU.\n\n**Second**, we want to emphasize that our work is the first to focus on the intra-diversity between query and support and propose an intermediate prototype to mitigate this issue. We simply adopted a straightforward iterative process to boost the quality of the intermediate prototype and relieve the category information gap between support and query. As a good starting point, our work can potentially motivate more future works to focus on the intra-diversity problem and how to mine the intermediate information more effectively and efficiently requires further exploration.\n\n**Third**, to further address your concern, we tried to decrease the number of parameters of our IPMT by sharing weights among different layers. Surprisingly, the mIoU score even increases to 68.7 with 5 IPTM layers on the PASCAL dataset under the 1-shot setting, while the number of parameters keeps unchanged (i.e., 39.9M) when increasing the iteration layers. This result shows the great potential of our method and we will continue to explore this in the journal extension and future works.\n\n**Q2. A metric for diversity**\n\nFor evaluating the diversity objectively, we adopt the Euclidean distance as a metric to measure the distance between the query prototype and the support prototype ($D_{qs}$) in each episode. To further demonstrate the effectiveness of our method, we also measure the distance between the query prototype and the intermediate prototype ($D_{qi}$) and the distance between the intermediate prototype and the support prototype ($D_{is}$). 
The average distances of each category (c1 to c20) and the mean of all the categories on the PASCAL dataset are shown in the table below.\n\n| **Category** | **c1** | **c2** | **c3** | **c4** | **c5** | **c6** | **c7** | **c8** | **c9** | **c10** | **c11** | **c12** | **c13** | **c14** | **c15** | **c16** | **c17** | **c18** | **c19** | **c20** | **mean** |\n|:------------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:--------:|\n| $D_{qs}$ | 6.967 | 7.326 | 6.980 | 8.402 | 8.445 | 6.903 | 8.599 | 7.391 | 8.937 | 7.091 | 7.842 | 5.983 | 5.668 | 6.828 | 8.052 | 11.011 | 8.283 | 9.705 | 9.409 | 8.740 | 7.928 |\n| $D_{qi}$ | 6.153 | 6.602 | 6.173 | 7.439 | 8.158 | 5.793 | 7.372 | 6.321 | 8.062 | 6.328 | 6.927 | 5.080 | 4.868 | 6.027 | 6.804 | 9.855 | 6.775 | 9.072 | 7.944 | 7.601 | 6.968 |\n| $D_{is}$ | 3.005 | 3.410 | 3.117 | 4.328 | 4.220 | 3.782 | 4.453 | 3.636 | 5.100 | 3.001 | 4.200 | 2.958 | 2.652 | 3.059 | 4.733 | 7.581 | 5.023 | 7.228 | 5.263 | 5.915 | 4.333 |\n\nFrom the table, we can clearly see that $D_{qi}$ is smaller than $D_{qs}$ on all categories, which means that our mined intermediate prototype is more similar to the query than the support is. This also demonstrates that our method and the proposed intermediate prototypes can effectively mitigate the intra-class diversity problem.",
" Thank you so much for acknowledging the strength of our method. We have carefully considered your constructive and insightful comments and here are the answers to your concerns.\n\n**Q1. Similar to CANet ?**\n\n**Difference of the iterative refining:**\nThe idea of our iterative learning process is to refine the intermediate prototype and the query feature map in each IPMT layer by combining both the deterministic category information from the support and the adaptive category knowledge from the query. However, the idea of CANet is to refine the predicted segmentation mask only using the query feature. Hence, our idea is totally different from CANet and other previous methods.\n\n**Difference between QA and IOM:**\nOur QA module concatenates the expanded intermediate prototype with the query feature map to activate the target regions in it, which is used to refine the query feature. While in IOM, the query feature is concatenated with the previous predicted mask to generate more accurate mask prediction. From the perspective of the used information source, our QA module uses the category information from both query and support in the intermediate prototype, while only query information is utilized in IOM. Hence, they are totally different.\n\n**Q2. Sharing weights across support and query attention modules**\n\nTo address your concern, we made some attempts to share weights across support and query attention modules. Surprisingly, the model achieves better performance with 67.3 mIoU score on the PASCAL dataset under the 1-shot setting. Thanks for your advice and we will continue to explore this in the journal extension and future works. ",
" The authors proposed an intermediate prototype mining transformer method for few-shot semantic segmentation. According to experimental results on two widely used datasets, the proposed method achieves promising performance compared with other state-of-the-art deep learning models. Strengths:\n\n1. The paper is overall well organized. Authors systematically introduced the related works, limitations, and potential improvement solutions, which are very helpful to direct reviewers/authors to the specific topic of this paper.\n2. The proposed method is well described, especially the functions and advantages of each added module, the organized codes further make this study easy to follow.\n3. The visualization in the manuscript and supp files are well complementary to the quantitative metrics listed.\n\nWeakness:\n\nEven though it's a very good point that the proposed method performs consistently better than the compared deep learning models on two datasets, I wonder if the added performance is based on larger-scale parameters of the proposed model than the existing ones? So, it would be better to include a discussion/comparison about the number of parameters of both the proposed and compared methods. A basic standpoint of the proposed method is intra-class diversity counts for the few-shot segmentation task. Even though sine scatter plot visualizes the diversity in a subjective way, it's more reasonable to introduce some objective metrics or evaluations for diversity. The authors have adequately addressed the limitations and potential negative societal impact of their work.",
" This paper deals with the few-shot semantic segmentation problem. Instead of aggregating support category information as prototype, this paper proposes using an intermediate prototype to encode the semantic information from both support and query. By this way, this paper aims to reduce the intra-class discrepancy between support and query. The query features and the learned intermediate prototype are concatenated, followed by 1x1 and 3x3 convolutions, to generate the query mask prediction. The intermediate prototype and the query mask prediction are iteratively refined and improved. Experiments show the effectiveness of proposed method. Pros:\n- The idea is reasonable.\n- The experiment results are good.\n- The paper is easy to follow. \n\nCons:\n- The idea to iteratively refine the query prediction is proposed by previous works, e.g. CANet [1]. And the QA module is similar to some design in IOM of CANet. Thus, those two parts, in my opinion, have quite limited novel contributions. \n\n- The authors adopt separate weights for support and query attention module in IPMT. I would like to see the result when the weights are shared across support and query attention modules. \n\n[1] CANet: Class-Agnostic Segmentation Networks with Iterative Refinement and Attentive Few-Shot Learning, CVPR 2019. see the weakness part. The authors didn't discuss the limitations and potential negative societal impact of their work.",
" This paper proposes using the intermediate prototype iteratively generated by Transformer (Intermediate Prototype Mining Transformer, IPMT) to guide the query segmentation. Specifically, it considers to mine category information from both support and query so that the query feature could be better activated. Experiments are performed on the commonly used Pascal-$5^i$ and COCO-$20^i$ datasets. **Strengths**\n1. The motivation of exploring intra-class diversity between support and query makes sense to me. \n2. The generating process of the intermediate prototype is well explained, and the iterative design of IPMT is reasonable.\n3. According to the experiment part, the proposed method obtains significant improvement compared with the previous method. \n\n**Weaknesses** \n1. The baseline in Tab.4 and Tab.5 is not clear. For Tab.4, are DSL and QA applied? For the baseline of Tab.5, it refers to PFENet [28] in line.298, which seems not true according to the description and the result.\n\n2. Since the proposed IPMT processes support and query symmetrically, why would the produced intermediate prototype bias towards the query (as Fig.1 and Fig.4 shown)? \n See Weaknesses for most questions. \n\nPlease address these questions in the rebuttal, especially point 2. I feel confused about why the prototypes would bias toward the query in the symmetrical design. Is it because the prototypes absorb the context information of the query image during the coarse-to-fine procedure?\n\nBesides, some minor points that could be improved:\n1. The paper only explains the setting of the 1-shot but does not explain how to extend to K-shot settings.\n2. SDL->DSL at the line.296\n\n---\nEDIT \nThe rebuttal resolves my concerns and also provides some interesting analyses. I would like to slightly increase my score, from 6 to 7. Limitations were addressed in the Supplementary."
] | [
-1,
-1,
-1,
-1,
-1,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"xQ27gqnj75P",
"LoeViNXYP06",
"oeJxNPIPYq",
"NjTjDg5bXza",
"AhIwdHidZC3",
"nips_2022_NyAJzgHLAr",
"nips_2022_NyAJzgHLAr",
"nips_2022_NyAJzgHLAr"
] |
nips_2022_0-uBrFiOVf | DTG-SSOD: Dense Teacher Guidance for Semi-Supervised Object Detection | The Mean-Teacher (MT) scheme is widely adopted in semi-supervised object detection (SSOD). In MT, sparse pseudo labels, offered by the final predictions of the teacher (e.g., after Non Maximum Suppression (NMS) post-processing), are adopted for the dense supervision for the student via hand-crafted label assignment. However, the "sparse-to-dense" paradigm complicates the pipeline of SSOD, and simultaneously neglects the powerful direct, dense teacher supervision. In this paper, we attempt to directly leverage the dense guidance of teacher to supervise student training, i.e., the "dense-to-dense" paradigm. Specifically, we propose the Inverse NMS Clustering (INC) and Rank Matching (RM) to instantiate the dense supervision, without the widely used, conventional sparse pseudo labels. INC leads the student to group candidate boxes into clusters in NMS as the teacher does, which is implemented by learning grouping information revealed in NMS procedure of the teacher. After obtaining the same grouping scheme as the teacher via INC, the student further imitates the rank distribution of the teacher over clustered candidates through Rank Matching. With the proposed INC and RM, we integrate Dense Teacher Guidance into Semi-Supervised Object Detection (termed "DTG-SSOD"), successfully abandoning sparse pseudo labels and enabling more informative learning on unlabeled data. On COCO benchmark, our DTG-SSOD achieves state-of-the-art performance under various labelling ratios. For example, under 10% labelling ratio, DTG-SSOD improves the supervised baseline from 26.9 to 35.9 mAP, outperforming the previous best method Soft Teacher by 1.9 points. | Accept | 
This paper proposes a dense-to-dense semi-supervised object detection method, where the teacher's NMS is used to guide the clustering and ranking of bounding box candidates from the student. This is motivated by potential noise resulting from sparse-to-dense pseudo-label supervision in existing methods. Results are shown on standard semi-supervised object detection benchmarks, with improvements over the current state of the art.
The reviewers all thought that the paper had an interesting idea, strong results, and thorough experiments, ablations, and analysis. Some concerns included generalization to other architectures (e.g., DETR or single-stage CNN), comparison to feature distillation, and poor communication, especially through the figures. The rebuttal provided answers to these, including new experiments showing generalization to a single-stage method, and all reviewers have recommended acceptance (and the reviewer with borderline accept mentioned it is a good paper). As a result, I recommend accepting this paper as it provides an interesting new contribution to the common mean teacher paradigm. I highly encourage the authors to add new elements that came out in the rebuttal, especially generalization to single-stage methods and failure cases. | train | [
"wZ-yVDZfSfk",
"AAo1PgoWLei",
"uSQ8DlWWC9",
"0Rg928RWGbG",
"HmlaW42_ci",
"dSSYeuMIiXY",
"65-L5TJ-EqZ",
"qXjso-9vRnh",
"wFv7yx_u1Qa",
"V3LbA5wBpJ9",
"NM5TGhtbkfF"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your rebuttal. The author provides detailed experiments to prove this. It has addressed my concerns in the rebuttal. So I think it is a good paper.",
" Thanks for the rebuttal. The authors have properly addressed the reviewer's questions in the rebuttal. Thus, the reviewer decided to keep the original rating and suggest accepting this paper.",
" Thanks for your constructive comments. We respond to them below.\n\n**Q1. Which is better between your method and the feature distillation (distillation on the final layer)?**\n\n**A1.** We explain the differences between our method and vanilla distillation methods as follows:\n\n1. Vanilla distillation methods directly take dense predictions of the teacher as knowledge to distill without any adaptation, in contrast, our method converts the teacher's dense predictions into the candidate grouping information and candidate rank, which serve as more informative and instructive knowledge for student training. **Some important properties of object detection are considered in our method**. For example, considering that accurate box rankings benefit the NMS processing [3, 28], we propose Rank Matching to regularize consistency in candidate rank between the teacher and student. On the contrary, vanilla distillation methods are originally designed for the image classification task, and directly transferring them to object detection ignores the fact that object detection has a more complicated mechanism than classification. Lacking essential adaptations, the vanilla distillation methods may only obtain sub-optimal distillation performance in object detection. \n\n2. **Vanilla distillation methods hardly adapt to semi-supervised settings well.** In the Mean-Teacher scheme, distinct data augmentations are applied to the teacher and student, which increases feature discrepancies between the teacher and student, as a result, feature distillation (including distillation on FPN features and R-CNN features) is hard to optimize. On the other hand, distillation on the final layer takes the predictions of the teacher as soft labels. However, previous works (e.g., FixMatch [25]) validated that the hard pseudo labels with low entropy can perform favorably against the soft labels on unlabeled data. Neither feature distillation nor final layer distillation can perform well in semi-supervised settings. In contrast, our method is tailored for semi-supervised settings.\n\nWe conduct expriments to verify our analysis. Experimental results are listed as follows: \n\n| methods | mAP | AP$_{50}$ | AP$_{75}$ |\n| -------------------------- | -------- | --------- | --------- |\n| supervised baseline | 26.8 | 44.9 | 28.4 |\n| FPN feature distillation | 28.7 | 47.9 | 30.1 |\n| R-CNN feature distillation | 29.2 | 48.2 | 30.6 |\n| final layer distillation | 33.3 | 52.9 | 35.8 |\n| DTG-SSOD (ours) | **36.3** | **56.4** | **38.8** |\n\nExperiments are conducted under the 10% labeling ratio. As the results show, our method surpasses other distillation methods by a large margin, at least **3.0 mAP**. These results support our analyses.\n\n**Q2. The supervision in Inverse NMS Clustering is a \"sparse-to-dense\" paradigm, which is supervised by a reserved box. The author claims that the weakness of the \"sparse-to-dense\" at the beginning.**\n\n**A2.** In the Inverse NMS Clustering (INC), **not only reserved boxes but also dense group information** are employed to supervise the student training. INC is proposed to lead the student to group candidate boxes into clusters in NMS as the teacher does, by learning the **grouping information** revealed in the teacher's NMS procedure. Grouping information provides NMS status for each candidate box, indicating which cluster each candidate belongs to. Obviously, group information serves as dense supervision, enabling INC to be a \"dense-to-dense\" paradigm.\n\n",
" **Q8. It is non-trivial to generalize a method developed on a two-stage detector to a one-stage one.**\n\n**A8.** Our DTG-SSOD can perform well on most anchor-based detectors, regardless of one-stage or two-stage ones. We take Generalized Focal Loss (GFL) [28], a popular one-stage detector, as an example, to demonstrate the generalization of DTG-SSOD. The results are listed as follows: \n\n| detector | method | mAP | AP$_{50}$ | AP$_{75}$ |\n| -------- | ------------------------ | -------- | --------- | --------- |\n| GFL | supervised baseline | 28.1 | 43.9 | 29.4 |\n| GFL | Sparse-to-Dense baseline | 35.8 | 53.3 | 38.3 |\n| GFL | DTG-SSOD (ours) | **37.1** | **55.6** | **40.4** |\n\nExperiments are conducted under the 10% labeling ratio. On the GFL, our method surpasses the sparse-to-dense counterpart by +1.3 mAP. However, generalizing the proposed method to anchor-free detectors may be no-trivial, and we leave it for future work. ",
" Thanks for your constructive comments. We respond to them below.\n\n**Q1. Verifying whether (or to what degree) the student network actually mimics the teacher's NMS behavior.**\n\n**A1.** We define a metric, termed *Overlap Ratio (OR)*, to measure the similarity between NMS behavior of two models. Inspired by the definition of IoU, we formulate the OR as: \n\\begin{equation}\n OR = \\frac{|box_{s} \\cap box_{t}|}{|box_{s} \\cup box_{t}|},\n\\end{equation}\nwhere $box_{s}$ and $box_{t}$ refer to reserved boxes after the NMS for the student and teacher; $|box_{s} \\cap box_{t}|$ denotes the number of overlapped boxes between $box_{s}$ and $box_{t}$. Only when $box_{s}$ and $box_{t}$ **originate from the same proposal**, they are considered as overlapped. Moreover, $|box_{s} \\cup box_{t}| = |box_{s}| + |box_{t}| - |box_{s} \\cap box_{t}|$, where $|box_{s}|$ denotes the number of $box_{s}$.\nHigher OR values indicate more similar NMS behavior between the teacher and student. Next, we calculate the OR for the DTG-SSOD and sparse-to-dense baseline. Two checkpoints with similar performance are adopted from two paradigms respectively for analysis, and analyses are conducted on coco val2017 set. The OR of the sparse-to-dense baseline is **33.4%**, and our DTG-SSOD surpasses it by **4.1 points**, reaching **37.5%**. The significant improvements in OR validate that the discrepancy between NMS behavior of the teacher and student is narrowed through the proposed INC and RM. \n\n**Q2. In Eq.4, is $p_{i}^{t}$ probability or logit? Also, is $p_{i}^{t}$ the output from an additional fc/mlp head or just the output from the existing objectiveness/classification head of the detector?** \n\n**A2.** In Eq.4, $p_{i}^{t}$ is the logit. We correct the descriptions about Eq.4 in the revision. $p_{i}^{t}$ is just the output from the existing classification head of the detector. \n\n**Q3.** **Is RM applied to the R-CNN part or the RPN part as well?**\n\n**A3.** Rank Matching is also applied to the RPN part. From Tab.2(c), applying INC and RM to RPN brings a gain of + 0.8 mAP against the sparse-to-dense RPN.\n\n**Q4. What's the setting in Tab.2? In Tab.2(a), it makes more sense to compare with sparse-to-dense rather than supervised baseline. In Tab.2(e), it is better presented as a figure containing the val curve of both conditions.**\n\n**A4.** As stated in Line 280-281, unless specified, all ablation studies in Tab.2 adopt a single data fold of the **10% labeling ratio** as the training data. In Tab.2(d) and Line 309-316, we discuss the generalization of our method on various labeling ratios. The experimental results indicate our method consistently performs favorably against the traditional sparse pseudo labels under various settings. \n\nThank you for your kind and professional suggestions on Tab.2(a) and Tab.2(e). A comparison with the sparse-to-dense is already listed in Tab.2(c), thus we report the comparison with the supervised baseline in Tab.2(a). We will substitute the val curve for Tab.2(c), once we obtain the complete val curve of Soft Teacher. \n\n**Q5. It would be helpful to also include the results of the sparse-to-dense counterpart in Tab.1.**\n\n**A5.** Good suggestion! Most of the results in Tab.1 are averaged on 5 different data folds, which is time-consuming. We hardly finish all experiments on the sparse-to-dense counterpart before the rebuttal deadline. However, we will add it in the final version. \n\n**Q6. 
**Q2. In Eq.4, is $p_{i}^{t}$ probability or logit? Also, is $p_{i}^{t}$ the output from an additional fc/mlp head or just the output from the existing objectiveness/classification head of the detector?** \n\n**A2.** In Eq.4, $p_{i}^{t}$ is the logit. We have corrected the descriptions of Eq.4 in the revision. $p_{i}^{t}$ is just the output from the existing classification head of the detector. \n\n**Q3.** **Is RM applied to the R-CNN part or the RPN part as well?**\n\n**A3.** Rank Matching is also applied to the RPN part. From Tab.2(c), applying INC and RM to the RPN brings a gain of +0.8 mAP against the sparse-to-dense RPN.\n\n**Q4. What's the setting in Tab.2? In Tab.2(a), it makes more sense to compare with sparse-to-dense rather than supervised baseline. In Tab.2(e), it is better presented as a figure containing the val curve of both conditions.**\n\n**A4.** As stated in Lines 280-281, unless specified, all ablation studies in Tab.2 adopt a single data fold of the **10% labeling ratio** as the training data. In Tab.2(d) and Lines 309-316, we discuss the generalization of our method to various labeling ratios. The experimental results indicate that our method consistently performs favorably against the traditional sparse pseudo labels under various settings. \n\nThank you for your kind and professional suggestions on Tab.2(a) and Tab.2(e). A comparison with the sparse-to-dense counterpart is already listed in Tab.2(c); thus, we report the comparison with the supervised baseline in Tab.2(a). We will substitute a val-curve figure for Tab.2(e) once we obtain the complete val curve of Soft Teacher. \n\n**Q5. It would be helpful to also include the results of the sparse-to-dense counterpart in Tab.1.**\n\n**A5.** Good suggestion! Most of the results in Tab.1 are averaged over 5 different data folds, which is time-consuming. We can hardly finish all the experiments on the sparse-to-dense counterpart before the rebuttal deadline. However, we will add them in the final version. \n\n**Q6. Are Fig.3 cherry-picked results or randomly selected results?**\n\n**A6.** Qualitative results in Fig.3 are randomly selected. More results are attached in the *supplementary material*. We looked through many visualized examples, and only a few valuable failure cases were observed. In most of the examples, our DTG-SSOD consistently offers more precise supervision for the student than the sparse-to-dense baseline. \n\n**Q7. In Fig.2(b), Inverse NMS Clustering, what does the number of star and triangle shapes mean? Also, why do the student samples not contain the reserved box?**\n\n**A7.** In Fig.2(b), there is no special meaning to the number of stars and triangles. We should have kept their number the same to avoid ambiguity. Moreover, the reserved box should be contained in the student samples, which is wrongly illustrated in the original figure. For a better presentation, we have updated Figure 2 in the revision. Please refer to it for a better understanding.",
" Thanks for your constructive comments. We respond to them below.\n\n**Q1. Some parts of the paper writing can be improved. Figure 2 is not very clear.**\n\n**A1.** In the revision, we have refined all three figures and polished the Analyses part (Sec. 4.5) for a better presentation. \n\n**In the new Figure 2**, we make the following modifications:\n\n1. We elaborately describe the NMS procedure of the teacher, including candidate grouping, ranking, and suppressing. It is underlined that dense teacher guidance is extracted from the teacher's NMS process.\n2. We split the proposed dense-to-dense paradigm into three steps. At first, one-to-one correspondence between samples of teacher and student is identified, getting ready for the following behavior imitation. Then, Inverse NMS Clustering is performed. The grouping information and reserved boxes offered by the teacher's NMS procedure are converted into dense training labels for student samples. Finally, rank distributions over clustered candidates are enforced to be consistent between the teacher and student.\n\n**Q2. How could the proposed approach be applied to other object detectors?**\n\n**A2.** The principle of our DTG-SSOD is to employ powerful direct, dense teacher supervision. For CNN-based detectors (e.g., Faster R-CNN), we instantiate the dense supervision with NMS behavior of dense predicted boxes, achieving favorable performance. However, for query-based detectors (e.g., DETR) without dense predictions and NMS, we can instantiate dense supervision with **bipartite graph matching of the teacher**. Considering that object queries are relatively dense (e.g., 900 queries are used in DINO [R-1]), the bipartite matching results for object queries can be regarded as dense information. Moreover, recent works [R-1, R-2] validate that the slow convergence of DETR results from the instability of bipartite graph matching. Thanks to stronger performance, the teacher model is supposed to have more stable bipartite matching than the student. Based on these insights, predicted bipartite matching of the teacher is able to serve as dense knowledge to provide consistent optimization goals for the student and stabilize the student training. \n\nWe adopt the two-stage detector (i.e., Faster R-CNN) as the default detection framework in the paper. Here, we also apply our DTG-SSOD to a popular **one-stage detector**, Generalized Focal Loss (GFL) [28], to demonstrate the generalization. The results are listed as follows:\n\n| detector | method | mAP | AP$_{50}$ | AP$_{75}$ |\n| -------- | ------------------------ | -------- | --------- | --------- |\n| GFL | supervised baseline | 28.1 | 43.9 | 29.4 |\n| GFL | Sparse-to-Dense baseline | 35.8 | 53.3 | 38.3 |\n| GFL | DTG-SSOD (ours) | **37.1** | **55.6** | **40.4** |\n\nExperiments are conducted under the 10% labeling ratio. From the table, our method surpasses the sparse-to-dense baseline by **+1.3 mAP**, validating that our method can perform well on both two-stage and one-stage detectors. \n\n#### **References**\n\n[R-1] Zhang, Hao, et al. \"Dino: Detr with improved denoising anchor boxes for end-to-end object detection.\" arXiv preprint arXiv:2203.03605 (2022).\n\n[R-2] Li, Feng, et al. \"Dn-detr: Accelerate detr training by introducing query denoising.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.\n\n",
" Thanks for your constructive comments. We respond to them below.\n\n**Q1. Figure 1 and 2 are not illustrative.**\n\n**A1.** Please refer to the revision of the paper, where we have updated Figure 1 and Figure 2. \n\n**In the new Figure 1**, we highlight the conventional \"sparse-to-dense\" paradigm involves many handcrafted components (e.g., NMS, score thresholding, label assignment), which inevitably introduce accumulated noise to supervision signals for the student. In contrast, our \"dense-to-dense\" paradigm abandons intermediate operations and enables more informative and precise supervision for the student. \n\n**In the new Figure 2**, we make the following modifications: \n\n1. We elaborately describe the NMS procedure of the teacher, including candidate grouping, ranking, and suppressing. It is underlined that dense teacher guidance (i.e., grouping information and candidate rank) is extracted from the teacher's NMS process. \n\n2. We split the proposed dense-to-dense paradigm into three steps. At first, one-to-one correspondence between samples of teacher and student is identified, getting ready for the following behavior imitation. Then, Inverse NMS Clustering is performed. The grouping information and reserved boxes offered by the teacher's NMS procedure are converted into dense training labels for student samples. Finally, through Rank Matching, rank distributions over clustered candidates are enforced to be consistent between the teacher and student.\n\n**Q2. Table 1 is expected to show more results under different labeling ratios, e.g., 20%, 30%, and 50%.**\n\n**A2.** In the **Partially Labeled Data setting**, sampling 1%, 2%, 5% and 10% images as labeled data is a common practice adopted by most previous works [8, 9, 10, 11, 12, 14, 27, 33, 34]. These works didn't report their performance under other labeling ratios (e.g., 20%, 30%, and 50%). Therefore, to make a comparison under 20%, 30%, and 50% labeling ratios, we implement the previous best method (Soft Teacher) using its source code. We list the table here to show the comparison: \n\n| methods | 20% | 30% | 50% |\n| :----------------- | :------ | :------ | :------ |\n| supervised baseline | 32.3 | 34.3 | 36.1 |\n| Soft Teacher | 35.2 | 37.3 | 38.1 |\n| DTG-SSOD (ours) | **37.1** | **38.7** | **39.4** |\n\nUnder all three labeling ratios, our DTG-SSOD consistently surpasses the Soft Teacher by a large margin (i.e., at least **+1.3 mAP**), which validates that our method can generalize well on various labeling ratios. \n\n**Q3. Figure 3 is not illustrative.** \n\n**A3.** In the revision, we have refined Figure 3 and polished the corresponding descriptions (section 4.5). We split the original Figure 3 into two independent figures (i.e., Figure 3 and Figure 4 in the revision) for a better presentation. **The new Figure 3** aims to explain an intrinsic problem in the sparse-to-dense paradigm. Specifically, some pseudo boxes are poorly localized, which will mislead the standard IoU-based label assignment and cause noisy training labels for the student. In contrast, without the dependency on pseudo boxes, our method can effectively alleviate this problem. We show two examples **in the new Figure 3**, where the *identical* student proposals will obtain *different* training labels in the two paradigms. Compared with the sparse-to-dense paradigm, our dense-to-dense paradigm obviously offers more precise training labels for the student. 
In the sub-fig (1), the sparse-to-dense paradigm treats the **poor** proposal (in white) as a positive sample. The proposal has a relatively high IoU value (i.e., 0.6) with the coarse pseudo box (in red), reaching the requirement for positive samples in the standard IoU-based label assignment. In contrast, our Inverse NMS Clustering takes teacher predictions into consideration and succeeds in suppressing this misleading positive. Concretely, with the low confidence (i.e., 0.3) predicted by the teacher, this proposal will be a clear background sample in the teacher's NMS. Another example is shown in the sub-fig (2). Due to the poor localization of the pseudo box, a relatively precise proposal is wrongly assigned as a negative sample by the sparse-to-dense paradigm. However, our method can avoid this problem by taking the NMS behavior of proposals into consideration.\n\nOn the other hand, we also exhibit an example **in the new Figure 4** to demonstrate that the teacher is better at modeling candidate rank than the student. Specifically, between the two candidate boxes shown in the figure, the teacher ranks the one with more precise localization first, while the student fails to do so. The candidate rank predicted by the teacher can serve as beneficial dense guidance, which is missing in the conventional sparse-to-dense paradigm. ",
" This paper proposes a \"dense-to-dense\" paradigm that utilizes the dense guidance of teacher to supervise the students. Specifically, Dense Teacher Guidance (DTG)'s dense supervision is achieved by Inverse NMS Clustering (INC) and Rank Matching (RM), which regularizes the consistency on NMS between the teacher and student. INC leads the student to group the candidate boxes into clusters in NMS as the teacher does, so that the student obtains the same grouping scheme of NMS with the teacher. Rank Matching is further introduced to align the score rank over the clustered candidates between teacher and student. Strength:\n1. The motivation of \"dense-to-dense\" paradigm is strong and clear. The limitation of \"sparse-to-dense\" is well presented.\n2. The proposed Inverse NMS Clustering (INC) and Rank Matching are novel. The rank of the samples within each cluster learned by the teacher serves as informative dense supervision, which enables the student to reserve the same candidates as the teacher during NMS.\n3. Ablation studies show the effectiveness of each proposed components or strategies.\nWeakness:\n1. Figure1 and 2 are not illustrative. These two figures fail to give an intuitively visual demonstration on how DTG-SSOD works.\n2. Table 1 is expected to show more results under different labeling ratios, e.g., 20%, 30%, 50%.\n3. Figure 3 is not illustrative. I don't understand it. Could you better illustrate Figure 3? It is poorly presented. Despite fair technical merit and solid experimental validation, the visual illustration in this paper is pretty poor.",
" This paper focuses on the pseudo-labeling strategy in semi-supervised object detection. Unlike previous approaches which generate sparse pseudo labels from teacher detector for each image and match these sparse pseudo labels to dense proposal and object detection predictions from student detector, this paper proposes a \"dense-to-dense\" paradigm. Inverse NMS Clustering and Rank Matching components are proposed to instantiate the dense-to-dense paradigm. Experiments on COCO show that the proposed approach obtains the state-of-the-art semi-supervised object detection results. ### Strengths\n- The proposed approach is interesting.\n- Most parts of the paper are well written and easy to understand.\n- Very promising results are obtained by the proposed approach and detailed ablation experiments are conducted.\n\n### Weaknesses\n- Some parts of the paper writing can be improved. The Figure 2 is not very clear about how inverse NMS and Rank Matching are performed.\n- The proposed approach is mainly applied to Faster R-CNN based detector. It's unclear how it can be applied to recent end-to-end object detectors (e.g., DETR) which don't have dense RPN and proposal-wise predictions. How could the proposed approach be applied to other object detectors, especially the detectors without dense RPN/proposal-wise predictions and NMS (e.g., DETR)? It's unclear how it can be applied to recent end-to-end object detectors (e.g., DETR) which don't have dense RPN and proposal-wise predictions.",
" The authors propose a novel SSOD method that guides the student network to behave like the teacher network's NMS. The proposed method enforces a dense-to-dense training signal instead of the more common sparse-to-dense strategy in other SSOD methods. The proposed method is composed of two parts: (1) INC which leads the student to group candidate boxes into clusters in NMS as the teacher does, and (2) RM which imitates the rank distribution of the teacher over clustered candidates.\n\nThe author demonstrates the effectiveness of the proposed method compared to SoTA SSOD methods. They also conduct thorough ablations and analyses to help understand the contribution of each component and the advantage of the proposed method over the sparse-to-dense strategy. ### Strength\n\n1. Enforcing the student to mimic the teacher's NMS behavior is somewhat novel in SSOD.\n\n1. This work is well motivated for why using dense-to-dense is a preferred training strategy.\n\n1. The authors conduct thorough ablations (Sec.4.4) and analyses (Sec.4.5) for each component of the proposed method as well as why the proposed dense-to-dense method works better than the sparse-to-dense methods.\n\n1. As shown in Tab.2(b), It is surprising that including the box regression loss is beneficial for the final performance without any tailored method or tricks.\n\n### Weakness\n\n1. It would be good to verify whether (or to what degree) the student network actually mimics the teacher's NMS behavior.\n\n1. In Eq.4, is $p^t_i$ probability or logit? It seems weird to pass the probability through exponential and softmax. Also, is $p^t_i$ the output from an additional fc/mlp head or just the output from the existing objectiveness/classification head of the detector?\n\n1. Is RM applied to the R-CNN part or the RPN part as well?\n\n1. In Tab.2, what is the setting here? Eg. the amount of labeled data. Could the findings generalize to other settings (eg. different amounts of labeled data)? In Tab.2(a), it makes more sense to compare with sparse-to-dense rather than supervised baseline. In Tabl.2(e), it is better presented as a figure containing the val curve of both conditions.\n\n1. It would be helpful to also include the results of the sparse-to-dense counterpart in Tab.1.\n\n1. Are Fig.3 cherry-picked results or randomly selected results? It would be good to also include the failure modes and provide analyses and discussions.\n\n1. In Fig.1(b), Inverse NMS Clustering, what does that number of star and triangle shapes mean? Also, why do the student samples not contain the reserved box? Please see the questions in the weakness part of the \"Strengths And Weaknesses\" section. In lines 148-149, the authors claim, \"In theory, our SSOD method is independent of the detection framework and can be applicable to both one-stage and two-stage detectors.\" However, as Unbiased Teacher v2 [1] pointed out, it is non-trivial to generalize a method developed on a two-stage detector to a one-stage one. \n\n[1] Liu, Yen-Cheng, Chih-Yao Ma, and Zsolt Kira. \"Unbiased Teacher v2: Semi-Supervised Object Detection for Anchor-Free and Anchor-Based Detectors.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.",
" This paper introduces a new dense-to-dense” distillation to supervise the training of object detectors. The authors consider the advantages of the dense-to-dense paradigm and point out that the sparse-to-dense paradigm could accumulate noise in the hand-crafted NMS and label assignment process. They propose the Inverse NMS Clustering (INC) and Rank Matching (RM) to instantiate the dense supervision. Finally, the method achieve SOTA performance in both accuracy and efficiency. - The central insight of the work is simple and intuitive i.e. using dense supervision can be more informative regarding the information loss of NMS and score filtering. And the authors propose some solutions(INC and RM) to these problems.\n- The paper presents extensive and insightful ablations showing the importance of various choices (e.g. temperature T, choice of target boxes in INC etc.), and overall, all the choices made are empirically well-justified.\n 1. Which is better between your method and the feature distillation(distillation on the final layer)?\n2. The supervision in Inverse NMS Clustering is a \"sparse-to-dense\" paradigm, which is supervised by a reserved box. The author claims that the weakness of the \"sparse-to-dense\" at the beginning. 1. The questions above confuse me. Q1,Q2 . \n2. The method is not very interesting, but the paper presents extensive and insightful ablations study about their methods."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4
] | [
"uSQ8DlWWC9",
"V3LbA5wBpJ9",
"NM5TGhtbkfF",
"HmlaW42_ci",
"V3LbA5wBpJ9",
"wFv7yx_u1Qa",
"qXjso-9vRnh",
"nips_2022_0-uBrFiOVf",
"nips_2022_0-uBrFiOVf",
"nips_2022_0-uBrFiOVf",
"nips_2022_0-uBrFiOVf"
] |
nips_2022_LIKlL1Br9AT | Contact-aware Human Motion Forecasting | In this paper, we tackle the task of scene-aware 3D human motion forecasting, which consists of predicting future human poses given a 3D scene and a past human motion. A key challenge of this task is to ensure consistency between the human and the scene, accounting for human-scene interactions. Previous attempts to do so model such interactions only implicitly, and thus tend to produce artifacts such as "ghost motion" because of the lack of explicit constraints between the local poses and the global motion. Here, by contrast, we propose to explicitly model the human-scene contacts. To this end, we introduce distance-based contact maps that capture the contact relationships between every joint and every 3D scene point at each time instant. We then develop a two-stage pipeline that first predicts the future contact maps from the past ones and the scene point cloud, and then forecasts the future human poses by conditioning them on the predicted contact maps. During training, we explicitly encourage consistency between the global motion and the local poses via a prior defined using the contact maps and future poses. Our approach outperforms the state-of-the-art human motion forecasting and human synthesis methods on both synthetic and real datasets. Our code is available at https://github.com/wei-mao-2019/ContAwareMotionPred. | Accept | Three expert reviewers have recommended accepting the paper after the discussion period. Reviewers like the overall idea and framework. The AC agrees and recommends acceptance. Please carefully revise the paper based on the reviews. | train | [
"EcVXc_zUNCk",
"afWvOKiqRMk",
"0Tqd9BeHwIc",
"YQIgRUeX0vQ",
"YQxWRBYU5er",
"guh8pBoYUeG",
"htr-86LXGpE",
"SdPix4XY7Np",
"_507hdQuFVu",
"ms6dAWB_Cg4"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the suggestion.\n\nWe have added additional results to the supplemental material. As also mentioned in the Checklist, we will release our source code upon the acceptance of this paper which also includes the code to visualize our results. ",
" Thanks for the detailed response. \n\nMost of my concerns are addressed; the only remaining issue is the lack of qualitative results. I have checked the original included supplementary video and it only includes a handful of sequences (which overlaps with the plots included in the main paper). Some failure cases would also be useful to include in the video to better understand the results. ",
" Hi, \n\nThanks for the clear explanation, it has addressed my questions.",
" 1. Revising contributions\n\n__Response:__ Thank you for pointing this out. As shown in the revised manuscript, we would like to update our contributions as follow. i) We introduce a distance-based per-joint contact map that captures fine-grained human-scene interactions to avoid generating unrealistic human motions. ii) We further propose a two-stage pipeline whose first stage models the temporal dependencies of past contact maps and predicts the future ones, and whose second stage forecasts the future human motion conditioned on these contact maps.\n\n2. Is it necessary to use GRU and PVCNN?\n\n__Response:__ The human-scene contact maps at different frames have strong temporal dependencies. For example, the left and right foot of a walking person will alternately touch the floor. The design of our contact map prediction network is necessary to capture such temporal dependencies. More specifically, we propose to represent human-scene contact based on the pair-wise distance between human joints and scene points. Such a representation is smooth across time and, as shown in [22], can be efficiently encoded via DCT. Although given a scene point cloud and a historical human motion, one could use a simpler classification network to directly distinguish whether a scene point is in contact with a certain joint or not at a particular future time step, it would be extremely hard for such a network to also model the temporal dependencies between the contact maps at different time steps.\n\n3. Analysis about results at 0.5 second.\n\n__Response:__ As shown in Table 2, although our method does not perform well on the synthetic dataset GTA-IM for prediction at 0.5 second, it yields the best performance on the more complicated real world PROX dataset. We acknowledge that the benefits of our contact map could be limited on such short-term predictions because human motion in a very short future (e.g., less than 0.5 second) is highly constrained by the historical movements, e.g, the joint velocities and accelerations.\n\n4. Is contact priors enough? (question)\n\n__Response:__ Contact points alone only constrain the position of the corresponding contact joints. However, when combined with other natural constraints, such as the length of the human limbs and the temporal smoothness of human motion, it can provide a guidance for the other joints as well. For example, when we only observe contact points for a joint in the first and third frames, the position of that joint in the second frame can only be somewhere in between. Note that these natural constraints are encoded in the global translation loss (Eq. 11) and the local pose error (Eq. 12).\n\n5. Is global motion prediction necessary in motion forecasting network? (question)\n\n__Responses:__ Only a simple xyz averaging over all closest points of the joints near the feet joints may lead to ambiguities, because it is common to have root motion while the contact points remain the same. For example, during squatting, the contact points between our feet and the ground do not change while the location of our root joint (often defined at the pelvis) changes.\n\nIn the second stage of our pipeline (Figure 2 bottom), we concatenate all three inputs as a long vector and then feed it to the GRU. 
As expressed in Eq. 10, we concatenate the latent feature of the historical motion $\mathbf{H}_x\in \mathbb{R}^D$ (from another GRU), the root joint location $\hat{\mathbf{x}}^{\text{root}}_p\in \mathbb{R}^3$ at frame $p$, the local human pose $\hat{\mathbf{x}}\_{p-1}^{\text{local}}\in\mathbb{R}^{(J-1)\times 3}$ at frame $p-1$ and the contact points $\mathbf{q}_p\in\mathbb{R}^{J\times 4}$ at frame $p$. Note that all variables are first flattened and then concatenated. The resulting long vector is then used to predict the local human pose at frame $p$.\n\n6. Number of scene points v.s. performance (question)\n\n__Response:__ Since the maximum range of human motion in 2 seconds is fixed, it is not necessary to select all the scene points, which typically represent an entire building or room. During training, for each motion sequence, we randomly sample 5000 scene points that are within 2.5 meters of the root joint of the last observed pose. Note that we only sample 5000 points because of GPU memory limitations. During testing, we can use a different number of scene points. In the table below, we compare the results of using different numbers of scene points given the pretrained model on GTA-IM. Using more scene points only yields a slight improvement in path error.\n\n| No. of Scene points | Mean path error (mm) | Mean pose error (mm) |\n|:---:|:---:|:---:|\n| 5000 | 108.2 | 61.4 |\n| 10000 | 106.1 | 61.2 |\n| 15000 | 106.2 | 61.1 |\n| 20000 | 106.2 | 61.1 |\n\nAs to the computational efficiency, the evaluation of one sample takes around 90 ms during testing. Please see the revised manuscript for the updates.\n\nThank you for pointing out the typo; please see the revised manuscript for the updates.",
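For readers who want to make the input concatenation of Eq. 10 concrete, a minimal PyTorch-style sketch is given below; the dimensions, module sizes, and the `GRUCell`-plus-linear-head decomposition are illustrative assumptions, not the authors' released code.

```python
import torch

D, J = 128, 21                       # motion-feature size and joint count (assumed)
H_x = torch.randn(D)                 # latent feature of the historical motion
root_p = torch.randn(3)              # predicted root joint location at frame p
local_prev = torch.randn(J - 1, 3)   # local pose at frame p-1
q_p = torch.randn(J, 4)              # per-joint contact points at frame p (xyz + distance)

# Flatten all variables and concatenate them into one long conditioning vector.
z = torch.cat([H_x, root_p, local_prev.flatten(), q_p.flatten()])

gru = torch.nn.GRUCell(z.numel(), 256)
head = torch.nn.Linear(256, (J - 1) * 3)

h = gru(z.unsqueeze(0))                        # one recurrent step on the fused input
local_p = local_prev + head(h).view(J - 1, 3)  # residual update of the local pose
```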
" 3. Details about GRU and PVCNN (question)\n\n__Response:__ We replicate the latent feature of motion history from GRU several times and then concatenate the resulting feature with the output of the PVCNN encoder. Specifically, given the output of the PVCNN encoder $\\mathbf{H}_{\\text{pcd}}\\in \\mathbb{R}^{N\\times F}$, where $N$ is the number of points and $F$ is their feature dimension (point feature), and the latent feature of the historical motion $\\mathbf{H}_x\\in \\mathbb{R}^D$ (motion feature), we first replicate the motion feature $N$ times and then concatenate the resulting feature with the point feature to form a feature matrix $\\tilde{\\mathbf{H}}\\in \\mathbb{R}^{N\\times (F+D)}$. The PVCNN decoder takes $\\tilde{\\mathbf{H}}$ as input to produce a residual of the contact maps' DCT feature.\n\nThank you for pointing out the inaccurate equations. As shown in the revised manuscript, we will also update these equations as follows in the final version.\n$$\\hat{\\mathbf{C}} = \\text{IDCT}(\\mathbf{H} + \\mathcal{F}(\\mathbf{S},\\mathbf{H},\\mathcal{G}_x(\\mathbf{X})))$$\n$$\\hat{\\mathbf{Y}}^{\\text{root}} = \\mathcal{M}(\\tilde{\\mathcal{G}}_x(\\mathbf{X}),\\mathbf{Q})$$\n$$\\hat{\\mathbf{x}}_p^{\\text{local}}=\\hat{\\mathbf{x}}\\_{p-1}^{\\text{local}}+\\mathcal{G}(\\hat{\\mathbf{x}}\\_{p-1}^{\\text{local}},\\hat{\\mathbf{x}}_p^{\\text{root}},\\mathbf{q}_p,\\tilde{\\mathcal{G}}_x(\\mathbf{X}))$$\nHere, $\\mathcal{G}_x$, $\\tilde{\\mathcal{G}}_x$ refer to the GRU motion encoders.\n\n4. Comparison to the variant of the method in GTA-IM. (question)\n\n__Response:__ To the best of our knowledge, the only work that uses the GTA-IM dataset and released their official code is [R4]. We compare their results with ours in the table below. Note that the original model of [R4] observes past 2D human poses to predict future motion. We adapted their code to take 3D past human motion as input.\n\n| | | | Path | | | | | | Pose | | |\n|:---:|:---:|:---:|:---:|:---:|:---:|---|:---:|:---:|:---:|:---:|:---:|\n| method | 0.5s | 1s | 1.5s | 2s | mean | | 0.5s | 1s | 1.5s | 2s | mean |\n| Skeleton-graph [R4] | 91.4 | 153.9 | 222.9 | 313.7 | 162.7 | | 98.8 | 107.2 | 112.2 | 116.8 | 106.1 |\n| Ours | **58.0** | **103.2** | **154.9** | **221.7** | **108.4** | | **50.8** | **67.5** | **75.5** | **86.9** | **61.4** |\n\n5. Forecasting more than 5 seconds (question)\n\n__Response:__ We follow the setup in [6] to predict the future 60 frames (2 seconds) given the past 30 frames (1 second). To obtain motions in the further future, we iteratively feed the predicted future motion to the pretrained model. In the table below, we compare our results with the most competitive baseline DMGNN [18] on GTA-IM dataset. Our model still outperforms the baseline model in both path and pose error when predicting future motions up to 10 seconds. The large path and pose errors are expected because such long-term future is stochastic and should thus not be predicted with a deterministic model. As also mentioned by Reviewer EMAJ, human motion is multi-modal, especially for motion in the long-term future, e.g., more than 5 seconds. 
The most popular way of addressing this is to predict multiple possible future motions, i.e., stochastic human motion prediction, which will be part of our future work.\n\n| | | | Path | | | | | | | Pose | | | |\n|:---:|:---:|:---:|:---:|:---:|:---:|:---:|---|:---:|:---:|:---:|:---:|:---:|:---:|\n| method | 5s | 6s | 7s | 8s | 9s | 10s | | 5s | 6s | 7s | 8s | 9s | 10s |\n| DMGNN [18] | 1977.6 | 2334.2 | 2853.0 | 3245.3 | 3557.9 | 3938.4 | | 171.0 | 180.3 | 197.4 | 203.2 | 214.3 | 217.9 |\n| Ours | **1970.7** | **2290.1** | **2751.0** | **3043.7** | **3439.7** | **3712.2** | | **135.4** | **144.1** | **148.9** | **155.9** | **159.5** | **165.0** |\n\n6. Limitation regarding body surface contact (limitation)\n\n__Response:__ Thank you for the suggestion. Please see the revised manuscript for the updates. We will also add this to the limitation discussion in the final version.\n\n[R4] Mohamed, Abduallah, et al. \"Skeleton-Graph: Long-Term 3D Motion Prediction From 2D Observations Using Deep Spatio-Temporal Graph CNNs.\" ICCV Workshop 2021.",
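The iterative evaluation protocol in R5 above, repeatedly feeding predictions back as history to reach a 10 s horizon, can be sketched as follows; `model` and its signature are placeholders, while the 30/60-frame windows mirror the 1 s / 2 s setup described in the rebuttal.

```python
import torch

def rollout(model, history, scene, n_chunks):
    """Autoregressive long-horizon forecasting: each call to `model` maps the
    last 30 observed frames (1 s) to 60 future frames (2 s); the prediction
    then becomes the history for the next call."""
    chunks = []
    for _ in range(n_chunks):
        future = model(history, scene)   # (60, J, 3); assumed signature
        chunks.append(future)
        history = future[-30:]           # keep the most recent 1 s as context
    return torch.cat(chunks, dim=0)      # (60 * n_chunks, J, 3)
```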
" 1. Body surface contact\n\n__Response:__ In applications where body shape matters, we agree that it can be helpful to also consider the contact between the body surface and the scene. However, our per-joint contact map can also regularize the surface vertices given that the position of the human joint is often defined as a weighted sum of surface vertices (e.g., in SMPL-X [25]). Moreover, our joint-based contact map can also be easily extended to the surface contact map. Note that, at the time of submission, human-scene interaction datasets either only provide human skeletons, e.g., GTA-IM, or have noisy human body surfaces, e.g., PROX. The new dataset in [R1], which has recently been released, provides accurate scene and human body contacts. We would like to extend our method to this dataset in the future.\n\n2. Discussion about grasping\n\n__Response:__ Thank you for the suggestion. Below we discuss the major difference between our human-scene contact map and the hand-object contact map in grasping. As shown in the revised manuscript, we will also include this discussion in the final version. Although hand-object contact relationships have already been studied for the task of grasping [R2,R3], existing methods cannot be naively applied to human-scene interactions because their object-centric contact relationships tend to be static across time. For example, when we are using a hammer, we will grasp the handle tightly, and thus the contact region between our palms and the hammer does not change across time. By contrast, our human-scene contact maps changes across the frames for almost all human activities. To capture such temporal dependencies, we propose to represent the human-scene contact based on the pair-wise distances between the human joints and the scene points, and use a DCT-based temporal encoding strategy to capture the cross-frame dependencies of contact maps. (Note that we assume that Jiang et al., ICCV2021 and Brahmbhatt et al., CVPR2019 refer to [R2] and [R3], respectively.\n\n[R1] Shimada, Soshi, et al. \"HULC: 3D Human Motion Capture with Pose Manifold Sampling and Dense Contact Guidance.\" ECCV 2022.\n\n[R2] Jiang, Hanwen, et al. \"Hand-object contact consistency reasoning for human grasps generation.\" ICCV. 2021.\n\n[R3] Brahmbhatt, Samarth, et al. \"Contactdb: Analyzing and predicting grasp contact via thermal imaging.\" CVPR. 2019.",
" 1. Novelty in lieu of motion generation methods and comparison with SAMP in literature review.\n\n__Response:__ Although we both adopt multi-stage pipeline, the motivation and intention of each stage is different. SAMP [R1] first generates a goal location and orientation given a target object (goal generation). It then plans the path to the goal with searching techniques, such as A* (path planning). Finally, a motion net is used to generate a human pose at each frame. Our pipeline differs from that of SAMP [R1] in two ways. First, SAMP only considers interaction with a given object in the final frame, i.e, the goal, while our contact map prediction network does not rely on any object and aims to capture interactions with the entire scene in every frame. Second, SAMP's interaction representation is coarse, i.e., only a goal location and orientation. By contrast, our contact map models fine-grained relationships between every human joint and the scene. Such per-joint contact maps constrain both the global motion and the local human pose and can avoid issues like ''ghost motion''.\n\nThank you for the suggestion. As shown in the revised manuscript, we will discuss SAMP [R1] in the parts of the text where we already review works on scene-aware human motion generation.\n\n2. Lack of generative modeling.\n\n__Response:__ In this work, we tackle the problem of deterministic motion prediction. Therefore, given one history motion, we predict one future motion. Given a past motion, human movement in a short future is mostly deterministic because of physical constraints, e.g., Newton's laws. For example, a forward-walking person cannot suddenly turn backward. We acknowledge that human motion is multi-modal, especially for long-term future motions. This is addressed in the task of stochastic human motion prediction, which will be one of our future research directions.\n\n3. Lacking Qualitative Results\n\n__Response:__ For qualitative motion comparisons, please see our supplemental video. We will include more comparisons in the final video.\n\n[R1] Hassan, Mohamed et al. “Stochastic Scene-Aware Motion Prediction.” 2021 IEEE/CVF International Conference on Computer Vision (ICCV) (2021): 11354-11364.",
" This work proposes a two-stage human motion forecasting framework that explicitly models human-scene contact. It proposes to decouple the problem into two stages: past pose conditioned contact forecasting, and contact-conditioned pose forecasting. Specifically, it proposes to use a Discrete Cosine Transform (DCT) based network to predict contacts based on past contact, human pose, and scene point clouds. After the future contact is predicted, a series of networks are used to predict the human’s global translation, rotation, and body joint positions. ## Strength\n\n**Explicit Contact Modelling**\n\n- The proposed two-stage pipeline is intuitive and performs well in the context of human motion prediction. Contact and physical constraints play an important role in governing human motion and based on contact human motion are a lot less ambiguous. The idea of explicitly predicting the future contact of humans in a known scene to guide motion prediction is interesting.\n\n**Performance compared to State-of-the-art**\n\n- The proposed method outperforms SOTA methods in the motion prediction task.\n\n## Weakness\n\n**Novelty in lieu of motion generation methods**\n\n- Given the existence of methods such as SAMP [1], where human motion is generated based on path and scene context, the proposed framework has limited novelty. While the settings are slightly different (motion and interaction generation vs forecasting), the methodology is largely similar. The two-stage modeling has been largely explored (first generate goals or subgoals, then generate local motion), and this work mainly excels at better modality (explicit contact).\n\n**Lack of generative modeling** \n\n- While human motion is multi-modal, the lack of generative and stochastic modeling means that the estimated human motion could be memorizing past observed interactions (especially in PROX and GTA-IM datasets, where the motion are largely similar).\n\n**Lacking Qualitative Results**\n\n- Since motion is better seen in videos, it would be better if more qualitative samples are provided.\n\n[1] Hassan, Mohamed et al. “Stochastic Scene-Aware Motion Prediction.” 2021 IEEE/CVF International Conference on Computer Vision (ICCV) (2021): 11354-11364. It would be great if the authors could compare more closely with the motion generation literature (such as SAMP) and discuss differences. The authors have discussed limitations adequately. ",
" This paper proposes to tackle scene-aware 3D human motion forecasting by explicitly modeling the human-scene interactions, i.e., representing the contact between human body joints and scene points with a distance-based contact map. They also introduce a two-stage pipeline that first predicts the future contact map with the given motion history; then forecasts the future global translation and local poses. The proposed method can predict more physically plausible motions and avoid artifacts such as “ghost motion”.\n\n**Main Contributions:** This paper proposes a contact map representation explicit modeling human-scene interactions and propose a two-stage framework to forecast human motions with given motion histories and 3D scenes. - Strengths:\n\n1.\tThis paper explicitly models the human-scene interactions with a contact map, which measures the distance between the human joints and the scene points. The contact map enables more physically plausible and realistic human-scene interaction generation.\n2.\tThe proposed two-stage prediction pipeline disentangles contact prediction and human pose forecasting, thus capable of explicitly encouraging consistency between human motions and contact points in given 3D scenes.\n\n- Weaknesses:\n\n1. The contact map computed between joints and scene points is too coarse. It is more plausible to compute the contact map between body surface vertices and scene points because the human body surface rather than joints contact with the environment.\n2. The contact map has been widely used in grasp generation tasks[ Jiang et al., ICCV2021; Brahmbhatt et al., CVPR2019 ]. I think you should discuss the contact map in the literature review. And the idea of using the contact map to model human-scene interactions is not very appealing. 1. How do you use the motion features extracted by GRU in PVCNN? The PVCNN processes the 3D point cloud at point level, but the motion feature is the feature of the given motion history. Additionally, the notation $\\mathbf{X}$ in Eq.6 is inaccurate, as $\\mathcal{F}$ takes as input the latent feature of motion history instead of past human poses $\\mathbf{X}$. Similar problems in Eq.9 and Eq.10.\n\n2. The partial problem setting follows GTA-IM, which forecasts future human motions with a multi-stage pipeline. The quantitative evaluation should include the comparison between the proposed method and the variant of the method in GTA-IM.\n\n3. How about the model's ability of forecasting long-term motions (more than 5 seconds)? The current setting only predicts the future motions in 2 seconds which is too short for humans to have distinguishable movements. When considering the contact between human and scene, I think the shape-based human body representation, e.g., SMPL and marked-based representation, is more reasonable to model the contact between body surface and scene. This representation can produce a more fine-grained contact map, which thus can model more realistic details of human-scene interactions.",
" This paper promotes explicit contacts modeling when handling the challenging scene-aware human motion forecasting problem,\n\nTo achieve that, a dense scene-joint distance maps are utilized to densely model human dynamics when interacting with the static scenes,\nfollowed by a novel discrete pre-processing with DCT to get sparse principal features, while also enabling residual motion prediction of\nthe contacts point in frequency domain. \n\nA two-stage pipeline is assembled together to get better future motion predictions for all 3D body joints, with stage 1 combining GRU-based dynamics modeling and PVCNN-based 3D scene encoding for sequential contact distance maps prediction, and stage 2 carrying out contacts-guided sequential motion forecasting in stage 2.\n **Strengths**\n[Novelty]\nAs mentioned above, dynamic contacts modeling and the way to use it are the shining points in this paper, in which the authors leverage an effective point-joint distance field followed by a novel frequency transformation(DCT) to better capture sparse smooth motion patterns. Per-joint closest scene point is used to better guide the motion prediction. This design \n\n[Completeness]\nThree baselines and their proposed methods are validated in two common datasets, including both real and synthetic ones. They also provide both quantitative and qualitative results, including a video demo in the supp. \n\n[Effectiveness]\nThe author gets consistently better motion forecasting results(global and local) on long-term motion predictions on two benchmarks compared to all the baselines.\n\n**Weaknesses**\n1. Firstly, even though the whole method seems to be novel, I do not think that line 49-54 is carefully written to\nwell capture the whole work, in (ii) the two-stage pipeline itself should not be considered as a key contribution,\nit has overlaps with (i), also I did not see any unique technical contribution statements when tackling this conditional\nmotion synthesis task. The author needs a better clarification on the contributions.\n\n2. Even though so many efforts(DCT/IDCT transformation, GRU models, PVCNN) have been conducted to get better\ncontact maps forecasting, it seems that only one nearest scene point is selected and used per joint point, I am\nwondering whether such a heavy pipeline is really necessary. \n\n3. The result in table 1 seems to show that the proposed method does not perform well enough on short-term predictions, even though it performs consistently better on predictions >= 1s\n 1. Related to Weakness 2, is this single closest contact priors\nwould be enough to guide the motion forecasting network? The single contact point looks like a special form of\nsigned distance function to me.\n\n2. The motion forecasting network is still a very implicit design when leveraging contact points,\nlike for the global translation, the author seems to want the MLP module to learn root motion implicitly from all the closest points,\nvery similar to a learnable center point. If this is possible, I am wondering how this implicit design would be better than\na simple xyz averaging over all closest points belonging to the feet-nearby joints (maybe plus a certain offset).\nPlus, it is not clear that in Figure 2, how does GRU take all three inputs to update the local joints offsets prediction?\n\n3. How does number of scene points affect the prediction performance? Do we need to select all the scene points in order to train the stage 1 model? 
The authors do not mention the computational efficiency clearly in the implementation section. See above.\nMinor: Line 179 should be '...are then fed into...' "
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"afWvOKiqRMk",
"htr-86LXGpE",
"YQIgRUeX0vQ",
"ms6dAWB_Cg4",
"guh8pBoYUeG",
"_507hdQuFVu",
"SdPix4XY7Np",
"nips_2022_LIKlL1Br9AT",
"nips_2022_LIKlL1Br9AT",
"nips_2022_LIKlL1Br9AT"
] |
nips_2022_7CONgGdxsV | Understanding Programmatic Weak Supervision via Source-aware Influence Function | Programmatic Weak Supervision (PWS) aggregates the source votes of multiple weak supervision sources into probabilistic training labels, which are in turn used to train an end model. With its increasing popularity, it is critical to have some tool for users to understand the influence of each component (e.g., the source vote or training data) in the pipeline and interpret the end model behavior. To achieve this, we build on Influence Function (IF) and propose source-aware IF, which leverages the generation process of the probabilistic labels to decompose the end model's training objective and then calculate the influence associated with each (data, source, class) tuple. These primitive influence scores can then be used to estimate the influence of individual components of PWS, such as the source vote, supervision source, and training data. On datasets of diverse domains, we demonstrate multiple use cases: (1) interpreting incorrect predictions from multiple angles, which reveals insights for debugging the PWS pipeline, (2) identifying mislabeling of sources with a gain of 9\%-37\% over baselines, and (3) improving the end model's generalization performance by removing harmful components in the training objective (13\%-24\% better than ordinary IF). | Accept | This paper proposes source-aware Influence Function (IF) to study the “influence” of individual data, source, and class tuples on the performance of different label functions in the programmatic weak supervision paradigm. The proposed method has the capability to work with diverse data domains (tabular, image, textual). An ample number of datasets are used in the experiments.
The reviewers agree that the proposed method is interesting and sound, the experiments are thorough, and the results provide valuable insights for future work. The concerns and questions raised by the reviewers are properly addressed by the authors' response. | test | [
"QU1Q-MNYVv",
"iOlUfX0XO4G",
"pXFDuGrnq5i",
"LZnP-mvyLCx",
"DdtfKqcMB96",
"K6R1BevR9Hl",
"Pw9lhotw6qy",
"OKA67Zxes23"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" \n| | | MV | | | | | | DS | | | | | Snorkel | | | |\n| ------------ | :--: | :-------: | :----: | :-------: | :-------: | :-------: | :--: | :-------: | :-------: | :-------: | :-------: | :--: | :-------: | --------- | :-------: | :-------: |\n| **Dataset** | | **ERM** | **IF** | **GIF** | **RW** | **WM** | | **ERM** | **IF** | **RW** | **WM** | | **ERM** | **IF** | **RW** | **WM** |\n| Census | | 0.579 | 0.642 | **0.653** | 0.649 | 0.648 | | 0.516 | **0.605** | 0.596 | 0.590 | | 0.554 | 0.610 | 0.624 | **0.630** |\n| Mushroom | | 0.893 | 0.913 | 0.952 | 0.952 | **0.958** | | 0.853 | 0.896 | **0.899** | 0.850 | | 0.863 | 0.929 | **0.936** | 0.928 |\n| PW | | 0.844 | 0.876 | 0.873 | 0.877 | **0.878** | | 0.799 | 0.866 | 0.863 | **0.870** | | 0.807 | 0.867 | 0.867 | **0.875** |\n| Spambase | | 0.783 | 0.881 | 0.867 | 0.872 | **0.883** | | 0.690 | 0.867 | **0.870** | 0.865 | | 0.842 | 0.870 | **0.901** | 0.901 |\n| IMDb | | 0.789 | 0.789 | 0.740 | 0.790 | **0.793** | | **0.626** | 0.612 | 0.626 | 0.626 | | **0.786** | 0.786 | 0.786 | 0.786 |\n| Yelp | | 0.839 | 0.839 | **0.847** | 0.842 | 0.833 | | 0.853 | 0.853 | **0.862** | 0.850 | | 0.850 | 0.850 | **0.852** | 0.845 |\n| Youtube | | 0.790 | 0.810 | **0.887** | 0.861 | 0.870 | | 0.824 | **0.888** | 0.858 | 0.821 | | 0.858 | 0.898 | **0.899** | 0.883 |\n| DN-real | | 0.892 | 0.944 | 0.917 | **0.966** | 0.957 | | 0.685 | 0.920 | **0.966** | 0.948 | | 0.849 | 0.954 | 0.960 | **0.966** |\n| DN-sketch | | 0.552 | 0.632 | 0.538 | **0.682** | 0.664 | | 0.484 | 0.507 | **0.673** | 0.578 | | 0.538 | 0.659 | **0.664** | 0.628 |\n| DN-quickdraw | | 0.420 | 0.764 | 0.400 | **0.780** | 0.724 | | 0.544 | 0.700 | **0.740** | 0.736 | | 0.360 | **0.720** | 0.668 | 0.560 |\n| DN-painting | | 0.656 | 0.821 | 0.763 | 0.818 | **0.860** | | 0.695 | **0.847** | 0.831 | 0.847 | | 0.614 | 0.815 | **0.854** | 0.834 |\n| DN-infograph | | **0.612** | 0.586 | 0.566 | 0.526 | 0.599 | | 0.553 | 0.559 | **0.579** | 0.546 | | 0.539 | 0.520 | 0.546 | **0.579** |\n| DN-clipart | | 0.691 | 0.711 | 0.691 | **0.742** | 0.732 | | 0.639 | 0.691 | **0.711** | 0.670 | | 0.701 | **0.804** | 0.794 | 0.784 |\n| Avg. | | 0.718 | 0.785 | 0.746 | 0.797 | **0.800** | | 0.674 | 0.755 | **0.775** | 0.754 | | 0.705 | 0.791 | **0.796** | 0.785 |",
" Thank you for catching the grammar issue and other comments (Q6 \\& Q7)- we have fixed these in our latest version. We answer your other questions as below.\n\n_**Q1 \\& W1.a**: The two methods of calculating source-aware IF correspond to two different ways of perturbing the training loss. How do these 2 differ (maybe through examples) in identifying the most responsible (i,j,c) tuples? From tables 3 \\& 4, on average, RW seems to be doing better than WM. It would be nice to have a discussion on why that may be._\n\n**R1**: Thank you for asking this question so that we could further explain the difference between RW and WM.\nFirst, RW has convenient computation only when the $\\sigma$ is identity function, while WM is agnostic to the $\\sigma$ function, so WM is more general than RW in terms of computation. However, RW has better theoretical performance guarantee than WM, because the WM case of Theorem 1 requires more assumptions to hold than its counterpart of RW. We think that this explains why RW leads to better overall test loss than WM. We will also add some discussion to the paper.\n\n_**Q2 \\& W1.b**: What is the significance of RelatIF here? How is it better/different from ordinary IF and source-aware IF?._\n\n**R2**: We include the RelatIF to show that our framework is compatible with state-of-the-art variant of IF method, ie, RelatIF can be combined with the proposed RW and WM, which are based on original IF, leading to R-RW and R-WM. As for the difference between ordinary IF and RelatIF, the latter additionally considers and constraints the global effect of a sample on the model to regularize the effect of some dominating samples. These dominating samples could be outlier and identified as the most influential sample for most of test data, making them poor choices for explaination.\n\n_**Q3**: some examples from other datasets._\n\n**R3**: We added an additional example of Yelp to the appendix (Appendix G.6), a text sentiment classification dataset. And this example leads to similar observations as what we get from the visual example in the main body of the paper.\n\n_**Q4**: How much does correcting the identified mislabelings affect the end performance? Is there a minimum number of corrections to be made to improve the classification results? I.e, how many mislabelings need to be identified to positively make an impact on the results?_\n\n**R4**: \nFirst, our method, as well as the ordinary IF method, can only identify most negatively influential training samples/labelings, which are then removed from the training to improve the test loss. One could incorporated our method in a human-in-the-loop system to leverage human expertise to correct those samples if they're corrupted.\nAs for how many mislabelings need to be removed, we conducted an additional study trying to answer this (see Figure 3 in Appendix G.5).\nWe measure the maximal removing portion (MRP) $\\beta$. That is, if we remove more than $\\beta$\\% top-ranked negatively influential labelings, the resultant test loss after re-training will be larger than original test loss without any removing. In other words, removing any portion of labelings between 0\\% and $\\beta$\\% has a positive impact on the test loss. We take label model MV and our RW method as an example for this study. We found that the MRP $\\beta$ is closely correlated to the accuracy of training labels produced by the label model. 
This is quite intuitive: for a high-quality training set, the MRP would be low since we do not have to remove many labelings when the training labels are already accurate, while for low-accuracy cases, the MRP would be high since most of the training labels are incorrect, so we could remove a large portion of labelings but still be able to improve the test loss.\n\n_**W2**: What happens if this process is repeated multiple times after removing 1 data point each time or the top n most responsible data points are removed?_\n\n**R5**: Yes, the IF scores produced by our method and baselines can all be used in such an iterative manner, while in experiments, we follow the existing convention to evaluate the usefulness of IF scores by one round of removal, since the study of the best strategy for leveraging IF scores is orthogonal to our study, and we leave it as future work.\n\n_**Q5 \\& W3**: the performance of classification accuracy/F1 score on an unseen test set._\n\n**R6**: We conducted new experiments on improving the classification accuracy and F1 score, and included those results in Appendix G.7. From the results, we can draw a similar conclusion as Table 4 in the main body, i.e., our methods outperform baselines in most cases and achieve better averaged performance, which shows that the improvement in test loss could be translated to that of classification metrics.\n\n",
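A small sketch of the removal protocol discussed in R4/R5 above: rank the (data, source, class) labelings by their influence scores, drop the top beta% most harmful ones, and retrain on the rest. The sign convention (more negative = more harmful) and the random scores are assumptions of this illustration.

```python
import numpy as np

def keep_after_pruning(if_scores, beta):
    """if_scores: 1-D array with one source-aware IF score per labeling.
    Returns the indices of the labelings kept after dropping the beta percent
    most harmful (most negative) ones."""
    n_drop = int(len(if_scores) * beta / 100.0)
    order = np.argsort(if_scores)          # most harmful first
    return np.sort(order[n_drop:])

scores = np.random.randn(1000)
kept = keep_after_pruning(scores, beta=5.0)   # retrain on `kept` only
```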
" We thank you for your detailed and helpful comments! We answered your questions as below.\n\n\n\n\n_**W1.1**: Discussion of the inner workings of each label model._\n\n**R1**: Thanks for point this out! We added a brief explanation of label models in Section 3.1, and in the Appendix A, we also discuss the inner workings of each involved label model, as well as a figure for illustration purpose. \n\n_**W1.2**: Comparison of the predicted labels of the original and approximated label models._\n\n\n**R2**: We evaluated the mean squared error (MSE) and the expected disagreement (ED) of the label model and its approximated version (the results can be found in Table 10 of appendix and we also put it below), where the MSE is operated over the predicted label posterior and the ED is $E[Y_1\\neq Y_2]$ over data samples ($Y_1$ and $Y_2$ are predicted labels of label model and its approximation). From the results, we can see that both metrics have quite low value across datasets for binary classification, while they are relatively larger for multi-class classification, which is because label model for binary classification is much easier to approximate. Even for multi-class classification, all the EDs are still less then 22\\%, which indicates the approximated label model could replicate most of the predicted labels of original label model.\n\n| Dataset | Metric | Census | Mushroom | PW | Spambase | IMDb | Yelp | Youtube | DN-real | DN-sketch | DN-quickdraw | DN-painting | DN-infograph | DN-clipart | Avg. |\n| :-----: | :----: | :-----: | :------: | :-----: | :------: | :-----: | :-----: | ------: | :-----: | :-------: | :----------: | :---------: | :----------: | :--------: | :-----: |\n| DS | MSE | 0.00041 | 0.00000 | 0.00015 | 0.00003 | 0.00033 | 0.00004 | 0.00094 | 0.02127 | 0.02910 | 0.01689 | 0.01620 | 0.01735 | 0.02620 | 0.00992 |\n| | ED | 0.00130 | 0.00000 | 0.00150 | 0.00000 | 0.00000 | 0.00000 | 0.00398 | 0.14766 | 0.21159 | 0.18400 | 0.14866 | 0.12861 | 0.15912 | 0.07588 |\n| Snorkel | MSE | 0.00097 | 0.00003 | 0.00111 | 0.00011 | 0.00005 | 0.00050 | 0.00024 | 0.00995 | 0.00729 | 0.00370 | 0.01250 | 0.00749 | 0.00992 | 0.00414 |\n| | ED | 0.00400 | 0.00031 | 0.00300 | 0.00056 | 0.00011 | 0.00322 | 0.00066 | 0.13993 | 0.10692 | 0.09100 | 0.19659 | 0.04369 | 0.12807 | 0.05524 |\n\n\n\n_**W1.3**: The advantages of proposed method on approximated label models over a baseline that aggregates instance-level IF score as source-level IF score._\n\n**R3**: \nWe would like to point out that when the $\\sigma$ is not identity function, the baseline mentioned also relies on the approximated label model to be valid, because in this case each source contributes to an instance via a complex normalization function, eg, softmax. 
Thus, here the IF score is not addable, and this motivates us to use an approximated label model so that the IF score is addable.\nWhen both methods work on approximated label models, they both suffer from the same approximation error introduced by the label model approximation, but in practice, one could use our primitive source-aware IF to remove some of the (possibly harmful) votes of an individual source, while the baseline can only be used to remove a source as well as all of its votes.\nActually, this baseline is the GIF method we included in Table 4.\n\n_**W2**: Leveraging clean labeled data._\n\n**R4**: \nYes, one can use clean labeled data and still leverage our method to explain the effect of upstream components.\nWhen we use clean labeled data via an additional loss term, let's say $\\ell_{clean}$, we could still compute the source-aware IF in the same way, since the new loss $\\ell_{clean}$ does not depend on the sources and therefore would not affect our derivation.\nIn sum, using clean labeled data to train a better model and our goal of understanding the upstream components are orthogonal to each other.\n\n\n_**Q1**: What is the size of the validation set used in this paper?_\n\n**R5**: For all the datasets, we use the standard train/validation/test split, and the validation size can be found in Table 6 of the appendix.\n\n_**Q2**: Does it make sense to use at least a few labeled data per class while training either the label model or the end model?_\n\n**R6**: Yes, one can definitely use labeled data to improve the performance, while our method can still be used in these cases to help understand the effect of components on model predictions for new data samples, as mentioned above (R4).",
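For concreteness, when the scores are additive (identity sigma), the relationship between the primitive (data, source, class) scores and the coarser quantities mentioned in R3, namely per-vote, per-source (GIF-style), and per-example influence, reduces to summation over axes. A NumPy sketch with an assumed random score tensor:

```python
import numpy as np

n, m, c = 500, 10, 2                    # data points, sources, classes (assumed)
tuple_if = np.random.randn(n, m, c)     # primitive IF score per (data, source, class)

vote_if = tuple_if.sum(axis=2)          # influence of each individual source vote
source_if = tuple_if.sum(axis=(0, 2))   # influence of a whole source (GIF-style)
data_if = tuple_if.sum(axis=(1, 2))     # ordinary per-example influence
```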
" Thank you for your valuable feedback!\nAlthough our method only works for the two-stage method, we think it could inspire future work for studying the effect of components in the one-stage method, where the way of leveraging source votes is more complicated and therefore more challenging.\nAs for performance on deep neural architectures, we believe that our work set the foundation for understanding programmatic weak supervision with a more complicated model. We would like to leave those explorations to the future.",
" We thank all the reviewers for their helpful feedback on the submission! \nBased on the valuable comments and suggestions, we have added some new content/experiments (**Appendix G.4-G.7**) to the latest draft (text highlighted in blue). Please note that the full detail for some of the new results are in the appendix rather than the response text below due to length, sorry for the inconvenience!\n\nWe’ve provided replies to individual reviewer comments. Please let us know if you have additional questions or need further clarifications. Thanks again!",
" This paper focuses on weakly-supervised classification tasks where (i) multiple weak sources (i.e., labeling rules) are used in a label model (e.g., weighted majority voting) to generate soft labels for unlabeled data; (ii) the soft labels are used to train an end model (e.g., logistic regression, neural classifier) with a smooth cross-entropy loss. \n\nThe paper proposes a method to evaluate the influence of each weak source on the end model's performance by considering the choice of the label model, a critical component in this weak supervision pipeline. The main idea behind the proposed method is to decompose the training loss of the end model into multiple components corresponding to the individual weak sources. Then, two techniques are proposed to compute a source-aware influence function, namely reweighting and weight-moving. In cases of label models involving an exponential function in their generation process, the paper applies the proposed source-aware influence function techniques by training an approximate label model based on the identify function.\n\nThe paper evaluates and compares the proposed method with various alternatives for multiple use cases and datasets across tabular, text, and image modalities. Strengths:\n* Overall, the paper is clearly written and provides clear and substantiated arguments.\n* The paper addresses an interesting and challenging problem. Evaluating the influence of weak sources on the end model's performance is an increasingly important research direction given the increasing adoption of programmatic labeling in both research and industrial settings. \n* The proposed method shows promising experimental results on an extensive experimental evaluation of multiple techniques on several scenarios and datasets. It is demonstrated that including the label model into the computation of IF helps across multiple use cases. \n\n\nWeaknesses: \n* It is hard to understand why (in theory) the proposed method should be effective across label model choices.\n * The proposed method unifies three types of label models into a single equation (Eq. (6) in Section 3.1) without having discussed the inner workings of each label model (e.g., weighted aggregation in Snorkel). Thus, it is hard for a reader not familiar with these label models to understand Eq. (6) and possibly the rest of the method. \n * For cases where $\\sigma(\\cdot)$ is not the identity function, it is not clear whether the proposed method explains the real influence of each component. The approximated label model (that uses the identity function) is simpler and is not guaranteed to have the same behavior as the original label model. (Indeed, the two label models lead to different rankings as shown in Figure 1. Thus, it is not clear whether the estimated influence scores explain the real influence of each component. In addition to comparing the rankings, it would also help to compare the predicted labels of the original and approximated label models to give an idea of how well the latter approximates the former. \n * It is not clear why the proposed method (resorting into approximations of label models) would in theory be more effective than a simpler method that first computes (source-agnostic) influence scores for each instance and then aggregates instance-level scores into source-level scores. \n* The problem addressed in this paper is a simplified (and possibly unrealistic?) 
case compared to the problems addressed in practice in weak supervision.\n * According to the problem definition, no labeled data are assumed for training; however, labeled data are considered in a validation set. In practice, (at least a few) labeled data are considered during training (either in the label model or in the end model) and have been shown to improve the end model's performance. For example, clean labeled data could be combined with weakly labeled data with weights. It is not clear whether the proposed method can be applied in this setting. * What is the size of the validation set used in this paper? Does it make sense to use at least a few labeled data per class while training either the label model or the end model? \n Yes.",
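For readers unfamiliar with the label models this review refers to, below is a toy weighted majority vote that turns source votes into probabilistic labels, the kind of aggregation Eq. (6) reportedly generalizes; the abstain encoding, uniform weights, and eps-smoothing are assumptions of this sketch, not the paper's exact formulation.

```python
import numpy as np

def soft_labels(votes, weights, n_classes):
    """votes: (n, m) integer votes in {0..C-1}, or -1 for abstain.
    weights: (m,) per-source weights. Returns (n, C) probabilistic labels."""
    tally = np.full((votes.shape[0], n_classes), 1e-12)  # eps avoids 0/0 on abstains
    for j, w in enumerate(weights):
        v = votes[:, j]
        mask = v >= 0                                    # skip abstaining sources
        tally[mask, v[mask]] += w
    return tally / tally.sum(axis=1, keepdims=True)      # identity-sigma normalization

votes = np.array([[0, 1, 0], [1, 1, -1]])
print(soft_labels(votes, weights=np.ones(3), n_classes=2))
```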
" This paper introduces a method called source-aware Influence Function (IF) to study the “influence” of individual data, source and class tuples on the performance of different label functions in the programmatic weak supervision paradigm. This differs from previous methods that calculate ordinary IF which can only identify influential training data and not the labeling function or the training data source responsible for mislabeling. The paper introduces a framework for estimating the influence of each training data on the test loss and prediction. The authors introduce two methods to calculate this source-aware IF. They support their method with theorems and proofs. They also show a variety of use-cases on different datasets as to how source-aware IF can improve the performance and understanding of PWS pipelines. Strengths:\n1. The paper tackles a very interesting and important aspect of understanding programmatic weak supervision. The problem is well motivated. \n\n2. The paper includes theoretical proofs of the claims and methods used for developing source-aware IF.\n\n3. The generalization of the generation process of probabilistic labels using eq 6 is nice.\n\n4. The paper includes experiments on a wide range of benchmark datasets and classification tasks.\n\n5. The contributions of the paper are clear, original and significant. \n\nWeaknesses:\n1. The motivation for certain aspects of the method and experiments are unclear. For e.g:\n a. The intuition behind why there are two methods for calculating source-aware IF is not clear - why reweighting vs weight-moving. There isn’t a clear study of how one is better or worse than the other and in what cases and why. \n b. The significance of RelatIF in the context of the contributions of this paper is not clear. \n\n2. In the experiments, the training data most responsible for mislabeling is removed and the resulting test loss is reported. However, it is unconvincing if this is the best strategy. What happens if this process is repeated multiple times after removing 1 data point each time or the top n most responsible data points are removed?\n\n3. The test loss is shown to reduce after removing the most negatively influencing tuples, however, there is no mention of how this may translate to classification accuracy or other metrics. \n 1. The two methods of calculating source-aware IF correspond to two different ways of perturbing the training loss. How do these 2 differ (maybe through examples) in identifying the most responsible (i,j,c) tuples? From tables 3 & 4, on average, RW seems to be doing better than WM. It would be nice to have a discussion on why that may be. \n\n2. What is the significance of RelatIF here? How is it better/different from ordinary IF and source-aware IF? R-RW seems to be outperforming RW (table 3) for Majority Vote but not for others. \n\n3. It would also be interesting to look at some examples from other datasets (tabular/textual) and see which LFs are responsible. In many real world non-image applications, the LFs may not be created similar to the experimental setup in the paper for the DN datasets. \n\n4. Having identified which LFs have mislabelings, it would be nice to see what can be done about it. For eg, how much does correcting these mislabelings affect the end performance? Is there a minimum number of corrections to be made to improve the classification results? I.e, how many mislabelings need to be identified to positively make an impact on the results?\n\n5. 
It would also be interesting to look at the difference in classification accuracy/F1 score on an unseen test set with the usual PWS setup and after removing the tuples most responsible for mislabeling (based on a validation set). \n\n6. In Table 2, it might be more helpful to have the ‘Misclassified Test Data’ as the first column, since the explanation goes from misclassification to the responsible components. \n\n7. There is a small grammatical error in the Table 3 caption - “The larger the AP is, the better the method identify mislabeling of LFs.”\n The authors have mentioned some limitations of their work. The proposed framework may be difficult to implement in the case of more complicated label models. It has not been defined for one-stage PWS pipelines, which are becoming more popular recently. The paper does not include experiments on using complicated deep learning architectures as the end model. Additionally, this framework is also limited in its ability to decide/recommend how many samples (responsible for mislabeling) need to be discarded or downweighted for optimal classification performance. ",
" ## Summary\nIn this work, the authors propose a general framework for quantifying the influence of individual PWS components on the end model. The proposed framework can be used to look into the influence of individual PWS components' effects on the end model. A source-aware influence function is proposed, which leverages the knowledge of the probabilistic label generation process and uses that knowledge to decompose the training loss into multiple terms and eventually individual influence scores. These influence scores are used to quantify the effect of PWS components (source vote, supervision source, and training data). \n ## Strengths\n1. The effective use of the source-aware influence function along with the knowledge of the probabilistic label generation process provides a fine-grained analysis tool for the behavior of the end-model.\n2. The influence of individual PWS components can be analyzed with the proposed framework.\n3. The proposed framework has the capability to work with diverse data domains (tabular, image, textual). Secondly, an ample number of datasets are used in the study. \n4. The most responsible LFs as well as their capacity to mislabel can be identified, which is different from identifying the influential training data. \n5. The proposed model steers the end-model towards generalization, which is always desirable.\n6. The paper is well written.\n\n\n## Weaknesses\n1. The proposed framework hasn’t been shown to work with one-stage methods.\n2. Experimental results are reported for two-layer neural networks. It is still to be seen how the proposed method performs with intricate and deep neural architectures.\n n/a n/a"
] | [
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
4,
2,
3
] | [
"iOlUfX0XO4G",
"Pw9lhotw6qy",
"K6R1BevR9Hl",
"OKA67Zxes23",
"nips_2022_7CONgGdxsV",
"nips_2022_7CONgGdxsV",
"nips_2022_7CONgGdxsV",
"nips_2022_7CONgGdxsV"
] |
nips_2022_k7FuTOWMOc7 | Elucidating the Design Space of Diffusion-Based Generative Models | We argue that the theory and practice of diffusion-based generative models are currently unnecessarily convoluted and seek to remedy the situation by presenting a design space that clearly separates the concrete design choices. This lets us identify several changes to both the sampling and training processes, as well as preconditioning of the score networks. Together, our improvements yield new state-of-the-art FID of 1.79 for CIFAR-10 in a class-conditional setting and 1.97 in an unconditional setting, with much faster sampling (35 network evaluations per image) than prior designs. To further demonstrate their modular nature, we show that our design changes dramatically improve both the efficiency and quality obtainable with pre-trained score networks from previous work, including improving the FID of a previously trained ImageNet-64 model from 2.07 to near-SOTA 1.55, and after re-training with our proposed improvements to a new SOTA of 1.36. | Accept | Ratings: 8/9/8/7.
Confidence: 4/4/4/5.
Discussion among reviewers: No.
Summary: This is an excellent paper analyzing the design space of diffusion models. The paper clarifies the design space by disentangling the effects of (1) parameterization, (2) sampling, and (3) training separately. The reviewers uniformly agree that the paper is well written and that the empirical results are impressive. Given the enormous interest in diffusion models in the research community, and the likely high impact of advancements in this subfield, this paper is well timed, and will probably be very well received by the NeurIPS community.
Decision: I highly recommend accepting this paper. | train | [
"b1M7dY_e9C",
"2jNQZ5NMJK4",
"JK2tbKgI6h_v",
"VsMxG6fNCi1",
"Q8k7apk1UdC",
"AZvQgzXI_SN",
"AqtWGbdOLDr",
"kmfBuNSIYOm",
"EHQP3cnRbLU",
"TwVg5ExbRhs",
"XyqsKy2paNd"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for clarifying the motivation of your tailored stochastic sampler. My rating about this paper stays unchanged.",
" Thanks for your response. I'll stick to my original rating recommending a Strong Accept.",
" Thank you for the response. I am looking forward to see Fig. 5(b) for ImageNet in the camera-ready version.",
" We are thankful for the pointers to additional previous work. We will amend the paper accordingly.\n\nUnfortunately we lacked the compute capacity to determine curves such as in Fig. 5(b) for ImageNet-64, as that would have required a training effort we could not afford prior to submission. It is our expectation too that ImageNet is diverse enough w.r.t. network capacity that stochasticity would be beneficial, and we are extremely interested in knowing if this is the case. We should have results soon, and plan to add those in the paper before the camera-ready deadline.\n\nThank you for pointing out the connection to v-diffusion parameterization in [PD]. Indeed, they make an observation that also motivates our skip connection: at high noise levels, predicting the noise leads to arbitrarily large amplification of the network output, along with any errors it makes. Their proposed mixture prediction can be interpreted in our framework as a noise level dependent skip connection. We will acknowledge this connection in the camera-ready revision.\n\nWe have not considered including variance learning in our model so far, but it could be an avenue for future development.\n",
" The case-by-case grid search for stochasticity hyperparameters is indeed quite expensive. We hope that countering the image degradation effects illustrated in Fig. 13 can be achieved in other ways in the future, reducing the number of these hyperparameters or at least making them less sensitive to the dataset.\n\nWe did not perform measurements in metrics other than FID, and thus have not analyzed the diversity vs fidelity tradeoff resulting from different design choices. Given that this tradeoff can be adjusted directly via, e.g., classifier-free guidance, a proper study would have to take such methods into account. It may well be that some of our advocated design choices are not optimal for, say, systems that strongly favor fidelity at the expense of diversity (e.g., Dall-E 2, Imagen), and finding out which design choices suit such systems best is an interesting topic for future work.\n",
" There are several motivations behind the use of a tailored stochastic sampler, some of which are only tersely hinted at in the manuscript due to space constraints. Space permitting, we are happy to expand this section with additional arguments upon request:\n\n- General-purpose SDE solvers must be prepared to tackle fully general SDEs correctly. The SDE in Eq. 6 is a simpler special case, where in particular the diffusion term is independent of $\\boldsymbol{x}$.\n\n- The noise replacement schedule $\\beta(t)$ in the SDE formulation is a somewhat awkward way to control stochasticity, as its effect depends on discretization step lengths. At a given noise level, the distribution $p(\\boldsymbol{x};\\sigma)$ corresponds to the data manifold mollified by $\\sigma$, suggesting that the appropriate scale for discrete exploration jumps would be proportional to $\\sigma$. Thus, at each optimization step we should replace the same proportion of noise, but implementing this in the SDE would require retrofitting details of the discretization step lengths into $\\beta(t)$. Instead of doing this, we view the SDE as an inspiration for fusing an explicit Langevin sampling mechanism with the ODE.\n\n- When discretizing Eq. 6 by Euler-Maruyama, there is a subtle discrepancy between the contributions of the denoising and noise injection terms. Between the sub-steps of our algorithm, $\\boldsymbol{x}$ and $\\sigma$ (or $t$) supplied to $D_\\theta$ correspond to the state after noise injection, whereas standard Euler-Maruyama can be interpreted as first adding noise and then performing an ODE step, not from the intermediate state after noise injection, but assuming $\\boldsymbol{x}$ and $\\sigma$ remained at the initial state at the beginning of the iteration step. In the limit of $\\Delta_t$ approaching zero there is no difference, but the distinction appears to become significant when pursuing low NFE with large steps. Experiments not included in the paper indicated that Euler-Maruyama required larger and potentially discretization-dependent noise level corrections via $S_\\text{noise}$, whereas in our formulation the optimal value for $S_\\text{noise}$ was closer to 1 and not sensitive to such hyperparameters.\n\n As we did not fully analyze these findings and have no firm theory to support them, we chose to steer away from speculating on the merits of the two-step method vs Euler-Maruyama or higher-order SDE solvers in general, and instead focused on the practical results. The benefit of our approach is evaluated numerically in Figure 4, including a comparison with standard Euler-Maruyama, a predictor-corrector sampler, and a higher-order SDE solver tailored for diffusion models. However, we did not make an attempt to retrofit the noise replacement schedules into the comparison SDEs.\n",
" We agree that the split between main paper and appendix is not entirely satisfying. Our original draft was considerably longer than permitted by the conference, and some interesting findings and analysis unfortunately had to be moved to the appendix.\n\nThe differences between Jolicoeur-Martineau et al. and our method (Fig. 4) include the choices for noise and scale schedules and the discretization time steps, all established in Section 3, as well as our SDE solver limiting the stochasticity to noise range determined by $S_\\text{tmin}$ and $S_\\text{tmax}$ and adjusting the level via $S_\\text{noise}$. Our SDE solver is also structured slightly differently. As Fig. 4 illustrates, these choices have a major impact on result quality, and using a 2nd order solver does not in itself guarantee low FIDs.\n\nThe statement about modularity refers to the theoretical independence between components listed in Table 1, i.e., that changing one component does not necessitate changes elsewhere in order to, e.g., maintain the property that the model converges to the data in the limit. In this sense, one could indeed mix and match, say, preconditioning schemes from different models, although in practice training might be more difficult, and the results might be worse. We shall clarify this in the text.\n\nWe consider arbitrary noise level and scale schedules in Sections 2 and 3 in order to properly analyze previous methods in our theoretical framework. Based on evaluating VE, VP and DDIM in this common framework, we then standardize to $\\sigma(t)=t$ and $s(t)=1$ and reflect this in the formulas in Section 4 onwards and Algorithm 2 to reduce the (significant!) notational clutter.\n\nImageNet-64 was not included in Section 5 evaluation due to our lack of compute capacity to train the model in time for submission. All prior experiments could be performed with pre-trained networks, but those in Section 5 could not, which limited us to smaller datasets. We should have results for ImageNet-64 fairly soon, though, so we can still add those in the camera-ready version.\n\nWe did not re-evaluate earlier design choices after arriving at our final models. Like well-optimized models in general, we believe they are not very sensitive to small changes in hyperparameters or minor design details. However, modifying high-level choices such as noise or scale schedule would most likely have a large effect on result quality.\n\nRegarding the data-dependence of design choices, it appears that only the stochasticity parameters are heavily dependent on the dataset (Table 5). The higher-level choices seem to hold across datasets and score network architectures -- with the caveat that our tests have been limited to small resolutions and mostly small datasets.\n",
" This paper studies the algorithmic design space of diffusion models. In doing so, the authors make the following contributions:\n(a) identify and characterize the degrees of freedom w.r.t. sampling, network parameterization, training\n(b) propose to disentangle sampling from training and consequently, design schemes for both deterministic and stochastic sampling based on higher-order Runge Kutta solvers that significantly reduce the number of function evaluations\n(c) improve training by better preconditioning of inputs, outputs, training losses and augmentation schemes\nEmpirically, these improvements translate to sota sample quality on standard image generation benchmarks Strengths:\n\n- This paper presents a timely and important contribution to an area of growing significance: deep generative modeling using diffusion models. Arguably, these models, while clearly having a lot of potential given their recent successes, are notorious to train requiring several training tricks. \n- Related to the above, the paper also stands out in contrasting existing models and algorithms --- in a way, the community can look at this paper both as an excellent survey of some major works in diffusion models while providing a generalized characterization that enables the development of newer models.\n- Finally, the resulting improvements on CIFAR-10 and Imagenet are impressive -- the paper truly achieves the best of both worlds as the number of function evaluations reduce very significantly while also improving the sample quality metrics in both scenarios of deterministic and stochastic sampling. I also liked learning about some negative results and intuitions on their failures --- such as the use of stochastic samplers leading to oversaturation -- I have observed this to be a common problem in the results of some past works in diffusion models, but couldn't come across a satisfying explanation.\n\nWeaknesses:\n\n- While such a paper is not easy to write as it covers a bunch of distinct contributions, I found it a little hard to review constantly shuttling between the main text and the appendix. Some of the very important and interesting discussions such as those concerning the step sizes in Section 3 were delegated to the appendix, making readability slightly hard.\n- At certain places, it is hard to distinguish what is prior work vs what is new --- this isn't a question re: novelty of this paper, but more on the exposition lacking discussion and details. For example, Jolicoeur-Martineau et al. (duly cited) also explore the use of higher order solvers for sampling-- what exactly is different between the current work and the one before? Is it the scheme for setting timesteps to minimize truncation error in Eq. 5 or something else?\n- I think some of the claims can be placed better in context or need more empirical justification. For example, in L103-104, the authors highlight there are no dependencies between the components but does this hold broadly for all parameters in Table 1? One extreme interpretation of this could be eg, we can mix and match preconditioning schemes, i.e., skip scaling from model A (say VP), output scaling from model B (say DDIM), etc. and still train a diffusion model reasonably well. 
Given the brittleness of these models, I am not sure if such arbitrary mix-and-match schemes will translate to a functioning model with reasonable performance.\n - See last 2 points in the weaknesses para above and please let me know your thoughts\n- I wasn't quite clear on why the authors choose to discuss the generalized ODE in Eq. 4 when eventually they propose to set s(t)=1 which reduces to Eq. (1)\n- Is there any specific reason to exclude Imagenet-64 for evaluating the effect of preconditioning in Table 2? I am particularly curious because I was hoping to contrast it with the numbers in Fig. 4c for stochastic sampling. Since we might intuitively expect that stochastic sampling could be relatively more beneficial for diverse datasets, I wonder if that intuition carries over empirically.\n- Finally, while the paper does characterize the design space reasonably well, I am curious if the authors found that their final diffusion model is robust to different choices (not just their prescribed choice) of some of the design parameters in Table 1. The authors discuss negative results in the context of stochastic sampling. Perhaps a broader discussion of negative results and context around their empirical proposals could help --- concretely, one example of such a discussion point is: which design choices are data-dependent and on what properties (e.g., size, dimensionality, modality)? Such information could vastly aid practitioners looking to train and deploy these models on their custom data and modalities.",
" This work proposes a reformulation of continuous-time diffusion/score-based models that is more modular and easier to analyze. With this new formulation, authors carefully analyze different design choices in those models, including discretization for sampling, parameterization for the score model, stochasticity in the sampling process, noise distributions in training, and scaling schedules of inputs. Experiments demonstrate significant improvements in various settings, creating new/near SOTA results on CIFAR-10 and ImageNet-64. This is an excellent work with many strengths:\n\n1. Unlike previous works, authors formulate diffusion/score-based models around the probability flow ODE. This novel perspective leads to a new analysis on the role of stochasticity in sampling, showing that previous SDE-based formulations are only a special case of combining the probability flow ODE and the Langevin diffusion SDE with different relative weights. Stochasticity only exists in the Langevin component, which effectively corrects for potential errors in solving the probability flow ODE. This interpretation motivates authors' new stochastic samplers that outperform existing ones on image datasets. I'd like to point it out that a similar reasoning was also used to form the predictor-corrector samplers in ref. [41], where the Langevin component is the \"corrector\", and the \"predictor\" can be either a probability flow ODE or a reverse-time SDE. \n\n2. The analysis on deterministic sampling is very convincing, backed by both insightful illustrations (Figure 3) and strong empirical improvements. I especially like the analysis on how to choose the noise schedule $\\sigma(t)$ to make the probability flow ODE easy to solve. It is also a valuable contribution to show that 2nd order ODE solver (Heun's method) significantly accelerates sampling.\n\n3. The parameterization of $D_\\theta$ in equation (7) is a nice contribution, capturing the intuition that we should predict the denoised image for bigger noise levels, and predict the noise itself for smaller noise levels. The analysis on choosing $\\lambda(\\sigma)$, $p_{\\text{train}}(\\sigma)$ and others provides an effective set of hyperparameters for improved empirical performance.\n\nThere are no major weaknesses. Below are some thoughts if I have to nitpick:\n\n1. I don't fully agree that the proposed reformulation is more modular than previous ones. It seems to me that all design choices in this work can be translated to the variational formulation of diffusion models, or the continuous-time formulation of score-based generative models. The authors' new formulation seems to facilitate the analysis of stochastic samplers the most. Other contributions can be made with existing formulations with appropriate modifications.\n\n2. Derivations of hyperparameters are not fully principled. They are more or less based on intuitions (such as fixing the magnitudes of input and output signals), or discovered from experiments. This raises a question on whether the same design choice can perform as well for other data domains. The stochastic sampler in Algorithm 2 is different from solving equation (6) with numerical SDE solvers. Why not perform stochastic sampling with existing numerical SDE solvers? How do they fare against the sampler in Algorithm 2? Authors discussed limitations implicitly as avenues for future work, such as the precise interaction between stochasticity and the training objective. 
A comprehensive discussion on negative societal impacts was also included.",
" In this paper, the authors explore a wide variety of design choices made when designing diffusion models, both those which are typically given attention explicitly in prior work (e.g. the noise schedule) and ones which have been more implicitly agreed upon (e.g. the choice of of ODE solver). They highlight a set of potential sources of generalization to improve sampling - e.g., the lack of need for the sampling process to correspond to model or training details. They generalize a set of the stochastic diffusion model SDE with a single equation (Eq. 6) corresponding to moving forward and backward in time, and propose an algorithm to reduce stochastic sampling errors. In essence, prior works have explored relatively constrained subspaces of the decisions which define a particular diffusion model - this paper attempts to unify these decisions through a shared framework, allowing for new non-obvious generalizations, and then to quantitatively understand their optima. This paper is remarkably thorough, considering an impressively broad range of parameters and demonstrating repeatedly why such a generalized framework is useful by applying it and making nontrivial algorithmic/formal contributions. They also serve to provide an important baseline, reducing the gap between these models and those leveraging recent advances such as cascaded diffusion models, without sacrificing (as much) theoretical interpretability. The practical consequences of the suggestions made in this paper are significant, allowing for substantially more efficient sampling, and addressing what has historically been a significant limitation of diffusion models. The paper itself is excellent and beyond its practical and theoretical contributions, represents a powerful and remarkably accessible survey of diffusion model literature (excluding a few nuances like approaches consisting of multiple diffusion models).\n\nOne noteworthy detail of this paper is the repeated highlighting on decisions which, while they could be and often are made with some theoretical motivation, are in fact best left to be empirically determined (such as the relative rates of noise decay and injection over time). While it is useful to have these empirical results, and empirical results often yield useful theoretical generalizations, it is less practical to need to perform a “case-by-case … grid search” for each new design decision. In addition, FID and other Inception-based metrics have well-known limitations e.g. [1, 2, 3]. While FID is widely used to evaluate generative models, when analyzing many hyperparameters and exploring some relatively small differences in image quality, it is difficult to know whether some empirical decisions are actually improving the generated images or overfitting to FID. It would be helpful to understand/interpret the tradeoffs made by these decisions in terms of other metrics as well, such as precision and recall [4].\n\n[1] An Improved Evaluation Framework for Generative Adversarial Networks, Liu et al 2018\n[2] Pros and cons of GAN evaluation measures, Borji 2019\n[3] A Note on the Inception Score, Barratt and Sharma 2018\n[4] Improved Precision and Recall Metric for Assessing Generative Models, Kynkäänniemi et al 2019. What kinds of diversity/fidelity tradeoffs are implied by the various suggestions of this paper? The limitations are reasonably well addressed.",
" The paper proposes to consider 1) network parameterization (preconditioning) 2) training and 3) sampling of diffusion models in separation. This allows for putting many existing works in a common framework and for a simplified investigation of the three aspects in isolation. Considering, for example, the sampling process in isolation, the authors show how to improve several pre-trained networks from the literature. Furthermore, making \"optimal\" choices for each of the three aspects leads to new state-of-the-art models on several image benchmark datasets. Additionally, the paper proposes to train diffusion models on augmented data, importantly conditioning the network on the augmentation process. Lastly, the paper shows that stochastic sampling outperforms deterministic sampling for \"suboptimal models\", whereas deterministic sampling is preferred for \"well-trained models\". Strengths:\n* The paper is extremely well-written and all claims are well-supported by experiments.\n* The paper has a great impact on the current field of diffusion models: the independently proposed training recipes, network parameterizations (preconditioning), and sampling schemes can potentially be used in many future works.\n\nWeaknesses:\n* The paper lacks related work in certain places; to list only a few:\n * Model parameterization (preconditioning):\n * [PD] introduces v-diffusion parameterization\n * [LSGM, CLD] introduce mixed score parameterizations\n * Sampling:\n * [PNDM, DEIS] apply linear multistep methods to the Diffusion Model ODE (DEIS was just recently proposed, so missing this citation is not a big deal)\n * [LFS] learns efficient samplers\n * [PD, KD] accelerating sampling via distillation\n* Otherwise, the paper has very little weaknesses; see Questions below. \n\n[PD] - Progressive Distillation for Fast Sampling of Diffusion Models\n\n[LSGM] - Score-based Generative Modeling in Latent Space\n\n[CLD] - Score-Based Generative Modeling with Critically-Damped Langevin Diffusion\n\n[PNDM] - Pseudo Numerical Methods for Diffusion Models on Manifolds\n\n[DEIS] - Fast Sampling of Diffusion Models with Exponential Integrator\n\n[LFS] - Learning Fast Samplers for Diffusion Models by Differentiating Through Sample Quality\n\n[KD] - Knowledge Distillation in Iterative Generative Models for Improved Sampling Speed\n Main questions: \n* Figure 5(b): The figure is of very high value as it shows that stochasticity benefits/hurts suboptimal/well-trained models. I am, however, curious if the figure would look similar for more challenging datasets such as ImageNet. My suspicion is that for ImageNet a well-trained model may still be considerably different than the true data model and therefore I suspect that stochasticity would even improve a well-trained model. Could the authors comment on this?\n\n* Network parameterization (preconditioning): The proposed network parameterization seems very similar to the v-diffusion parameterization introduced in PD. The denoiser in v-diffusion is $\\alpha_t x_t - \\sigma_t F_\\theta(x_t, t)$, where $F_\\theta$ is the neural network. Could the authors comment on connections/advantages of v-diffusion in the proposed network parameterization as well as loss function?\n\nExtra (research) questions (for which I don't necessarily expect an answer):\n* Do the authors believe that variance-learning diffusion models (as, for example, introduced in iDDPM) could also be unified with their approach?\n N/A"
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
9,
8,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
5
] | [
"AZvQgzXI_SN",
"AqtWGbdOLDr",
"VsMxG6fNCi1",
"XyqsKy2paNd",
"TwVg5ExbRhs",
"EHQP3cnRbLU",
"kmfBuNSIYOm",
"nips_2022_k7FuTOWMOc7",
"nips_2022_k7FuTOWMOc7",
"nips_2022_k7FuTOWMOc7",
"nips_2022_k7FuTOWMOc7"
] |
nips_2022_HEcYYV5MPxa | Dict-TTS: Learning to Pronounce with Prior Dictionary Knowledge for Text-to-Speech | Polyphone disambiguation aims to capture accurate pronunciation knowledge from natural text sequences for reliable Text-to-speech (TTS) systems. However, previous approaches require substantial annotated training data and additional efforts from language experts, making it difficult to extend high-quality neural TTS systems to out-of-domain daily conversations and countless languages worldwide. This paper tackles the polyphone disambiguation problem from a concise and novel perspective: we propose Dict-TTS, a semantic-aware generative text-to-speech model with an online website dictionary (the existing prior information in the natural language). Specifically, we design a semantics-to-pronunciation attention (S2PA) module to match the semantic patterns between the input text sequence and the prior semantics in the dictionary and obtain the corresponding pronunciations; the S2PA module can be easily trained with the end-to-end TTS model without any annotated phoneme labels. Experimental results in three languages show that our model outperforms several strong baseline models in terms of pronunciation accuracy and improves the prosody modeling of TTS systems. Further extensive analyses demonstrate that each design in Dict-TTS is effective. The code is available at https://github.com/Zain-Jiang/Dict-TTS. | Accept | The reviewers generally liked the proposed approach in this paper, agreed that it is novel, and that the experiments showed good improvements over reasonable baselines. There was broad concern about the ablation study in the original paper (one shared by the AC), but the authors revised that section during the discussion period to the satisfaction of three of the reviewers. While three reviewers recommend that the paper be accepted, one reviewer recommends a borderline reject. The reviewer stuck to this recommendation after the discussion period, primarily citing concerns about whether or not the method is broadly applicable versus being limited primarily to being useful for logographic languages. While I am recommending that this paper be accepted, I urge the authors to expand their discussion of the limitations of the method in Appendix G. I think the discussion with reviewer ksPw of the JSUT results and the fact that Japanese writing comprises both more-alphabetic and more-logographic elements would be a valuable addition to that appendix and would help to clarify the contributions and limitations of the proposed method.
| test | [
"uIyZ_t27FL9",
"UvZqzAiuG0T",
"MzLeN_NcKRI",
"2iJFV05vZU5",
"eHP8JmamRHT",
"iYXCU_MwylK",
"lZ-4oGxISlb",
"vBQbgBgoA1",
"V0Kmi153VJo",
"kR70CKfVFkH",
"GI1tqKa4zy_",
"QmMjs-LLz8k",
"OlLILDjgYuw"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks to the authors for revising the paper. The response answers my questions. I am updating the score accordingly.",
" Thanks again for your great efforts and valuable comments. \n\nWe have carefully addressed the main concerns and provided detailed responses to each reviewer. We hope you might find the responses satisfactory. As the end of the rebuttal phase is approaching, we would be grateful if we could hear your feedback regarding our answers to the reviews. We will be very happy to clarify any remaining points (if any).\n\nThanks in advance,\nPaper 2575 authors",
" ## Summary of the rebuttal revision\n\nWe would like to thank the reviewers for their constructive reviews! Here we summarize the revision of the manuscript according to the comments and suggestions of reviewers:\n\n- In section 3.3, we clarified some terms and modified the description for our S2PA module.\n- In section 4.4, we conducted new experiments for Dict-TTS to demonstrate the effectiveness of auxiliary semantic information and the Gumbel-Softmax sample strategy.\n- In Appendix E in the supplementary material, we further analyzed the naturalness of prosody and rhythm for different TTS systems. \n- In Appendix F in the supplementary material, we introduced how to add Rules to the pronunciation weights. \n- In Appendix G in the supplementary material, we describe the importance of polyphone disambiguation for various languages.",
" We are grateful for your positive review and valuable feedback, and we hope our response fully resolves your concern.\n\n\n\n**[About the ablation study in Section 4.4] (Question 1)**\n\nWe apologize for the confusing ablation studies in Section 4.4. We have conducted new experiments to demonstrate the effectiveness of designs in Dict-TTS, including the auxiliary semantic information and the Gumbel-Softmax sample strategy. More details can be found in Section 4.4 in the revised version of the paper. Thanks for the reviewer’s kind and helpful comments!\n\n\n\n**[About the confusing 2-dimensional terms $a_{i,1}$ in Equation 1] (Question2)**\n\nThe attention vector $a$ is a 3-dimensional vector. We are sorry for our confusing terms in Section 3.3. We have clarified these terms and marked them blue in the revised version of the paper.\n\n\n\n**[About the experiment of a weighted sum of the pronunciations embeddings/semantics vs using the most likely pronunciation] (Question 3)**\n\nYes, we have conducted this experiment in the Biaobei dataset. Since measuring PER-O requires one-hot vectors, we do not calculate the PER-O score for the weight-sum version of Dict-TTS. The results are shown in the following table:\n\n| Methods | PER-O | PER-S | SER-S |\n| ------------------------- | ----- | ----- | ----- |\n| Dict-TTS (Weighted Sum) | - | 1.19% | 7.75% |\n| Dict-TTS (Gumble Softmax) | 2.12% | 1.08% | 6.50% |\n\nIt can be seen that PER-S and SER-S increase when we use the weighted sum method. In the experiments, the weights of different pronunciations for some characters might be close to each other, which results in relatively worse performance in the subjective results. For example, the two pronunciations \"ZH ANG3\" and \"CH ANG2\" of the character ``长'' might be ambiguous when their weights are close to each other (e.g., $0.6$ and $0.4$). Therefore, to accurately model the subjective pronunciations, we utilize the Gumbel-Softmax function to sample the most likely pronunciation in both training and inference stages. \n\n\n\n**[About the set of characters in the dictionary and computational shortcuts for the S2PA module] (Question 4)**\n\nNo, the dictionary does not need to be limited to a smaller set of characters for a given input text. For example, we use the full set of characters in the Chinese dictionary obtained from https://github.com/yihui/zdict, which contains 7030 characters. \n\nYes, computing an attention weight over the entire dictionary for each input text is quite expensive. As shown in Section 4.1, Line 247-249, for computational efficiency, we firstly use the pre-trained XLM-R model to extract the semantic representations from the raw text of the whole dictionary and record them on the disk. We then load the mini-batch along with the pre-constructed dictionary representation during training and testing. Besides, we also restrict the length of the dictionary entry $e_{i,j}$ to be less than 50 characters for computational efficiency.\n\nAs shown in Section 4.1, Line 257, we use a batch size of 40 sentences in Dict-TTS training. And we a use batch size of 64 sentences in all baseline systems (following PortaSpeech [1]). 
We are sure that the memory usage and total training time are consistent among all experiments.\n\n| Dictionary entry length | Entry number |\n| ----------------------- | ------------ |\n| 0<=x<50 | 5193 |\n| 50<=x<100 | 2042 |\n| x>=100 | 956 |\n\n\n\n**[About whether the noise in the dictionary affects the performance] (Question 5)**\n\nWe are sorry that we may not understand the meaning of \"noise in the dictionary\". Could you please clarify it so that we can make a better response?\n\nAccording to our comprehensions, \"noise in the dictionary\" means the wrong or inappropriate definitions or usages in the dictionary. However, the dictionaries used in our experiments have been adequately revised in history, which rarely have wrong definitions or usage.\n\n\n\n**[About the typo: L21/92 'there' -> 'they'] (Question 6)**\n\nWe think that \"There\" in Line 21 and Line 92 are not typos.\n\n\n\nAgain, we appreciate your positive review and hope our response can fully resolve your concerns.\n\n\n\n**[References]**\n\n[1] Yi Ren, Jinglin Liu, and Zhou Zhao. Portaspeech: Portable and high-quality generative text-to-speech. Advances in Neural Information Processing Systems, 34, 2021.",
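To illustrate the contrast between the weighted-sum and Gumbel-Softmax variants compared in the table above, the selection step could look like the following minimal sketch (PyTorch; the function and argument names are our own assumptions, not the released Dict-TTS code):

```python
import torch
import torch.nn.functional as F

def select_pronunciation(logits, pron_embs, hard=True, tau=1.0):
    # logits: (num_prons,) unnormalized scores for one character's candidate
    # pronunciations; pron_embs: (num_prons, dim) pronunciation embeddings.
    if hard:
        # Straight-through Gumbel-Softmax: a one-hot sample in the forward
        # pass, differentiable soft weights in the backward pass.
        y = F.gumbel_softmax(logits, tau=tau, hard=True, dim=-1)
    else:
        y = logits.softmax(dim=-1)  # weighted sum over all candidates
    return y @ pron_embs
```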
" \n\n**[About question 6 and 7]**\n\n**Our work aims at the polyphone disambiguation problem in G2P conversion.**\n\nYes, \"JSUT\" is a mixture of phonograms and logograms, which is different from \"Biaobei\" and \"Common Voice (HK)\". Japanese writing system consists of two types of characters: the syllabic kana – hiragana (平仮名) and katakana (片仮名) – and kanji (漢字). In our analysis, 32.42% of the characters in JSUT dataset are kanji. The pronunciations of a part of the kanji can not only be specified by the semantic information and should be specified by empirical pronunciation rules. For example, most kanji (漢字) can be pronounced multiple ways: **on-yomi (音読み)** and **kun-yomi (訓読み)**. Although the compound kanji usually uses on-yomi and one kanji probably uses kunyomi, the different readings are largely just chosen empirically in practice. Our Dict-TTS has the potential to work only for the kanji whose pronunciation should be specified based on the semantic meaning. Due to the characteristics of Japanese writing systems, in Table 1, although Dict-TTS surpasses the character-based system, it does not show comparable performance with the open source G2P module in Japanese. But as shown in Section 3.4 Line 226, our method is compatible with the predefined rules from language experts by directly adding specific rules to pronunciation weight. We are sure that the performance of our method can be further improved by introducing the pronunciation rules in Japanese (e.g., the rules in the rule-based G2P baseline \"pyopenjtalk\").\n\nAll in all, our work aims at the polyphone disambiguation problem in G2P conversion and can be generalized to various languages. But the polyphone disambiguation problem may be less problematic in some languages. Thanks for your suggestions. We have clarified these limitations in Appendix G in the revised version of the paper.\n\n\n\nFinally, we appreciate the reviewer’s valuable reviews and believe some misunderstandings are due to our clarity. Hope our response can address your concerns.\n\n\n**[References]**\n\n[1] Jia, Ye, et al. \"PnG BERT: Augmented BERT on phonemes and graphemes for neural TTS.\" arXiv preprint arXiv:2103.15060 (2021).\n\n[2] Kastner, Kyle, et al. \"Representation mixing for TTS synthesis.\" ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019.\n\n[3] Tan, Xu, et al. \"A survey on neural speech synthesis.\" *arXiv preprint arXiv:2106.15561* (2021).",
" We thank the reviewer for the constructive feedback and for considering our work as \"High intelligibility can be obtained without phoneme label in Text-to-Speech task using logogram\". We understand that your concerns are mainly related to the paper’s generalization limitations and claims. We hope our response resolves your concerns fully.\n\n\n**[About question 1]**\n\nYes, we agree that the polyphone disambiguation problem is an important problem in logograms such as Chinese, but is less problematic in phonograms such as English. Although there are fewer polyphones and heteronyms in phonograms, the pronunciations of the polyphones in phonograms should also be deduced based on the semantic contexts. For example, \"resume\" in English can be pronounced as \"[ri′zju:m]\" (means to go on or continue after interruption) or \"[rezjumei]\" (means curriculum vitae). Therefore, Dict-TTS can also disambiguate the polyphones in phonograms by replacing \"character\" with \"word\". Our methods can also be used as the modules to retrieve the correct pronunciation for polyphones and heteronyms in English G2P process (e.g., the Algorithm step 2 in https://github.com/Kyubyong/g2p).\n\n\n\n**[About question 2]**\n\nThe main problem our Dict-TTS tries to solve is polyphone disambiguation in TTS systems and the loss of information caused by G2P pre-processing in phoneme-based TTS systems is only a part of our preliminary analyses. Section 3.2 and Section 3.3 can be summarized as follows:\n\n1. We make preliminary analyses about the challenges and information loss faced in character-based and phoneme-based TTS systems. \n2. Based on these analyses, we propose our Dict-TTS. In Dict-TTS, we firstly capture the semantic information of the input character sequence. Then the S2PA module deduce the pronunciations based on the extracted semantic information and the dictionary knowledge. Finally we obtain the expressive pronunciation hidden states from the deduced pronunciations and semantic information.\n\nMoreover, although the problem of information loss in the conversion process to phoneme has already been explored in previous work such as [1] and [2], their methods require (phoneme + character) or (phoneme + grapheme + word-level alignment) as input features. Our work aims at modeling natural pronunciations based on the input character sequence and dictionary knowledge, which is a different setting.\n\n\n\n**[About question 3]**\n\nThanks for your advice! We agree that the space token is information that greatly influences prosody prediction in English and our example is inappropriate. We have changed the example of \"I scream | Icecream\" to \"whether | weather\" and marked them blue in the revised version of the paper.\n\n\n\n**[About question 4]**\n\nYes, we agree that one grapheme can be mapped to multiple phonemes in many cases in phonograms (e.g., grapheme ‘e’ in English). **However, our work aims at polyphone disambiguation problem in G2P conversion.** \n\nFor alphabetic languages like English, lexicon cannot cover the pronunciations of all the words. Thus, the G2P conversion for English is mainly responsible for generating the pronunciations of out-of-vocabulary words [3]. Although the polyphone disambiguation is less problematic in these languages, our methods can still be used as the modules to retrieve the correct pronunciation for polyphones and heteronyms in their G2P process (e.g., the Algorithm step 2 in https://github.com/Kyubyong/g2p). 
By replacing \"character\" with \"word\" in Section 3.3, Dict-TTS is effective in the polyphone or heteronym disambiguation problem in alphabetic languages.\n\nFor logograms like Chinese, although the lexicon can cover nearly all the characters, there are a lot of polyphones that can only be decided according to the context of a character. Thus, G2P conversion in this kind of languages is mainly responsible for polyphone disambiguation, which decides the appropriate pronunciation based on the current word context. Therefore, polyphone disambiguation is crucial in these languages and our method is an effective solution for polyphone disambiguation problem.\n\nThanks for your suggestions. We have explained how substantial a problem this is for a variety of languages in Appendix G and marked them blue in the revised version of the paper.\n\n\n\n**[About question 5]**\n\nAt first, we are sure that the dictionary embeddings extracted by the pre-trained language model are well distributed in the semantic space. High pronunciation accuracy in the experiments means the pronunciation weights (the semantic similarities) in our S2PA module are accurate enough. It also demonstrates the high similarities between the dictionary embeddings extracted by the pre-trained language model and the character-level representations extracted by our semantics encoder. Therefore, we conclude that the character representations are well distributed in the semantic space. \n\n\n",
" \n**[About how much should PER-O be trusted as a gold standard] (Question 5)**\n\nYes, in some languages, there are multiple valid pronunciations of worse. However, for example, the gold pronunciation of the word \"the\" depends on the first sound of the word that comes after it, which can be called **pronunciation rules**. In our experiments, the Mandarin character \"一\" also has the sandhi pronunciation rule, e.g. \"一\" before tone4 should be \"yi2\" (一段) and when \"一\" is an ordinal word, it should be \"yi1\".\n\nWe are sure that the ground truth labels used in the PER-O experiments conform to the pronunciation rules in those languages and can be trusted as a gold standard.\n\n\n\n**[About the comparison to PNG-BERT] (Question 6)**\n\n1. PNG-BERT [3] has not released its code officially. We find an unofficial implementation (https://github.com/ishine/PnG-BERT), but we do not obtain satisfying results.\n2. The basic architecture of PNG-BERT [3] is Non-attentive Tacotron [4], which is quite different from Portaspeech [5] (the baseline system used in our experiments).\n3. PNG-BERT [3] requires phoneme, character, and word-level alignment as input features. However, our work aims at modeling natural pronunciations based on the input character sequence and dictionary knowledge, which is a different setting.\n\nTherefore, for fair comparisons, we do not use PNG-BERT [3] as one of the baseline systems in our experiments.\n\n\n\n**[About the question \"Table 3: is the Dict-TTS entry pretrained or not?\" ] (Question 7)**\n\nFor fair comparisons, the Dict-TTS entry in Table 3 is not pretrained.\n\n\n\n**[About the impacted quality score and the interaction between pronunciation, lexical tone, and subjective prosody responses in Table 3] (Question 8)**\n\n**Question:** Is there an explanation for why the quality score is impacted by the pronunciation representation?\n\n**Answer:** For audio quality evaluation, we tell listeners to \"focus on examining the naturalness of audio quality (e.g., noise, timbre, sound clarity, and high-frequency details)\". The pronunciation accuracy will influence the sound clarity in audio quality evaluation.\n\n**Question:** How much interaction if any is there between pronunciation, lexical tone, and subjective prosody responses here?\n\n**Answer:** For subjective prosody evaluations, we focus on examining pitch, energy, and duration. The pronunciation and lexical tone may affect the local pitch trajectory and energy distribution in subjective prosody evaluation. But as shown in the ablation studies in Section 4.4, the improvement in pronunciation modeling of Dict-TTS is mainly due to the semantic information extracted from the dictionary.\n\n\n**[About disentangling the lexical tone and prosodic realization in evaluations] (Question 9)**\n\nThe lexical tone can be evaluated by the PER-O, PER-S, and SER-S metrics. The prosodic realization can be evaluated by the duration errors and character-level average pitch errors. For duration errors, we calculate the character-level duration MSE. For character-level average pitch errors, we firstly calculate the mean pitch for each character's region in the mel spectrogram according to the Montreal Forced Aligner (MFA) to remove the influence of lexical tone, and then we calculate the MSE of the mean pitch sequences. 
We present the results on the Biaobei dataset in the following table:\n\n| Method | Duration Error (ms) | Pitch Error |\n| ------------------ | ------------------- | ----------- |\n| Character | 36.2 | 1424.6 |\n| BERT Embedding | 35.7 | 1312.1 |\n| NLR | 36.4 | 1414.3 |\n| Phoneme (G2PM) | 35.8 | 1341.7 |\n| Phoneme (pypinyin) | 35.3 | 1308.8 |\n| Dict-TTS | 34.4 | 1232.3 |\n\nWe have attached these results to Appendix E and marked them blue in the new version of the paper.\n\n\n**[About the ablation studies in Section 4.4] (Question 10)**\n\nWe apologize for the confusing ablation studies in Section 4.4. We have conducted new experiments to demonstrate the effectiveness of designs in Dict-TTS, including the auxiliary semantic information and the Gumbel-Softmax sampling strategy. More details can be found in Section 4.4 in the revised version of the paper. Thanks for the reviewer’s kind and helpful suggestions!\n\n\nAgain, we thank the reviewer for the insightful reviews and “Accept” recommendation for our paper.\n\n**[References]**\n\n[1] Tan, Xu, et al. \"A survey on neural speech synthesis.\" 2021.\n\n[2] Binbin Zhang, et al. Wenetspeech: A 10000+ hours multi-domain mandarin corpus for speech recognition. 2022.\n\n[3] Jia, Ye, et al. \"PnG BERT: Augmented BERT on phonemes and graphemes for neural TTS.\" 2021.\n\n[4] J. Shen, et al. “Non-Attentive Tacotron: Robust and controllable neural TTS synthesis including unsupervised duration modeling,” 2020.\n\n[5] Yi Ren, et al. Portaspeech: Portable and high-quality generative text-to-speech. 2021.",
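The character-level average-pitch error described above could be computed roughly as follows; the boundary format and names are assumptions, with per-character frame boundaries coming from an aligner such as MFA:

```python
import torch

def char_pitch_mse(pitch_ref, pitch_syn, bounds_ref, bounds_syn):
    # pitch_*: (num_frames,) frame-level pitch tracks;
    # bounds_*: list of (start, end) frame indices per character (e.g., from MFA).
    def char_means(pitch, bounds):
        return torch.stack([pitch[s:e].mean() for s, e in bounds])
    mu_ref = char_means(pitch_ref, bounds_ref)
    mu_syn = char_means(pitch_syn, bounds_syn)
    return torch.mean((mu_ref - mu_syn) ** 2)  # MSE of per-character mean pitch
```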
" Thanks for your positive review and valuable comments, and we hope our response fully resolves your concerns.\n\n**[About how substantial a problem this is for a variety of languages] (Question 1)**\n\nThe polyphone disambiguation problem is critical in logographic languages such as Chinese, but is less problematic in phonograms like English.\n\nFor logographic languages like Chinese, although the lexicon can cover nearly all the characters, there are a lot of polyphones that can only be decided according to the context of a character. Thus, G2P conversion in this kind of languages is mainly responsible for polyphone disambiguation, which decides the appropriate pronunciation based on the current word context. Therefore, polyphone disambiguation is crucial in these languages and our method is an effective solution for the polyphone disambiguation problem.\n\nFor alphabetic languages like English, lexicon cannot cover the pronunciations of all the words. Thus, the G2P conversion for English is mainly responsible for generating the pronunciations of out-of-vocabulary words [1]. Although the polyphone disambiguation is less problematic in these languages, our methods can still be used as the modules to retrieve the correct pronunciation for polyphones and heteronyms in their G2P process (e.g., the Algorithm step 2 in https://github.com/Kyubyong/g2p).\n\nThanks for your helpful suggestions. We have explained how substantial a problem this is for a variety of languages in Appendix G and marked them blue in the revised version of the paper.\n\n\n\n**[About the pre-processing the dictionary text] (Question 2)**\n\n**Question:** how much pre-processing of the dictionary text is important here? e.g., is the decomposition into \"Character\" \"Pronunciation\" \"Definitions\" \"Usages\" always used?\n\n**Answer:** In our experiments, we just crawl the dictionaries and do not need too much pre-processing. Besides, the decomposition into \"Character\" \"pronunciation\" \"Definitions\" \"Usages\" is not necessary, but the decomposition into \"Character\" \"pronunciation\" \"Definitions or Usages\" is necessary.\n\n**Question:** Are there ever errors to this decomposition?\n\n**Answer:** There are rarely errors to this decomposition since the online dictionaries used in people's daily life are already well-organized and decomposed (e.g., the online dictionaries listed in Appendix A.1).\n\n\n\n**[About the way like the human brain's processing] (Question 3)**\n\nIn Section 1, Line 37-39, \"When one is confused about the acoustic pronunciation of a specific polyphone, he or she will resort to the dictionary website to infer its exact reading based on the semantic context\". In our S2PA module, the semantic encoder aims at comprehending the semantic contexts in the input character sequence. The semantic similarity between the input character representations and the dictionary entries is measured for deducing the correct pronunciations. Therefore, we claim that \"... the model can easily deduce the correct pronunciation and semantics based on the lexicon knowledge like human brain\".\n\n\n\n**[About the insight into the quality of the pronunciations obtained from the \"low-quality text-speech pairs] (Question 4)**\n\nIn our experiments, we use the Wenetspeech dataset [2] for Dict-TTS (pre-trained). The Wenetspeech dataset contains 10005 hours of text-speech pairs with 0.95 ~ 1.0 confidence. We use the 1000 hours \"M training subset\" with 1.0 confidence for our Dict-TTS pre-training. 
Among the M subset, there are approximately 400 hours of audio from podcasts which can be seen as the \"clean\" partition and 600 hours of audio from Youtube which can be seen as the \"other\" partition. Most of the audio samples in the \"Youtube\" set come from online dramas, which contain various background music or loud noise. Besides, the pronunciations in these audio samples may not be accurate enough. We have pre-trained our Dict-TTS with different subsets in the WenetSpeech dataset and the results are shown in the following table. It can be seen that although the audio quantities in the \"Youtube\" and \"Podcast + Youtube\" sets are larger, the pronunciation accuracy of Dict-TTS (pre-trained) is negatively impacted by the poor audio quality.\n\n| Methods | Set for pre-training | PER-O | PER-S | SER-S |\n| ---------------------- | --------------------- | --------- | --------- | --------- |\n| Dict-TTS (pre-trained) | Podcast Set | **1.54%** | **0.79%** | **4.25%** |\n| Dict-TTS (pre-trained) | Youtube Set | 1.97% | 1.02% | 6.25% |\n| Dict-TTS (pre-trained) | Podcast + Youtube Set | 1.63% | 0.87% | 4.75% |\n| Dict-TTS | None | 2.12% | 1.08% | 6.50% |\n\n",
" We are grateful for your positive review and valuable comments, and we hope our response fully resolves your concerns.\n\n\n\n**[About the ablation studies in Section 4.4] (Question 1 and Question 2)**\n\nWe apologize for the confusing ablation studies in Section 4.4. We have conducted new experiments to demonstrate the effectiveness of designs in Dict-TTS, including the auxiliary semantic information and the Gumbel-Softmax sample strategy. More details can be found in Section 4.4 in the revised version of the paper. Thanks for the reviewer’s kind and helpful suggestions!\n\n\n\n**[About how to add the rules to the pronunciation weight] (Question 3)**\n\nIn Mandarin, there are some pronunciation rules (like \"sandhi rules\") that can not be learned from the dictionary. For example, \"一\" before tone4 should be \"yi2\" (e.g., \"一段\") and when \"一\" is an ordinal word, it should be \"yi1\" (e.g., \"一四九五年\"). According to these pronunciation rules, we can obtain the correct pronunciation labels for some specific characters based on the input character sequence's part-of-speech (POS) tags. After we obtain the correct pronunciation labels for these specific characters, we can directly force the pronunciation weights of these characters to be the ground truth values.\n\nAnd in our experiments for Mandarin, we only use the sandhi rules from the PaddleSpeech frontend (https://github.com/PaddlePaddle/PaddleSpeech/blob/develop/paddlespeech/t2s/frontend/tone_sandhi.py) for Dict-TTS and phoneme-based baseline systems (pypinyin and G2PM). Although we can use a portion of the rules in the rule-based baselines (e.g., pypinyin) to further improve Dict-TTS's PER and SER, for a fair comparison, we only add the sandhi rules from PaddleSpeech to the pronunciation weights of our Dict-TTS. We have attached these explanations to Appendix F in the new version of the paper.\n\n\n\n**[About more details about the dictionary design] (Question 4)**\n\nThanks for the reviewer’s feedback that requests more details about the dictionary design. We merge the definition examples and usage examples as a single character sequence. We have added more details about this design in Section 3.3 and marked them blue in the revised version of the paper.\n\nOur method does not require them to be carefully structured like the dictionary shown in Figure 1. For example, as shown in Figure 2, some characters' pronunciations in the Chinese dictionary used in the experiments (https://github.com/yihui/zdict) may only have several usage examples. And usage examples of the pronunciation \"L E4\" are similar in terms of semantics. Therefore, the operation of merging different examples into a single character sequence will not affect the performance of semantics matching.\n\n\n\n**[About some confusing terms] (Question 5)**\n\nWe are sorry for the confusing terms in Section 3.3. We have clarified these terms and marked them blue in the revised version of the paper.\n\n\n\nAgain, we thank the reviewer for the insightful review and “Accept” recommendation for our paper.",
" The paper proposes Dict-TTS that can infer the corresponding pronunciations for the given input text sequence by incorporating the prior information from the online website dictionary. For deriving the pronunciations, the paper proposes a semantics-to-pronunciation attention (S2PA) module which finds the correct pronunciations by matching the semantic information between the input text sequence and the dictionary entries. The proposed S2PA module can be incorporated into the end-to-end TTS system and can be trained simultaneously with the TTS system using the mel-spectrogram reconstruction loss. The idea is interesting and is validated by extensive experiments on three datasets with different languages. Strengths:\n\n1) The idea of using prior knowledge from the dictionary is interesting, and the method of using S2PA module is novel. Specifically, the S2PA module incorporates the semantic information for polyphone disambiguation, which imitates the “dictionary lookup” practice in human daily life.\n\n2) The proposed method for polyphone disambiguation (grapheme to phoneme) can be trained in an unsupervised manner and can be trained simultaneously with the TTS modules in an end-to-end manner. The method greatly eases the process for building a TTS system which directly accepts the raw text as input (i.e. character-based TTS system).\n\n3) The proposed method also provides the possibility to pre-train the model on large-scale ASR dataset to improve the generalization capacity for improving the polyphone disambiguation performance.\n\nWeaknesses:\n\nMy main concerns are mainly related to the experiments. Please refer to the following Questions section for details.\n 1) In Section 4.4 “Ablation Studies”, the paper claims that “Dict-TTS successfully decompose the character representation, pronunciation, and semantics, which significantly improves the pronunciation accuracy.” How could this conclusion be drawn from the ablation study experiments? For the proposed Dict-TTS, removing the top two layers of the linguistic encoder only increases the PER and SER slightly. But do the results indicate the success of decomposition (of character representation, pronunciation, and semantics)? The paper should give more explanations about this, or should prevent over-claiming.\n\n2) Also in Section 4.4, the paper reports the results by removing the top two layers of the linguistic encoder for different systems including the character-based, phoneme-based and the Dict-TTS. Why are these top two layers of the linguistic encoder important? What about the other layers of the linguistic encoder?\n\n3) In Section 3.4, it is said that “our method are compatible with the predefined results … by directly adding specific rules to pronunciation weight $w_{i,j}$. It would be interesting to elaborate more on how to add the rules to the pronunciation weight.\n\n4) In Section 3.3, for a pronunciation $p_{i,j}$, it may corresponds to a dictionary entry $e_{i,j}$ with several different items including different definition examples and different usage examples, etc. For example, in Figure 1, the first pronunciation corresponds to two definition examples, and seven or eight usage examples. Are these examples merged together as a single character sequence, leading to $[e_{i,j,1}, …, e_{i,j,u}]$ with $u$ characters? The authors should give more details about this design. 
Will the operation of merging different examples into a single character sequence affect the performance?\n\n5) In Section 3.3, some terms might be confusing and need to be clarified. \n\n* Line 202, $a_{i,1}$ should be $a_{i,j}$?\n\n* Line 202, $c_i$ might be confused with the character $c_i$ in Dictionary (i.e. $c_i$ in Line 186). The $c_i$ in Figure 4 should also be clarified.\n\n* Line 206, it is not clear what the term $a_{i,j,k}$ means.\n Yes, the authors address the impacts and limitations in Appendix E. ",
" This paper describes an approach to leverage available human readable dictionaries to help improve TTS pronunciation modeling, specifically for polyphonous lexical items. The aim here is to leverage the semantic information from the definition and example to guide selection of an appropriation pronunciations. Strengths\n* Clear motivation, mostly very well described technical approach.\n* Good improvement over comparable studies.\n* Interesting use of distant or weak supervision for this task.\n\nWeaknesses\n* The problem being solved may be somewhat narrow, though important for TTS. \n* The ablation study of removing network layers is not altogether convincing. * It would be useful to demonstrate how substantial a problem this is for a variety of languages. How necessary is solving this problem for delivering high quality TTS? Some additional motivation to this end could be helpful.\n\n* In section 3.3. how much preprocessing of the dictionary text is important here? e.g. is the decomposition into \"Character\" \"Pronunciation\" \"Definitions\" \"Usages\" always used? Are there ever errors to this decomposition?\n\n* In Section 3.3 it is claimed that \"..the model can easily deduce the correct pronunciation and semantics based on the lexicon knowledge like human brain\". In what way is this like the human brain's processing? Would any function that uses a \"semantic\" embedding to disambiguate pronunciation be \"like the human brain\"?\n\n* Section 3.4 Do you have any insight into the quality of the pronunciations obtained from the \"low-quality text-speech pairs\" obtained from ASR data? E.g. Librispeech is divided into \"clean\" and \"other\" partitions based on how easy an ASR model can recognize them. It would be interesting to understand the quality/quantity tradeoffs that are made in this work.\n\n* Table 1 - how much should PER-O be trusted as a gold standard? In some languages there are multiple valid pronunciations of worse e.g. English \"the\" pronounced as /ðə/ vs. /ðiː/. The example pronunciations are quite different, but to what degree is some pronunciation variation tolerated?\n\n* Table 2 A comparison to PNG-BERT would make sense here since it is another approach to include semantics in the TTS frontend. Is there a reason this is not used?\n\n* Table 3: is the Dict-TTS entry pretrained or not?\n\n* Table 3: is there an explanation for why the quality score is impacted by the pronunciation representation? It is particularly interesting that the prosody score is positively impacted by the improved pronunciation modeling of Dict-TTS. How much interaction if any is there between pronunciation, lexical tone, and subjective prosody responses here? \n\n* Table 3: the DTW measure would be impacted by both lexical tone and prosodic realization. Is there any effort to disentangle these when evaluating the impact of these approaches?\n\n* Table 4, Section 4.4. This ablation studey of removing two layers is somewhat strange. Without retraining the model with fewer layers, there isn't an expectation that useful information would be available from lower layers to be used by a higher layer in a network. Some limitations of the work are discussed. The potential negative societal impact of the work is not addressed.",
" This work presents a method to increase the pronunciation accuracy of a speech synthesis system without phoneme labels by using an online dictionary. ## Strengths\n1. High intelligibility can be obtained without phoneme label in Text-to-Speech task using logogram.\\\n\n## Weaknesses\n1. The authors claim to present a method to solve the polyphone disambiguation problem, which is a problem in logograms such as Chinese, but is less problematic in phonograms such as English. The authors argue in this section that the characteristics of the logographic writing system can be easily extended to alphabetic languages (e.g., English) by replacing “character” with “word”, but this seems to be an inappropriate explanation that does not take into account that the phonogram basically displays the pronunciation unlike the logogram.\n\n1. The authors mention the loss of information caused by converting grapheme to phoneme in comparison with “Phoneme-based TTS systems”, but it is not related to conversion errors that occur during the conversion process to phoneme (An error in which Pinyin is converted differently from context or semantic meaning.), which the authors tried to solve. The problem of information loss in the conversion process to phoneme has already been explored in previous work such as [1] and [2], and this comparison is not appropriate because it is not the same as the problem claimed in this work.\n\n1. In addition, the authors excluded the space token from the comparison between “I scream” and “Icecream”, this is an inappropriate example given that unlike Chinese which the space token is not commonly used, the space token is information that greatly influences prosody prediction in English.\n\n1. The method presented in section 3.3 does not seem to be effective considering that the dictionary is composed of word units and that one grapheme can be mapped to multiple phonemes in many cases in phonograms(e.g., grapheme ‘e’ in English). The authors need to explain the limitations of the presented method and modify the scope of the claims.\n\n1. The authors claim that the character representations are well distributed in the semantic space. To support this, the authors need to prove how the semantic space is defined and well distributed. It is not appropriate to claim that the character representations are well distributed in the semantic space based solely on predicting pronunciation well.\n\n1. Referring to Table 1, in the case of Japanese, the performance is significantly lower than when using the open source G2P module, and the authors claim this as “which demonstrates the superiority of the explicit semantics matching in our S2PA module.”.\nIt is difficult to understand how this result demonstrates superiority.\nIn addition, the authors assert in the Conclusion as follows:\n“Our experimental results in three languages show that Dict-TTS outperforms several strong G2P baseline models in terms of pronunciation accuracy and improves the prosody modeling of the baseline TTS system.”\nClearly, it showed better performance for only two languages and a significant performance degradation for Japanese, so I think this part also has a problem with the argument, and I have doubts about the overall completeness and reliability of the paper.\n\n1. In terms of generalization, “Biaobei” and “HK” are composed entirely of logograms, and “JSUT” is different in that it is a mixture of phonograms and logograms. 
According to the experimental results presented by the authors, the proposed method appears to work only for logograms; it can be viewed as a solution that operates under specific conditions, and it is not of high significance compared to solutions that generalize.\n\n\n[1] Jia, Ye, et al. \"PnG BERT: Augmented BERT on phonemes and graphemes for neural TTS.\" arXiv preprint arXiv:2103.15060 (2021).\\\n[2] Kastner, Kyle, et al. \"Representation mixing for TTS synthesis.\" ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019. The authors need to clarify the limitations of the presented method and the points described above. Comments on the limitations of the work are mentioned above. \\\nThere is no negative societal impact.",
" Polyphone disambiguation is a big challenge for end-to-end TTS systems. The paper proposes Dict-TTS, an approach for phonemic disambiguation that can be trained jointly with an end-to-end neural TTS system. The key intuition behind the approach is that semantic context in the input text can be used to select the most appropriate pronunciation from a dictionary. To achieve this goal, the input text is first encoded using a transformer based semantic encoder. The output of this encoder is then used to compute attention weights over the entries in a dictionary using a semantic-to-pronunciation attention module. The embeddings of the most likely pronunciation and the corresponding semantic encoding are fed to the linguistic encoder in the end-to-end TTS system. The paper shows that DictTTS outperforms character and phoneme baselines (obtained using commonly used grapheme-to-phoneme systems) in terms of pronunciation accuracy in Chinese, Japanese and Cantonese. Pre-training on an ASR dataset can further improve accuracy. The paper also shows that compared to a phoneme-based system, an end-to-end TTS system that incorporates DictTTS achieves similar overall MOS scores and better MOS-prosody scores.\n Strengths:\n* Proposes a new approach for polyphone disambiguation which can be jointly trained with the end-to-end TTS model using mel-spectrogram reconstruction loss without requiring phonemic labels, which are expensive to obtain.\n* Performance of the system improves by pre-training on an ASR corpus.\n* The approach yield improvements in pronunciation accuracy on three languages and improves in prosody of the underlying TTS system\n\nWeaknesses:\n* Some parts of the paper are hard to follow (see details below)\n\nUpdate: In their revisions, the authors have adequately addressed my concerns. * The conclusions from the ablation study in Sec 4.4 is not very clear. The paper states that when the top-2 layers of the character based system are removed, the PER/SER increase rapidly but this is not the case for either the phoneme-based system or DictTTS. The paper claims this demonstrates that the Dict-TTS successfully decomposes the character representation, pronunciation, and semantics, which significantly improves the pronunciation accuracy. Why is this the case?\n* Why is the attention vector 2-dimensional in Equation 1 i.e. a_{i,1} … a_{i,m} but 3-dimensional on L206 i.e. w_{ij} = \\sum_{k=1}^{u} a_{ijk}?\n* Did you try an experiment where you used a weighted sum of the embeddings of the pronuncations/semantics vs using the most likely pronunciation?\n* Does the dictionary need to be limited to a smaller set of characters for a given input text? \nL184 says: \"D which contains a sequence of characters C = [c1 , c2 , ..., cn ], where n is the size of characters set in a language\" If the language has a large character set, computing an attention weight over the entire dictionary for each input text will be expensive. Do you need any computational shortcuts for the S2PA module to be practical?\n* Could you comment whether the noise in the dictionary affects the performance of this approach?\n\n* typo: L21/92 'there' -> 'they'\n\n The authors have discussed some limitations in Sec 5. The authors have also discussed potential negative societal impact of their work."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
4,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5,
5
] | [
"2iJFV05vZU5",
"nips_2022_HEcYYV5MPxa",
"nips_2022_HEcYYV5MPxa",
"OlLILDjgYuw",
"iYXCU_MwylK",
"QmMjs-LLz8k",
"vBQbgBgoA1",
"GI1tqKa4zy_",
"kR70CKfVFkH",
"nips_2022_HEcYYV5MPxa",
"nips_2022_HEcYYV5MPxa",
"nips_2022_HEcYYV5MPxa",
"nips_2022_HEcYYV5MPxa"
] |
nips_2022_vhKaBdOOobB | GhostNetV2: Enhance Cheap Operation with Long-Range Attention | Light-weight convolutional neural networks (CNNs) are specially designed for applications on mobile devices with faster inference speed. The convolutional operation can only capture local information in a window region, which prevents performance from being further improved. Introducing self-attention into convolution can capture global information well, but it will largely encumber the actual speed. In this paper, we propose a hardware-friendly attention mechanism (dubbed DFC attention) and then present a new GhostNetV2 architecture for mobile applications. The proposed DFC attention is constructed based on fully-connected layers, which can not only execute fast on common hardware but also capture the dependence between long-range pixels. We further revisit the expressiveness bottleneck in previous GhostNet and propose to enhance expanded features produced by cheap operations with DFC attention, so that a GhostNetV2 block can aggregate local and long-range information simultaneously. Extensive experiments demonstrate the superiority of GhostNetV2 over existing architectures. For example, it achieves 75.3% top-1 accuracy on ImageNet with 167M FLOPs, significantly surpassing GhostNetV1 (74.5%) with a similar computational cost. The source code will be available at https://github.com/huawei-noah/Efficient-AI-Backbones/tree/master/ghostnetv2_pytorch and https://gitee.com/mindspore/models/tree/master/research/cv/ghostnetv2. | Accept | This paper aims to augment efficient CNNs with self-attention. However, since the naive approach to self-attention is computationally expensive and would contradict the point of efficient CNNs, the authors introduce a new attention mechanism which captures long-range information without substantially added computation cost. The paper demonstrates that GhostNetV2 exhibits markedly better performance at various compute limits as compared to previously proposed efficient networks. Three of the reviewers were quite positive on this paper, noting the novelty of the approach and the strength of the empirical results. One reviewer had several concerns, primarily regarding comparison to NAS-based approaches and the novelty of the approach. I agree with the other reviewers that it is not reasonable to compare NAS approaches to non-NAS approaches, and agree that there are marked differences between this work and the previous work cited. I therefore recommend acceptance. I think this will be a valuable contribution to the efficient network community. | val | [
"VS4hQobanBh",
"EPtLi4LZTaE",
"B6AmHzyy0Sv",
"swqgwmTN6Lq",
"edvNiHYiDEx",
"vWNJ4wxaFZ",
"RTwOorslz7G",
"kNGapMAj-8",
"k3JWlEJAjD1",
"Hya34Za4oB9",
"8C1AgMTFes",
"oby17P7M50",
"3NlUgry_7Dt",
"oURXecLNmwd",
"L8OIDMusXmS",
"SmKnmcr_lAL",
"u6iftgwjhb"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear area chair and anonymous reviewers,\n\nThanks for your constructive comments and valuable suggestions to improve this paper. We have revised the manuscript and supplemental materials by improving the presentation and including more experiments, discussions, and explanations. If you have any questions, we are glad to discuss them with you.\n\nRegards \n",
" Thanks for your support and constructive comments!\n\nRegards",
" Dear Reviewer Fb8Z:\n\nThanks for your feedback and valuable suggestions! We have revised the manuscript and supplemental materials. We improve our writing and include additional explanations, diagrams, and discussions to make the paper clear.\n\nRegards\n",
" Thank you for your clarifying comments. I believe this paper would be interesting to the community, especially if the paper is later updated to include the additional explanations and make the writing / diagrams a bit more clear. I have increased my score as a result.",
" Thanks for your response. The rebuttal has well addressed my questions. I support this paper for its novelty and solid experiments. GhostNetV2 is a new and efficient architecture with SOTA performance and great potential. It may inspire new works in the future, such as searching its architecture configures or improving training recipes to pursue higher performance.\n \nThus I vote to accept this manuscript strongly.",
" Dear Reviewer L8vJ,\n\nThanks for your support and constructive comments. \n\nRegards\n\n",
" Dear Reviewer BfQb,\n\nThanks for your constructive review. Has our response resolved your concerns? If there are other questions, we are glad to discuss them with you. \n\nRegards",
" I appreciate the responses from the authors and keep my original rating unchanged regarding the significance of the novelty and the good experimental results. ",
" Thanks for the constructive comments.\n\n**Q1**: In Sec. 3.1, the authors say that only half of the channels are used for encoding spatial information with a depthwise conv in a ghost module and claim that this may be a performance bottleneck of GhostNet. How could you prove this?\n\n**A1**: This is an interesting question. Convolution operations in CNN have a weak ability to capture long-range spatial information. It prevents CNN's performance from further improvement, which is empirically proved by recent works about vision transformers [r1]. As a lightweight CNN, GhostNet even uses smaller kernel sizes (i.e., 1x1 convolution) in half of the channels, which intuitively incurs a weaker ability to model spatial information. This intuition inspires us to design the DFC attention to capture the long-range information, which indeed improves the performance significantly.\n\n**Q2:** The performance on downstream tasks, like COCO object detection, seems not that surprising but considering the low computations, it is acceptable.\n\n**A2:** Thanks for your constructive comments. COCO object detection with lightweight backbones is a challenging task. Compared with the existing architectures, GhostNetV2 achieves higher mAP. In the future, we will continue exploring how to improve the performance of lightweight architectures on downstream tasks.\n\n**Q3**: English and abbreviations. \n\n**A3**: Thanks for your suggestion. We will polish the writing and fix typos carefully in the final version.\n\n**Q4:** Tables 9 and 10 could be redesigned to make the arrangement look better.\n\n**A4**: Thanks for your suggestion. We will redesign these two tables for better presentation in the final version.\n\n[r1] An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale.",
" Thanks for the constructive comments.\n\n**Q1**: The DFC attention captures the long-range dependence with two sequential FC layers, which aggregate information along with horizontal and vertical directions, respectively. Adding a non-linear activation function between them may reflect a more complex relationship between different pixels. It is interesting to investigate the effectiveness of this non-linear function empirically.\n\n**A1**: Thanks for your suggestion. We further conduct experiments by inserting a ReLU function between the two FC layers. It slightly improves the performance. For example, the top-1 accuracy of GhostNetV2 is improved from 75.3% to 75.4%.\n\n**Q2**: For intuitive presentation, it is better to describe the proposed architectures in the captions of Figure 4.\n\n**A2**: Thanks for your suggestion. We will revise the captions to describe the architecture in the final version.",
" **Q1**: Nits. \n\n**A1**: Thanks for your careful review. We will fix all the typos in the final version.\n\n**Q2**: The description and diagram of the actual approach are somewhat confusing (see the question section).\n\n**A2**: Thanks for your constructive comments. We detailedly answer the question in **A5**.\n\n**Q3**: It would be good to study what this attention ends up attending over (and compared for vertical/horizontal att vs full att).\n\n**A3**: Thanks for your suggestion. We further visualize the attention of vertical/horizontal attention and full attention in this figure (https://i.postimg.cc/c4tJTKDR/Attention-Visualization.png) and show their calculation process with a diagram (https://i.postimg.cc/13JqKCGd/Diagrams-decoupled-attention-full-attention.png). In full attention, all the patches in a $N \\times N$ region participate in the calculation of the focused patch directly. For the decoupled attention, a patch is directly aggregated by patches in its vertical/horizontal lines, while other patches participate in the generation of those patches in the vertical/horizontal lines, having an indirect relationship with the focused token. Thus the calculation of a patch also involves all the patches in the $N \\times N$ region. \n\nWe visualize the attention produced by stacking vertical and horizontal attentions and compare it with full attention. In low layers, the decoupled attention shows some cross-shaped patterns, indicating patches from the vertical/horizontal lines participate more. As the depth increases, the pattern of the attention map diffuses and becomes more similar to the full attention. \n\n**Q4**: This is not quite global attention because you attend over patches aligned horizontally or vertically, and hope that previous blocks captured sufficient global context in their respective patches. one way to test whether they do is to make the network deeper, but narrower (to control for the number of params) and see if the accuracy improves. Have you considered this, along with visualizing attention maps to see what is actually being attended. \n\n **A4**: This is an interesting perspective. We make the network deeper and narrower so as to keep FLOPs similar. Their comparisons are shown below, where 'GhostNetV2-x denotes there are x blocks'. Increasing the network's depths can indeed improve the performance, and we infer that the long-range information along the two directions can be mixed more thoroughly as the depth increase. Over increasing depth does harm to the performance as the long-range information is saturated but the channels are too few. This phenomenon is consistent with the attention's visualization and analysis in **A3**. \n\n| Model | GhostNetV2-16 | GhostNetV2-20 | GhostNetV2-25 | GhostNetV2-30 |\n| ------------------ | ------------------------------ | ---- | ---- | ---- |\n| Accuracy | 75.3 | 75.7 | 75.6 | 75.1 |\n\n**Q5**: I am a bit confused about figure 4 and equation 6 and the preceding paragraph. Is it saying that you compute your horizontal/vertical attention (via a linear projection) followed by sigmoid followed by elementwise multiplication with the output of the ghost module (which is of dim HxWxC). Does this not mean you are just rescaling the output of the ghost module? if not can you please explain and also possibly clarify in the paper how exactly you are attending over/aggregating info from different patches?\n\n**A5**: Thanks for your suggestion. 
We further clarify the information aggregation process with this figure (https://i.postimg.cc/Vkjtbsvz/Fusion.png). With the same input, the Ghost module and DFC attention are two parallel branches extracting information from different perspectives. The output is their element-wise product, which contains information from both the Ghost module's features and the DFC attention module's attention values. The calculation of each attention value involves $N \times N$ patches in a large range, so that the output feature can contain information from these patches. ",
" **Q1**: The proposed solution, Equation 4,5, is not depthwise convolution with kernel $K_H$ and $K_W$. In equation 4 and 5, the weight matrix F has HWC parameters. These parameters are not shared for each column or row. This is more like a batch matrix multiplication, but is different from depthwise convolution which has $K_H$ and $K_W$ parameters. According to table 5, it seems that depthwise convolution is being used in practice.\n\n**A1:** Eqs. 4 and 5 denote the general formulation of DFC attention, which aggregates pixels along horizontal and vertical directions, respectively. The implementation strategy is discussed in Line 168-174 of the submitted manuscript. With weight sharing, depth-wise convolution with kernel $1\\times K_H$ and $K_W\\times1$ can accomplish the aggregating process along the two directions as Eqs. 4 and 5. This strategy is well supported by tools such as TFLite and ONNX for fast inference on mobile devices. \n\n**Q2**: Limited novelty with height-width decoupled depthwise convolutions [spatially separable convolution] and SE-like spatial-channel attention. Both of the techniques have been heavily explored in the community.\n\n**A2**: This paper discusses a practical and important problem, i.e., how to design a spatial attention mechanism for efficient architectures. It is required to capture long-range spatial dependency and be efficiently deployed on mobile devices as well, while the existing methods cannot satisfy them simultaneously. Though the proposed DFC attention is concept-simple, it satisfies the two properties and helps develop a new GhostNetV2 architecture with higher performance and lower latency. We argue that the discussion about designing hardware-friendly spatial attention can bring new perspectives to the community and the proposed GhostNetV2 architecture can be practically applied to various mobile devices.\n\nThe representative SE-like attentions include SE[11], CBAM [31] and CA [8]. There are many differences between the proposed DFC attention and these methods. For example, instead of global pooling along height or width in SE-like attentions, our DFC attention keeps the H/2xW/2 size which is beneficial for spatial and fine-grained information. Besides, the horizontal and vertical attentions are conducted sequentially, which can involve patches more efficiently than the conventional parallel formulation. Our DFC attention is compared with these SE-like attentions in Table 4 of the submitted manuscript, which outperforms SE-like attentions by a significant margin.\n\n**Q3**: Marginal performance gain when FLOPs is controlled. For example, Auto-NL [r1] also studied lightweight self-attention, achieved 77.7% on ImageNet with 353M FLOPS and 5.6M parameters two years ago, compared with this paper, 77.8% with 399M FLOPs and 12.3M parameters. Although it is not clear what latency Auto-NL needs, this related work is not discussed or compared.\n\n**A3**: Thanks for your suggestion. Auto-NL [r1] is a nice work, and we further compare our method with it. Auto-NL follows the typical paradigm of self-attention (i.e., $(xx^T)x$ or $x(x^Tx)$), whose computational cost is saved by reducing the feature's dimensions and replacing $1\\times1$ convolution with light-weight depthwise convolution. It also requires 'einsum', tensor reshaping, and transposing operations for practical implementation, which incur large latency. 
Since the original paper [r1] only reports theoretical FLOPs without practical latency, we measure its latency using the same device as for GhostNetV2 (Huawei P30 with Kirin 980 CPU) and show the results as follows. AutoNL suffers much higher latency (76.4ms vs. 56.7ms) than GhostNetV2 with lower accuracy (76.5% vs. 76.9%).\n\n| Model | FLOPs (M) | Latency (ms) | Top-1 Accuracy (%) | Top-5 Accuracy (%) |\n| --------------- | --------- | ------------ | ------------------ | ------------------ |\n| AutoNL-S | 267 | 76.4 | 76.5 | 93.1 |\n| GhostNetV2 1.3× | 269 | 56.7 | 76.9 | 93.4 |\n| AutoNL-L | 353 | 101.6 | 77.7 | 93.7 |\n| GhostNetV2 1.6× | 399 | 77.6 | 77.8 | 93.8 |\n\nBesides, Auto-NL and GhostNetV2 actually focus on different aspects of designing architectures. Auto-NL is a NAS-based method, which searches the architecture configuration (e.g., locations for inserting LightNL, the number of channels in each layer) to pursue high performance. GhostNetV2, in contrast, focuses on how to design a hardware-friendly attention mechanism and does not optimize the network architecture. Searching the network configuration has the potential to further improve the performance of GhostNetV2, but it may be out of this paper's scope.",
" **Q4**: Downsampling and upsampling part of the feature map has been explored too in [r2]. And it leads to gains as well. So it is not clear why DFC works.\n\n**A4**: Feature downsampling is actually a direct idea when the computational cost is excessive, and here we adopt it to reduce DFC attention's computational cost. In [r2], half of the features are down-sampled in block level to reduce FLOPs, while our DFC attention downsamples the features in the attention path. We apply Elastic method on GhostNetV1 to verify its effectiveness on mobile networks. The results are shown below, where the model's width is adjusted to align the FLOPs. With similar FLOPs, downsampling a part of features (GhostNetV1 + Elastic) can improve the performance, but it is much inferior to that of using DFC attention. We infer that DFC attention can capture long-range information and improve performance more effectively.\n\n| Model | FLOPs (M) | Top-1 Accuracy (%) | Top-5 Accuracy (%) |\n| ------------------- | --------- | ------------------ | ------------------ |\n| GhostNetV1 | 168 | 74.5 | 92.0 |\n| GhostNetV1+ Elastic | 172 | 74.8 | 92.1 |\n| GhostNetV2 | 167 | 75.3 | 92.4 |\n\n**Q5**: Latency is important to the paper, so instead of saying “an ARM-based mobile device”, is it possible to specify which device exactly, then it is possible for others to reproduce the latency results and make meaningful comparisons.\n\n**A5**: Thanks for your suggestion. The practical latency is measured on Huawei P30 (Kirin 980 CPU) with TFLite tool. \n\n**Q6-1**: Related to the point above, FBNet reports a latency of 28.1ms, but this paper reports around 70ms. Is this mainly caused by device difference and optimization? \n\n**A6-1**: Yes, the devices and implementation tools usually incur a large difference in practical latency, e.g., OFA [r3] (Figure 10) reports 76.3% top-1 with 89ms on Samsung S7 Edge, and 76.4% top-1 with 58ms on Pixel 1. It is hard to compare the model's latency with different devices and implement tools. Moreover, the available devices are different for various companies or institutions, e.g., Google usually implements models on Pixel series phones [r4], while Apple uses iPhones [r5]. Thus a widely-used comparison strategy is to measure different models' speeds on the same device as we did in the paper.\n\n**Q6-2**: Also, [r3] reports multiple models on multiple mobile devices, e.g. 76.1% with 22ms on Samsung Note 10, or 76.9% with 58ms on Pixel 1. Is it possible to compare the latency results with the literature? \n\n**A6-2**: OFA [r3] is a latency-aware NAS method that searches the architecture configures for specific hardware. It also uses more training tricks (e.g., progressive shrinking, knowledge distillation) to improve performance. While our method is to propose a universe module without optimizing for a specific device. These two methods focus on different aspects and have the potential to be combined.\n\n**Q6-3**: Similarly, MobileViT reports a 17.86ms latency on iPhone 12, which is much faster than the GhostNet+self attention baseline in Table 1, probably due to a larger feature resolution in the comparison.\n\n**A6-3**: For both “GhostNet+Self attention” and “GhostNet+ DFC Attention”, we both use the standard input’s resolution of ImageNet, i.e., 224x224. Thus their comparison is fair. MobieViT [r5] implements the model with CoreML on iPhone 12, where MobileViT-XS is much slower than MobileNetV2 (Table 11 in [r5]). 
MobileViT-XS achieves 74.8% accuracy with 700M FLOPs, while our GhostNetV2 1.3$\\times$ achieves higher performance (76.9%) with a much lower computational cost (269M). \n\n**Q7 (minor)**: Should the latter $a$ be $a'$ in equation 5?\n\n**A7**: Thanks for your careful review. The latter $a$ should be $a'$ in Eq. 5 and we will fix it in the final version.\n\n**Q8 (minor)**: In equation 4, for each h, the equation does not depend on h. Does it mean that the output is the same for each h? If so, equation 5 does not depend on w?\n\n**A8**: The transformation weights in Eqs. 4 and 5 are $F^H_{h,h'w}$ and $F^W_{w,hw'}$, respectively. So Eq. 4 depends on h and Eq. 5 depends on w. Sorry for the typo.\n\n[r1] Neural Architecture Search for Lightweight Non-Local Networks, CVPR 2020. \n\n[r2] ELASTIC: Improving CNNs with Dynamic Scaling Policies, CVPR 2019.\n\n[r3] Once for All: Train One Network and Specialize it for Efficient Deployment, ICLR 2020.\n\n[r4] MobileNetV2: Inverted Residuals and Linear Bottlenecks, CVPR 2018.\n\n[r5] MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer, ICLR 2022.",
" This paper proposes a cheap attention module that focuses on mobile settings, e.g. limited FLOPs and latency. The cheap attention module is implemented as two depthwise convolutions, followed by sigmoid attention. The module is mainly used to augment GhostNet and achieves improvements on ImageNet classification and downstream tasks, e.g. detection, and segmentation. ### Strengths\n1. There are many ablation studies that validate the model design choices.\n2. Extensive experiments on multiple large scale tasks and datasets.\n\n### Weaknesses\n1. The proposed solution, Equation 4,5, is not depthwise convolution with kernel $K_H$ and $K_W$. In equation 4 and 5, the weight matrix F has HWC parameters. These parameters are not shared for each column or row. This is more like a batch matrix multiplication, but is different from depthwise convolution which has $K_H$ and $K_W$ parameters. According to table 5, it seems that depthwise convolution is being used in practice.\n2. Limited novelty with height-width decoupled depthwise convolutions [spatially separable convolution] and SE-like spatial-channel attention. Both of the techniques have been heavily explored in the community.\n3. Marginal performance gain when FLOPs is controlled. For example, Auto-NL [1] also studied lightweight self-attention, achieved 77.7% on ImageNet with 353M FLOPS and 5.6M parameters two years ago, compared with this paper, 77.8% with 399M FLOPs and 12.3M parameters. Although it is not clear what latency Auto-NL needs, this related work is not discussed or compared.\n4. Downsampling and upsampling part of the feature map has been explored too in [2]. And it leads to gains as well. So it is not clear why DFC works.\n5. Latency is important to the paper, so instead of saying “an ARM-based mobile device”, is it possible to specify which device exactly, then it is possible for others to reproduce the latency results and make meaningful comparisons.\n6. Related to the point above, FBNet reports a latency of 28.1ms, but this paper reports around 70ms. Is this mainly caused by device difference and optimization? Also, [3] reports multiple models on multiple mobile devices, e.g. 76.1% with 22ms on Samsung Note 10, or 76.9% with 58ms on Pixel 1. Is it possible to compare the latency results with the literature? Similarly, MobileViT reports a 17.86ms latency on iPhone 12, which is much faster than the GhostNet+self attention baseline in Table 1, probably due to a larger feature resolution in the comparison.\n\n[1] Neural Architecture Search for Lightweight Non-Local Networks, CVPR 2020.\n[2] ELASTIC: Improving CNNs with Dynamic Scaling Policies, CVPR 2019.\n[3] Once for All: Train One Network and Specialize it for Efficient Deployment, ICLR 2020. Minor: should the latter $a$ be $a\\prime$ in equation 5?\n\nAlso, in equation 4, for each h, the equation does not depend on h. Does it mean that the output $a\\prime$ is the same for each h? If so, equation 5 does not depend on w? Yes.",
" In this paper, the authors propose a \"decoupled fully connected\" (DFC) attention module which can significantly improve the performance of image classification models such as GhostNet and MobileNet at a much lower cost than typical self-attention like those used in transformers. This is achieved by aggregating information on the horizontal and vertical axis (instead of over the entire image) in a downsampled feature space and combining the output of this module with the output of standard network block.\n\nExperiments show that this approach empirically achieves significant improvement in accuracy at a modest cost in throughput, and outperforms previous state of the art for efficient mobile-friendly image classification networks.\n\nNits:\n\n69-70: Until now, GhostNet is still the SOTA light-weight model with a good trade-off between accuracy and speed. -> Until now, GhostNet has been the SOTA light-weight model with a good trade-off between accuracy and speed.\n\n71: Besides manual design, a series of methods try to search a light-weight architecture. -> Besides manual design, a series of methods try to search for a light-weight architecture.\n\n199: practica -> practical\n\n215: Experiment -> experiments Strengths:\n- Experiments and ablation studies are well designed and show the value of this work\n- The results are convincing\n- The approach appears to be novel\n\nWeaknesses:\n- The description and diagram of the actual approach is somewhat confusing (see the question section)\n- It would be good to study what this attention ends up attending over (and compared for vertical/horizontal att vs full att) - This is not quite global attention because you attend over patches aligned horizontally or vertically, and hope that previous blocks captured sufficient global context in their respective patches. one way to test whether they do is to make the network deeper, but narrower (to control for number of params) and see if the accuracy improves. Have you considered this, along with visualizing attention maps to see what is actually being attended\n\n- I am a bit confused about figure 4 and equation 6 and the preceding paragraph. Is it saying that you compute your the horizontal/vertical attention (via a linear projection) followed by sigmoid followed by elementwise multiplication with the output of the ghost module (which is of dim HxWxC). Does this not mean you are just rescaling the output of the ghost module? if not can you please explain and also possibly clarify in the paper how exactly you are attending over/aggregating info from different patches? limitations were addressed adequatly",
" This paper proposes a hardware friendly attention module and then present a light-weight neural architecture for general vision tasks. It finds that light-weight neural networks have weak ability to capture the global information, which is the bottleneck restricting the representation ability. The attention module is only constructed by fully-connected layers, which can efficiently capture the global information without complex operations. Then a light-weight architecture is constructed, which achieves SOTA performance on various vision tasks, such as image classification and object detection. +It is vital to design light-weight neural networks with low latency on edge devices (e.g., ARM CPU) for implementing AI models. Due to the strict constraint on practical latency, improving its performance is a very challenging task. Some complex operations may have low theoretical complexity, but will incur high practical latency as they are not hard-ware friendly. This paper finds the performance bottleneck of light-weight models and presents an efficient DFC attention to capture the global information, which can significantly improve performance. \n\n+The empirical results are impressive. Based on DFC attention, the GhostNetV2 model achieves significantly higher performance (about 1 point) than the existing architectures. Both with attention mechanism, GhostNetV2 achieves much higher performance than MobileViT (77.8% v.s. 74.8%) with lower computational cost (399M v.s. 700M). \n\n+The proposed architecture has strong generalization ability and can be used in diverse tasks (such as image classification and object detection), as it does not introduce prior knowledge of a specific task. Considering its SOTA performance, easy implementation, and strong generalization ability, it can play as the backbone to improve the model’s performance on various tasks.\n\nThough the proposed architecture is novel and effective, I still have some suggestions to further improve its impact on the community. \n\n- The DFC attention captures the long-range dependence with two sequential FC layers, which aggregate information along with horizontal and vertical directions, respectively. Adding a non-linear activation function between them may reflect a more complex relationship between different pixels. It is interesting to investigate the effectiveness of this non-linear function empirically.\n\n-For intuitive presentation, it is better to describe the proposed architectures in the captions of Figure 4.\n\n+This paper explores an interesting direction to leverage the attention mechanism, which may inspire the community. Transformer achieves high performance owing to its strong ability for capturing global information. However, it suffers high computational complexity and complex formulation, which is not hard-ware friendly when implemented on edge devices. The proposed method uses fully-connected layers to implement the attention operation, which is both effective and efficient. \n Please see the weaknesses. Further investigating the impact of non-linear functions will make the paper stronger. Yes",
" This paper improves a previous work, named GhostNet. The original GhostNet aims to eliminate the effect of those uninformative feature maps by introducing the ghost module. However, the drawback of the original GhostNet is the lack of the ability to capture long-range relationships among pixels. This paper is motivated by this and designs an edge device friendly attention mechanism, which runs fast and performs well on ImageNet classification.\n\nOverall speaking, the quality of this paper is good. The novelty of this paper is significant. Thorough experiments are also conduct to demonstrate the effectiveness of the proposed approach. The strengths of this paper is clear.\n\n- The proposed light-weight attention mechanism is interesting. It simplifies the standard self-attention and the performance does not drop. I think it could be considered as a promising way to encode global context for mobile networks.\n\n- The analysis of this paper is sufficient. The authors carefully analyze why self-attention is not friendly to edge devices and show lots of results to support this.\n\n- The results on ImageNet are great. Compared to most of previous models for mobile devices, this paper performs better.\n\nWeaknesses:\n\nI do not think there are any major red flags, but some minor to moderate concerns that should be addressed.\n\n- In Sec. 3.1, the authors say that only half of the channels are used for encoding spatial information with a depthwise conv in a ghost module and claim that this may be a performance bottleneck of GhostNet. How could you prove this?\n\n- The performance on downstream tasks, like COCO object detection, seems not that surprising but considering the low computations, it is acceptable.\n\n- The English should be improved, especially the usage of the definite articales.\n\n- The usage of abbreviations. L168 and L183, \"Eq 4, 5\" should be \"Eqs. 4 and 5.\" Tables 9 and 10 could be redesigned to make the arrangement look better. Not found."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6,
8,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
5
] | [
"nips_2022_vhKaBdOOobB",
"edvNiHYiDEx",
"swqgwmTN6Lq",
"8C1AgMTFes",
"Hya34Za4oB9",
"kNGapMAj-8",
"oURXecLNmwd",
"k3JWlEJAjD1",
"u6iftgwjhb",
"SmKnmcr_lAL",
"L8OIDMusXmS",
"oURXecLNmwd",
"oURXecLNmwd",
"nips_2022_vhKaBdOOobB",
"nips_2022_vhKaBdOOobB",
"nips_2022_vhKaBdOOobB",
"nips_2022_vhKaBdOOobB"
] |
nips_2022_-zYfrOl2I6O | CASA: Category-agnostic Skeletal Animal Reconstruction | Recovering a skeletal shape from a monocular video is a longstanding challenge. Prevailing nonrigid animal reconstruction methods often adopt a control-point driven animation model and optimize bone transforms individually without considering skeletal topology, yielding unsatisfactory shape and articulation. In contrast, humans can easily infer the articulation structure of an unknown character by associating it with a seen articulated object in their memory. Inspired by this fact, we present CASA, a novel category-agnostic articulated animal reconstruction method. Our method consists of two components: a video-to-shape retrieval process and a neural inverse graphics framework. During inference, CASA first finds a matched articulated shape from a 3D character asset bank so that the input video scores highly with the rendered image, according to a pretrained image-language model. It then integrates the retrieved character into an inverse graphics framework and jointly infers the shape deformation, skeleton structure, and skinning weights through optimization. Experiments validate the efficacy of our method in shape reconstruction and articulation. We further show that we can use the resulting skeletal-animated character for re-animation. 
| Accept | The paper shows how to combine 3D model retrieval with an inverse graphics framework to recover 3D models of a diverse range of animals from video. The paper also introduces a new dataset of 3D animals that is projected to be of value in future works. 
While one reviewer considers the technical problem to be "an engineering work", the other reviewers, and the AC, consider that the implementation and experimental study of this idea, which is novel in this context, is valuable. 
Based on calibration across other papers and reviews in this AC's stack, the average review score is generally inconsistent with the review text, even given the effective rebuttal. I mention this only because a poster acceptance might seem at odds with average score, but of course the point of meta reviewing is to make a judgement which looks at more than average score. The key decision that might be affected in this case is oral vs poster, so it is perhaps useful to clarify: an oral presentation needs to be of value to the broad NeurIPS community. 3D computer vision is an important subfield, and animal reconstruction is an emerging topic in the subfield, but the learnings of this paper remain essentially within a subfield, so I am confident that poster is the appropriate disposition of this paper.
| train | [
"VF7oPoJf3B3",
"1tNX2xG4mm",
"pXL96kKHOZ6",
"4hNKy6CUspJ",
"f4g18gNqH6T",
"jWeQ3RGE2YW",
"NlI6IS-NT7t",
"VFydtWFE-cJ",
"VnZTuK6QUN",
"6XzKgobF_FU"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Most of my questions are answered adequately. I'd like to raise score to 7. The interesting part of the paper is the 3D skeletal model retrieval given a large database, which provides reasonable constraints when the target object falls roughly within the database. The remaining concern is that the method does not appear faithful to the data (horns of the cow in Fig. 4), which could be due to either lack of observation or unnecessary regularization terms, which needs further discussion.\n\nSuggestion on figures\n- Please visualize the retrieved model together with the reconstruction (similar to lasr did), so that readers understand how much the shape is updated by optimization.\n- Lasr results in Fig. 8 is inconsistent with Fig.7 \n- Fig 6. Note casa/lasr apply symmetry constraint on the occluded body part, but banmo/viser baseline do not. This should be better explained.",
" Thank you for the detailed responses as well as additional experiments to address the raised concerns. \n\nNow that my major concerns are cleared, I raised my score to accept. However, as R1 mentioned, the loss functions on inverse graphics are not substantially novel and I would strongly recommend toning down the contribution claim on neural inverse graphics and better clarifying the novelty. Also since the proposed approach does per-instance optimization, the generalization claim in the sentence of L58-60 does not make sense. I would highly recommend removing it. Thanks!",
" We thank the reviewers for their feedback and helpful suggestions. The reviewers agree our template retrieval is \"*generalizable and flexible*\" (Reviewer ziQq, 4EwH), \"*impressive qualitative results*\" (Reviewer pF6x, ziQq), and \"*the great benefit to the community*\" of our proposed dataset (Reviewer ziQq, 4EwH). \n\nThis comment summarizes the major revisions we make to our submission. We also reply to each reviewer's questions individually. We strongly recommend the reviewers and the ACs read both rebuttal comments and the revised submission. Please do not hesitate to ask follow-up questions during the reviewer-author discussion period. \n\n-------------------------------------------------------\n\n### Comparison studies.\nWe updated and added several baseline methods for comparison (VISER, BANMo, ACFM). **Table 1** and **Figure 6 and 7** summarized the results:\n* We updated **VISER** results following the author's recent bug fixes in June 2022. \n* We added **BANMo** as an additional template-free baseline. \n* We reported (**A-CFM**) and compared it with CASA in supp Table 5 on quadruped animals testset. \n* We contacted the author of **LASR, VISER, and BANMo** and validated that the comparison was fair and correct. \n\n-------------------------------------------------------\n\n### Technical details. \n* We added details on the initialization strategy of CASA (**supp Sec.I**). \n* We provided more details on the CLIP-based retrieval procedure in (**supp Sec.H**). \n* We provided more information about the new dataset in (**supp Table 6**). \n\n-------------------------------------------------------\n\n### Ablation study.\nWe provided a more thorough ablation comparison and discussion, including quantitative performance against retrieval-based methods, fixing initial skinning weight. Please see **supp Table 3 and 4**.\n\n-------------------------------------------------------\n\n### Missing related works. \nWe have added all related works suggested by the reviewers in our rebuttal revision. ",
" ### Ablation study\n\n* **Retrieval strategies**: We show the full retrieved results in supp Table 7 – including Top-1 for each animal. It’s hard to quantitatively evaluate how good retrieval performance is, as our testing set consists of novel categories. That said, we compare the final reconstruction quality between the proposed retrieved skeletal shape vs. other init strategies in supp Table 4. In particular, we include: 1) initializing skinning weight by k-means (mIOU: retrieval init 0.435, k-means init 0.305), 2) initializing shape by sphere (mIOU: retrieval init 0.435, sphere init 0.277). These results demonstrate the necessity of retrieval, as initializing by retrieved skeletal shapes boost the optimization performance by a large margin. In addition, we add the quantitative results on retrieved shapes without optimization in supp Table 3. The experiments show that retrieval does provide reasonable results, since the retrieved shapes achieve relatively good IoU and chamfer distance values without optimization.\n* **Stretchable bones**: We also add the qualitative comparison with/without the flexible bone parameterization in supp Figure 4. Due to the time limits, we did not complete a quantitative evaluation. We will include this in our camera ready. \n\n------------------------------------------------------\n\n### Sample viewpoints\n\nOur 3D asset consists 225 animal categories. We render 180 realistic frames of each animal under different poses from different viewpoints. \n\nWe marginalize the similarity of a query video as follows: 1) given a frame of the video, find the closes image over each animal category using CLIP and store its image similarity score. 2) calculate the similarity score between the video and a given animal category by taking the sum of the similarity between each frame and that animal; 3) take the category with the highest similarity score. To summarize, our retrieval procdure calculate the following function: \n\n$$\\arg\\max_j \\sum_t \\max_v \\langle g_\\mathrm{CLIP}(\\mathbf{I}_t), g_\\mathrm{CLIP}(\\pi(\\mathbf{s}_j, \\mathbf{q}_v)) ) \\rangle,$$\n\nwhere ${\\mathbf{I}_{1...T}}$ is the input video, $\\mathbf{s}_j$ is the $j$-th animal shape, $g_\\mathrm{CLIP}$ is the image embedding network of the CLIP model and $\\pi(\\mathbf{s}_j, \\mathbf{q}_v)$ is the photo-realistic rendering of the articulated shape $\\mathbf{s}_j$ at a randomized skeletal pose $\\mathbf{q}_v$. \n\n------------------------------------------------------\n\n### Dataset statistics\n\nWe provide a detailed comparison between PlanetZoo and other popular dynamic 3D dataset, including DeformingThings4D and SAIL-VOS 3D, in the table below.\n\nDataset | Category | Character | Frame | Realistic texture | RGB | Depth | GT camera | GT mask | GT mesh |\n:-----| :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: |\nDeformingThings4D | 31 | 147 | 122,365 | No | No | No | No | No | Yes |\nSAIL-VOS 3D | 10 | multiple | 111,654 | No | Yes | Yes | Yes | Yes | Yes |\nPlanetZoo | 249 | 249 | 44,820 | Yes | Yes | Yes | Yes | Yes | Yes |",
" ### Horns in Figure 7 and optimization\n\nThank you for pointing this out! We agree with the reviewer that the remained horn is not desirable. \n\n**Shape optimization**: There are two possible ways to deform the canonical shape in the optimization process: 1) the neural displacement field described in Line 208 - 219 of our paper. 2) the changes in bone length. Stretchable bone cannot handle this case as no \"bones\" exist in the horn component. However, the neural displacement field in our paper provides a fine-grained shape deformation, whose flexible parameterization can remove the horn by providing the correct image-based evidence. \n\n**Root causes**: However, we found that the mask and flow energy are small in practice for this case. This is because of 1) the tiny size of the horn region and 2) the majority of the horn regions are rendered inside the mask. Both prevent it from providing strong signals to guide the large deformation. We believe expanding our framework to include photometric loss (minimizing RGB appearance) or even feature-metric loss (minimizing feature difference) will help to overcome this issue. We will add this into the limitation discussion and leave it as a future direction. \n\nAs shown in other qualitative results (e.g. Figure 6, 7 and supp Figure 2), we want to highlight that most of our recovered shapes have faithful and realistic shapes and poses after optimization, suggesting the efficacy of optimization. \n\n----------------------------------------------------------------------------------\n\n### Comparison against ACFM\n\nWe compared our method against the template-based ACFM. Note that ACFM is category-specific; hence we only compare all the quadruped animals in the PlanetZoo testset to ensure a fair comparison. We report the results in supp Table 5. Results show that CASA significantly outperforms ACFM even though the later has a network component trained specifically for quadruped animals (mIOU: CASA 0.499 vs ACFM 0.234). \n\n----------------------------------------------------------------------------------\n\n### Our retrieval + other baselines\n\nDue to time constraints, we have not yet completed adjusting the LASR code to use our retrieval template. Our final version will compare CASA-retrieval + LASR vs. the entire CASA pipeline. This comparison will mainly demonstrate the efficacy of our optimization pipeline.\n\nThat said, we also think the current comparison against template-based and template-free methods is fair, as CASA's 2D-3D retrieval is a crucial part of our contribution. Yet, being template-free is one core claim in many baselines; Augmenting other baselines with our proposed retrieval results in a different approach for comparison. \n\n----------------------------------------------------------------------------------\n\n### Root initialization\n\nFor real-world data or synthetic settings without GT camera poses, we initialize the root transform for each frame by minimizing the rendering mask loss at a coarse level while treating the rest as rigid. A diverse set of random initial root rigid transforms are used for repeated optimization and the root transformation at the lowest mask loss is selected. There are indeed local optimal as described by the reviewer (180 flips), but our multi-init procedure helps get rid of most cases, as correct alignment still offers lower loss. 
For synthetic data with given camera poses, we directly use them as the root transform initialization (the same is done for all competing baselines for a fair comparison). \n\nAfter the initialization, the root transformation is jointly optimized with the other parameters (bone joint angles, displacement field, etc.) by minimizing the proposed energy function. ",
" ### Qualitative Comparison\n\t\nWe contacted the author of LASR/ViSER regarding reproducing their results. \n* **LASR**: We verified that we fully reproduced LASR results reported in their table through the email discussions. Our comparison is also conducted fairly. \n* **ViSER**: Our reported qualitative and quantitive results in the submission are worse than ViSER results for two primary reasons: 1) the master Github repo in ViSER had a bug by NeurIPS 2022 deadline, which was fixed in June; We updated ViSER results with the latest master repo. 2) ViSER reported the qualitative shapes used a larger smooth hyper-parameter (0.25) for better visual quality. This setting differs from the config used in the paper's quantitative evaluation. For consistency and fair comparison, we reported the qualitative and quantitative results using the same hyper-parameters. The authors have verified our reported qualitative results in the revised submission on BADJA dataset. \n\n--------------------------------------------------\n\n### Clarification of train/val split.\n\nThe inverse graphics stage is training-free test-time-optimization. Hence as the reviewer points out, there is no concern regarding cross-instance generalization. However, our retrieval stage currently relies on retrieval from an existing asset bank as a template. To demonstrate category-agnostic reconstruction ability, it is crucial to ensure the assets do not overlap with testing animals at both instance and category levels. In other words, testing samples should come from unseen categories/instances or even include unseen topologies. In addition, the optimization stage also has several hyper-parameters. Our dataset split also allows us to tune hyper-parameters in the training set. The testing dataset is only used for evaluation purposes. Hence, train/test split is necessary for our dataset/benchmark. \n\n--------------------------------------------------\n\n### Necessity of CLIP\n\nCLIP has been trained with significantly richer semantic information than ImageNet pre-trained models. Such information is encoded in a rich text corpus and allows us to capture complicated relationships between images from animals. In practice, we found it is crucial for retrieval performance. Specifically, compared against models pre-trained on ImageNet, we found CLIP retrievals provide a better skeletal shape (see **supp Table 3** for a comparison). The results show that CLIP is the preferred retrieval backbone than ImageNet pre-trained models (mIOU: CLIP 0.217, ImageNet pre-trained 0.111).The retrieved animal also agrees with humans' common sense.\n\n--------------------------------------------------\n\n### CLIP features computation\n\nWe provided details of CLIP feature computation and retrieval in **supp Sec.H**. \n\n--------------------------------------------------\n\n### Inverse graphics\n\nTo our limited knowledge, we are the first work to incorporate skeletal and stretchable bone parameterization in generic articulated shape reconstruction. The topology of the skeleton tree is difficult to directly recover using inverse graphics, especially when the shape is jointly optimized. We innovatively use template retrieval and bone-length optimization to overcome this challenge, making it possible to optimize shape and skeleton jointly.",
" ### \"All quadruped animals in paper\"\n\t\nWe respectfully disagree that “they are all quadruped”: please see supp video (02:07) and supp Figure 1 for ostrich; supp video (02:10) for chimpanzee; supp Figure 2 and supp video (01:57) for seal.\n\n--------------------------------------------------\n\n### Novelty of template retrieval\n\nWe respectfully disagree that our retrieval is “*more-or-less an engineering work*.” We will discuss the contributions of our proposed retrieval based on technical novelty and the impacts on 4D reconstruction. \n\n* **Novelty**: 2D-3D articulated retrieval is underexplored. Our CLIP-based approach to such a problem, to our limited knowledge, has not been explored before. \n* **Impact**: Prevailing template-based methods are limited to one of a few templates, often in a category-specific manner [a, b]. This restricts the method from achieving comparable results in the category-agnostic setting. Our approach closes this gap by enabling us to initialize from various fine-grained templates for articulated reconstruction. As suggested by Reviewer ziQq, and Reviewer 4EwH, it allows the reconstruction method to leverage 3D priors and offer stronger generalization ability and improved reconstruction quality. \n\n[a] Kulkarni, Nilesh, et al. \"Articulation-aware canonical surface mapping.\" CVPR 2020\n\n[b] Kokkinos, Filippos, and Iasonas Kokkinos. \"Learning monocular 3D reconstruction of articulated categories from motion.\" CVPR 2021\n\n--------------------------------------------------\n\n### Contributions of Optimization\n\nWe want to stress the key difference between our optimization vs. previous works lies in the parametric models. \n* We adopt the skeletal shape model. It is critical since this model induces bone constraints for nonrigid motion and allows us to conduct realistic re-animation using the reconstructed shape. As a comparison, most previous category-agnostic 4D reconstruction (e.g., LASR, VISER, BANMo) uses a mixture of rigid transforms as their nonrigid kinematic model. During inference, they directly optimize rigid body transformation without considering the bone constraints, which could result in less appealing nonrigid motion. \n* We also leverage a stretchable bones parameterization and a neural-parametric vertex deformation model, offering more realistic and smooth shape deformation. \n\nAs noted in the paper, although the two techniques are used in graphics for animation simulation and modeling, applying them to category-agnostic articulated objects is highly non-trivial and innovative to our community. Our superior results also justify the importance of such technical choice. \n\n--------------------------------------------------\n\n### Optimization details\n\n* **Optimizer**: We optimize all the parameters jointly using Adam. No alternative optimization is used, thanks to a good initialization from the retrieval stage. \n* **Skinning weight**: Compared to fixing skinning weight, updating skinning weights would allow more flexibility when the initial skinning weight is of low quality. Since our retrieval strategy provides high quality skinning weight initialization in most cases, the metrics would not show significant differences (mIOU: optimizing skinning 0.435, fix skinning 0.433). \n* **Initialization**: Our shape/rigging/bone parameters are initialized with all the corresponding parameters from the template. Joint angles are initialized from a T-pose. 
Global camera poses (at object-centric-coordinate) are encoded as the root node transformation. The global pose is initialized by minimizing the mask loss at T-pose.\n\n--------------------------------------------------\n\n### Contributions Clarification\n\nThe key contributions of this paper are 1. we present **a diverse skeletal shape asset** (our dataset); 2. we revive template-based reconstruction using **a simple, effective and generalizable 2D-3D retrieval algorithm** based on a pretrained CLIP model; and 3. a novel **skeletal shape optimization** procedure. We show that through the three key components, we could push the articulated shape reconstruction quality to another level. We also introduced a new realistic simulation-based benchmark in the hope of bringing more vibrance to the community.",
" This paper focuses on the problem of 3D reconstruction of animals from a monocular video. The proposed method is claimed to be Category-agnostic, which means they can deal with different categories such as dogs, horses, but they are all quadruped. The major contributions is the category-agnostic reconstruction which is realized by first retrive a 3D template model from an asset. Then the template model is deformed and optimized to fit to the input video. Strengths:\nFirst, having a good initial template to start with the following up optimization will certainly reduce the deformation space and make the optimization to be more feasible. They have demonstrated better visualization results by adopting this retrived template model. The idea is pretty easy to understand and the paper is organized and written well.\n\nWeakness:\n1) Obtaining this template model by retrival using the pre-trained CLIP model is more or less an engineering work. I don't think this could be claimed as a technical contribution. The performance is improved mainly due to this retrived template, while the compared approaches start with some general shape for example, a sphere.\n2) The optimization pipeline or the loss function is not novel. It is pretty much a standard optimization, I'm a bit confused what the authors want to claim on this optimization problem. In addition, the skeletal representation of articulated models, they are just standard way of dealing with those kinds of animals. I'm not sure what the authors want to emphasize on this.\n3) Missing important comparison: -- BANMo: Building Animatable 3D Neural Models from Many Casual Videos. \n I have some questions or confuse on the some technical details of the proposed method, \n1) How to optimize both the bone length, joints angles, skinning weights together? Do we need to optimize one while fixing others? Furthermore, what is the improvement of optimizing skinning weights, what if the skinning weights are not optimized, but using those from retrived template?\n\n2) How is the optimization initialzed? How can we achieve the initial fitting to images or videos, including initial global pose?\n\nThe authors might also want to include the response to the issues I raised in Weakness above. The major limitation is lack of technical contribution. Using the pre-trained CLIP model as features to retrive the 3D template is good, but I'm not sure this could be claimed as technical novelty. And the following optimization is also pretty standard. The authors should point out what are the major technical contributions that really stands out.",
" This paper presents a category-agnostic character animation reconstrucion from a casual video input. To alliviate the ill-posed nature of the problem, this work first queries the closest template model from the database using CLIP features. As the following inverse graphics optimization stage can warm start with the retrieved template, the proposed approach produces better results both qualitatively and quantitatively over prior methods. Additionally, the paper introduces a large amount of synthetic dataset for qualitative evlauation of predicted attributes with diverse categories. This paper has the following strengths:\n- This work presents an interesting use of CLIP feature. To retrieve the closest animal template from the database, the CLIP features are extracted from both input video and synthetically rendered database. This type of semantic-based retrieval is more general and flexible than hand-crafted descriptors for the retreival task. \n- The paper presents an impressive qualitative results even from videos in the wild. \n- The proposed dataset would be a great resource for community for both training and evaluation to assess the accuracy of predicted attributes from diverse categories. \n\nThere are several weaknesses of this work:\n- The qualitative results of baseline methods (LASR, ViSER) are substantially worse than what’s presented in their original papers. I’m wondering if the results are cherrypicked or their code was not properly run. I would highly recommend reaching out to the authors of these papers to confirm that these results are expected. Please answer to this in the rebuttal to prove that the experiments are credible.\n- L58-60: I’m very confused about the notion of generalization here. The paper also discuss train/test split in L258. However, as far as I understand, this work presents an instance-specific training and there is no cross-instance generalization. Please clarify.\n\nOverall the paper presents an interesting approach and the results are impressive. However, the aforementioned concerns prevent me from giving a higher score at this stage. - Please answer to the comments above.\n- It is not clear if CLIP is necessary for this retrieval task as text is not involved at all. It would be great if the use of various feature backbone including CLIP and ResNet pretrained with ImageNet is evaluated. \n- From the exposition, it is not clear how to compute CLIP feature from videos (CLIP only provides embedding per frame). Please elaborate.\n\nOther comments:\n- L43: The neural inverse graphics framework in general has been extensively used in prior works, and not novel. Please state more concretely what is novel over the prior works in terms of the optimization.\n- L231: Please add citation to Cobra-tools. The limitation is discussed, but its societal impact is not.",
" The authors propose CASA, a method for recovering 3D shape and skeletal movements from monocular videos. Given a video, it first retrieves a shape and a skeleton from a library with 200+ animals, then optimizes both shape and articulations with differentiable rendering. Results are shown synthetic real datasets, with applications in re-posing. A new dataset PlanetZoo with 200+ animated animals is introduced. **Strengths**\n- The proposed PlanetZoo dataset is interesting, as I'm not aware of a dataset of similar size (>200 animals) and quality (with texture, skeleton and deformation). It has potential to be used to evaluate and benchmark the performance of animal 3D reconstruction algorithms. From that perspective, it would be helpful to highlight the features of the dataset, and compare with the existing ones (such as deforming things 4D).\n- The method is sensible. It leverages 3D shape and skeletal priors of specific categories to improve the reconstruction quality. Since the library is large, the method can be applied to a wide range of animal categories.\n- The usage of skeleton structure also helps the re-posing application and user controllability.\n\n**Weakness**\n- Quality of results. The results are not faithful to the input data even test-time optimization is used. For instance in Fig.7, the reference cow image does not contain a horn but the reconstruction has horns, possibly from the template being retrieved. This is not desirable. I'm also confused as I did not notice a term that prevents the template shape to change when there is disagreement with the image evidence during optimization. This makes it unclear whether the optimization works properly.\n- The experimental comparison can be made fairer. LASR/ViSER does not have access to a 3D shape library. Proper baselines would be methods that use a 3D shape template, such as ACSM [A], ACFM, or proving LASR a 3D template.\n- There are some missing details (see questions). The one I'm most concerned about is the initialization of root transformations, which is crucial for reducing the ambiguity between shape and deformation. For instance, if the heading direction is rotated by 180 deg at initialization (the head becomes the tail), the optimization might focus on deforming the shape without correcting the heading.\n- Ablation study is missing in the main paper. How well does retrieval perform? As it is a major contribution, a quantitative evaluation and thorough analysis is expected. For the ablation study on flexible bone model, a qualitative comparison is desired as numbers does not give much insight in this context.\n- Fig. 3 is slightly misleading as it is not clear which parameters are per-frame and which are per-video. The subtitles also conflated per-video deformation (stretch and deform) vs per-frame deformation (rigging), vertex deformation (deform) vs bone deformation (stretch). \n\n**Other related work**\n- [A] Articulation-aware canonical surface mapping. CVPR 20.\n- [B] Watch It Move: Unsupervised Discovery of 3D Joints for Re-Posing of Articulated Objects, CVPR 22. - How many categories does the dataset contain? What are the categories?\n- How are the root node transformations represented and initialized? Are they the same as LASR/ViSER?\n- During model retrieval, how to sample viewpoints when rendering the 3D model? How many viewpoints are needed? How long does it take to sampling and matching? 
Also, pairwise matching should produce SxNxT scores, where S is the number of shapes, and N is the number of renderings per shape, and T is the number of frames. Then how is T and N marginalized?\n Yes."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
5
] | [
"4hNKy6CUspJ",
"jWeQ3RGE2YW",
"nips_2022_-zYfrOl2I6O",
"f4g18gNqH6T",
"6XzKgobF_FU",
"VnZTuK6QUN",
"VFydtWFE-cJ",
"nips_2022_-zYfrOl2I6O",
"nips_2022_-zYfrOl2I6O",
"nips_2022_-zYfrOl2I6O"
] |
nips_2022_6H00JM-DZjU | Fair and Efficient Allocations Without Obvious Manipulations | We consider the fundamental problem of allocating a set of indivisible goods among strategic agents with additive valuation functions. It is well known that, in the absence of monetary transfers, Pareto efficient and truthful rules are dictatorial, while there is no deterministic truthful mechanism that allocates all items and achieves envy-freeness up to one item (EF1), even for the case of two agents. In this paper, we investigate the interplay of fairness and efficiency under a relaxation of truthfulness called non-obvious manipulability (NOM), recently proposed by~\citep{troyan2020obvious}. We show that this relaxation allows us to bypass the aforementioned negative results in a very strong sense. Specifically, we prove that there are deterministic and EF1 algorithms that are not obviously manipulable, and the algorithm that maximizes utilitarian social welfare (the sum of agents' utilities), which is Pareto efficient but not dictatorial, is not obviously manipulable for $n \geq 3$ agents (but obviously manipulable for $n=2$ agents). At the same time, maximizing the egalitarian social welfare (the minimum of agents' utilities) or the Nash social welfare (the product of agents' utilities) is obviously manipulable for any number of agents and items. Our main result is an approximation preserving black-box reduction from the problem of designing EF1 and NOM mechanisms to the problem of designing EF1 algorithms. En route, we prove an interesting structural result about EF1 allocations, as well as new ``best-of-both-worlds'' results (for the problem without incentives), that might be of independent interest. | Accept | Reviewers agreed that this paper explored a natural and interesting strategic aspect of fair division (non-obvious manipulability). This helped escape classical impossibility results in fair division. Minor concerns were raised about the practical significance of NOM, but overall the sentiment was quite positive. | train | [
"jIwMat57jNj",
"CMjz54AGT0v",
"Rwc40XjGwbr",
"CMuiwB9X2Pn",
"wHBnJSBp5p",
"mnwgYpaTI7J3",
"RROxOv3mdU9O",
"SUQHYw3zJ9p",
"yV3xyunBi8O",
"t6QsAxE3RCK",
"zubo2tQnpqG",
"u9xZs5YDhA",
"jzTRXWlKSS"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We will certainly incorporate this discussion in the final version. We hope you will reconsider your score in light of the response. Please let us know if you have any further questions.",
" Thanks for the additional discussion of these issues. I think bringing discussion along these lines into appropriate places in the paper will strengthen it.",
" Thank you for your comment. Some of our general comments apply to the specific setting studied here, but let us clarify more, as well as expand on some of these points.\n\n- “While incorporating...a real limitation.”\n\nFirst, regarding your specific question about whether mechanisms that satisfy NOM have manipulations that might seem obvious, but don’t fit the definition, the answer is quite subjective and context specific. In our setting, when participating in Round Robin (RR), if I *know* that my favorite item is ranked last by everyone else, it seems obvious that I shouldn’t select it first, but I should instead pick my second favorite item. This is not an obvious manipulation, according to the definition of obvious, since it requires detailed knowledge of others’ preferences. We do not have data that confirms or refutes whether this occurs in practice, and without a formal model about what is and what is not obvious/reasonable it’s hard to provide anything but a subjective opinion. Our subjective opinion is that, when agents know a bit about each other, and when such a situation occurs (my favorite is ranked low for others), this manipulation is very reasonable, and the designer should worry about such things. Our subjective opinion is also that when participating in a second-price auction everyone should report the truth, but the practical evidence says otherwise (see [Li17]).\n\nConnecting the previous point to the reasonableness of NOM in our setting (in theory and practice), lack of knowledge about others’ preferences in practice, the fact that makes the above deviation in RR non-obvious, and one of the core reasons of why studying NOM in our setting is worthwhile (see our original response), is quite literally one of the two justifications given in [Caragiannis et al. 2019] for why manipulations (in MNW) are not a major concern in Spliddit, a popular platform for allocating indivisible goods (i.e. the same setting as here). This also serves as an example of why we need these formal models: MNW *can* be manipulated in obvious ways (according to the given definition of obvious) contradicting the intuition/informal argument of Caragiannis et al.\n\nInterestingly, the other justification for why Caragiannis et al. do not address incentives is that truthfulness rules out reasonable algorithms in this setting (that is, implicitly the authors are saying “since we can’t get truthfulness, we have to settle for no guarantees on manipulations”); the lack of more nuanced guarantees between “no manipulation is possible, ever” (i.e. truthfulness) and “any manipulation could be an issue” (non-truthfulness) is another core reason of why studying NOM in our setting is worthwhile.\n\n- “Regarding...in practice.”\n\nOne might want to avoid using RR in practical settings because, even though it is simple, it is very inefficient: it does not even guarantee a constant approximation to *Pareto* efficiency (so, let alone concrete objectives, like sum/product of utilities): consider the case where agent 1 wants all items equally at a value of 1 (except the first item with a value of 1+epsilon) and agent 2 wants the first item for a large value and all the other items at epsilon. RR would give the same number of items to each agent, with the first item going to agent 1. Agent 2 would be happy to trade all her items for the first item, vastly improving her utility, and also doubling the utility of agent 1. 
A similar example for more agents gives a super constant improvement to everyone.\n\nA closely related and very reasonable question is why should we not use MNW, which is EF1 + PO but not NOM, if we don’t care about NOM? (Note that MNW is NP-hard to compute, but it is arguably a simpler EF1 + PO algorithm to explain compared to polytime algorithms with the same guarantees) Indeed, if one is truly not worried about *any* manipulation, there is no reason to use our reduction. To reuse the argument of Caragiannis et al., one might not worry about manipulations in MNW because agents don’t know each others’ preferences. However, as we show in this paper, this is not correct: even if participants know nothing about each other, there are (formally speaking obvious) deviations. Roughly speaking, if there are n agents and n items, if I only like two items, I should *obviously* misreport and declare a positive value for only my favorite: MNW wants to give positive utility to me, to avoid getting an objective of zero, so I’m forcing it to give me my favorite item. Our reduction says that adding three lines of code provably protects against such deviations. We don’t see this as adding complexity or majorly impactful in terms of explainability, since the current version of MNW employed in practice also doesn’t simply maximize the product of utilities, but first handles corner cases (namely, if the optimal product is zero, it first maximizes the number of agents that get positive utility, followed by MNW on the chosen agents).\n",
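To make the Round Robin example above concrete, here is a small self-contained simulation of the two-agent instance described in the response; the specific numbers (epsilon, the large value V, and the number of items) are arbitrary illustrative choices.

```python
# Agent 1 values every item at 1 (item 0 at 1 + eps); agent 2 values item 0 at V
# and every other item at eps. Round Robin splits the items, then trading all of
# agent 2's bundle for item 0 makes both agents strictly better off.
eps, V, m = 0.01, 100.0, 10
values = {1: [1 + eps] + [1.0] * (m - 1),
          2: [V] + [eps] * (m - 1)}

remaining, bundles, turn = set(range(m)), {1: [], 2: []}, 1
while remaining:
    pick = max(remaining, key=lambda g: values[turn][g])  # take favorite remaining item
    bundles[turn].append(pick)
    remaining.remove(pick)
    turn = 3 - turn                                       # alternate between agents 1 and 2

util = {i: sum(values[i][g] for g in bundles[i]) for i in (1, 2)}
util_after_trade = {1: sum(values[1]) - values[1][0],     # agent 1 gets everything but item 0
                    2: values[2][0]}                      # agent 2 gets item 0
print(util, util_after_trade)  # agent 1's utility roughly doubles; agent 2's explodes
```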
" While incorporating aspects of the overall and specific response into the paper would be beneficial, I still feel that neither addresses the core substance of the relevant concerns. On the reasonableness of NOM, the case presented is generic. But nothing in the response or the paper seems to address whether they are reasonable in this specific setting. Do these mechanisms that satisfy NOM have manipulations that, while not technically satisfying the definition of obvious, might be called such by a lay reader? The response details how Troyan and Morrill argue this in a different setting, and not at least attempting to make such a case here seems a real limitation.\n\nRegarding the reduction, suppose I do not care about NOM. Are there any other reasons to prefer it to Round Robin? Or tying this back to the former can you identify a natural class of examples where NOM provides better results than a manipulated round robin? The mechanism seems substantially more complex to interact with and explain to users (both because of the more complex underlying algorithm and that added by the reduction) than round robin, so I'm really looking for this sort of justification for whether it has benefits in practice. ",
" Thank you for the thoughtful review and question. Please see our response to all reviewers for your comment about the weakness of NOM as a definition.\n\n- \"Can you make a case that the stronger theoretical properties yielded by the reduction lead to meaningfully better results in practice.\"\n\nSee our response to all reviewers for more justification on why NOM is interesting/meaningful.\nOne way our results, and specifically our reduction can inform practice is the following, concrete message. By adding a few lines of code (implementing the first 3 steps of the reduction; Cases I, II and III in lines 320-329) to the implementation of any EF1 + PO algorithm (e.g. MNW, which is used by the popular website Spliddit) one can *provably* protect against certain deviations (or, equivalently, strategic but not perfectly rational and all-knowing agents).",
" Thank you for the thoughtful review and question.\n\n- “Can you comment on the jump from 2 to 3 agents in both Theorems 2&3 and in the core reduction and result? Does NOS retain relevance with three agents having overlapping preferences or is this a case where it is capturing”\n\nLet us clarify. Regarding the jump from 2 to 3 in Theorems 2 & 3 (utilitarian welfare), the issue is tie-breaking. With 2 agents, a mechanism will end up tie-breaking in favor of one of the two, giving this winner an obvious deviation in certain scenarios. Fortunately, this corner case happens to be the only obstacle to achieving NOM. And, with 3 or more agents, this can be avoided by setting up a cyclical tie-breaking rule (1 loses to 2, 2 loses to 3, … n loses to 1) that avoids the unique, consistent winner (and therefore, for all agents, in the worst-case the deviation won’t work because they’ll be faced with an agent that beats them).\nThe reduction works for any number of agents (also see our response to reviewer iRAr for an alternative presentation of the reduction). In the case of 2 agents, it would have an especially simple form, where we simply check if allocating the items in the intersection (if any) to the agent with the largest value for these items is EF1, and if not run the black-box. Thank you for this interesting observation.\n",
" Thank you for the thoughtful review and questions. Please see our response to all reviewers for your comment about the weakness of NOM as a definition.\n\n- “Lines 249-250 grabbed my attention, since my intuition is that manipulating the utilitarian SW-maximizing mechanism is (intuitively) \"obvious\" when valuations are not required to be normalized: I should report huge valuations for all goods. Is there intuition for why we shouldn't think of this as an \"obvious\" manipulation?”\n\nIndeed, it would seem that utilitarian welfare maximization is obviously manipulable since overreporting definitely dominates telling the truth. However, in the worst case, no matter how much you overreport your values, it is not enough, and you won’t win a single item. And, in the best case, reporting the truth (or anything positive really) is enough to get you all items you want. This phenomenon is perhaps akin to a first price auction, where bidding your value is definitely a bad idea, but it is not clear how much lower than your value you should bid (and in the worst case, for all $\\epsilon$, underbidding by $\\epsilon$ was too low). \n\n- “It would also help me if you could lay out the formal relationship between NOM and obvious strategyproofness of Li17, if there exists one. Lines 38-40 hint at such a relationship but leave it ambiguous.”\n\n[Li17] defines what it means for a strategy to be obvious. He then uses this definition to define what it means for a mechanism to be obviously truthful (if it has an equilibrium in obviously dominant strategies), a requirement more strict than truthfulness (equilibrium in dominant strategies). [Li17] points out that even though many mechanisms are truthful, in practice it is not always easy for real people to figure out their dominant strategy. One of the interesting properties of Li’s definition is that it manages to formally separate truthful mechanisms in a way that is consistent with practical evidence, e.g. an ascending auction and a second price auction (empirically, it is much easier for real people to figure out how to play in an ascending auction vs a second price auction). \nHere, we use exactly the same definition of “obvious” for a strategy. And, similarly to [Li17], our paper (following [Troyan and Morrill, 2020]) aims to separate non-truthful mechanisms in terms of how easy it is to find a profitable deviation. We aim to identify and design mechanisms that might be non-truthful, but finding a profitable deviation is not obvious.\n\n- \"Can you write down the mechanism that is obtained from applying the reduction (Mechanism 1) to MNW? Even just for 3 agents and 4 items?\"\n\nA perhaps simpler way to look at the reduction is the following:\n1) First, check if all agents want distinct items. If so, we are done.\n2) For all $i \\in [n]$, check if removing $i$ makes all remaining agents want distinct items. If $i$ is the unique such agent, temporarily assign everyone else the items they want. If $i$ is happy (in the EF1 sense) to be allocated all unclaimed items, we are done.\n3) If there are exactly two agents, $i$ and $j$, whose desired sets overlap (but everyone else wants distinct stuff), temporarily assign everyone else the items they want. Give $i$ or $j$ the items they want, depending on who wants the items in the intersection more. Give the remaining agent everything that’s left. 
If this is EF1 we are done.\n4) If all previous steps failed, run the black box algorithm (e.g., MNW).\n\nOverall, we believe it is easier to think of the reduction as taking care of some extreme cases in an engineered way (to guarantee NOM), followed by calling the black box if the input is not in the extreme cases. In our view, this simplicity (in terms of the coding overhead) is a feature: we can very easily take an implementation of an EF1 + PO algorithm and turn it into an EF1 + PO + NOM mechanism.\n",
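A schematic, runnable re-statement of the four steps above is sketched below. The EF1 check and the black-box algorithm are passed in as callables, and the handling of unclaimed items in case 1 is a simplifying assumption; see Mechanism 1 in the paper for the precise version.

```python
# Illustrative sketch of the reduction: values[i][g] is agent i's reported value for
# item g; is_ef1(alloc, values) and black_box(values) are supplied by the caller.
from itertools import combinations

def reduction(values, items, black_box, is_ef1):
    n = len(values)
    desired = [{g for g in items if values[i][g] > 0} for i in range(n)]

    def disjoint(agents):
        return all(not (desired[a] & desired[b]) for a, b in combinations(agents, 2))

    if disjoint(range(n)):                                     # step 1: all disjoint
        alloc = [set(d) for d in desired]
        alloc[0] |= set(items) - set().union(*alloc)           # leftovers: an assumption
        return alloc

    culprits = [i for i in range(n) if disjoint([a for a in range(n) if a != i])]
    if len(culprits) == 1:                                     # step 2: removing i resolves all
        i = culprits[0]
        alloc = [set(d) for d in desired]
        alloc[i] = set(items) - set().union(*(desired[a] for a in range(n) if a != i))
        if is_ef1(alloc, values):
            return alloc

    overlapping = [(a, b) for a, b in combinations(range(n), 2) if desired[a] & desired[b]]
    if len(overlapping) == 1:                                  # step 3: one overlapping pair
        i, j = overlapping[0]
        inter = desired[i] & desired[j]
        w, l = ((i, j) if sum(values[i][g] for g in inter)
                          >= sum(values[j][g] for g in inter) else (j, i))
        alloc = [set(d) for d in desired]
        alloc[w] = set(desired[w])   # the agent who values the intersection more keeps it
        alloc[l] = set(items) - set().union(*(alloc[a] for a in range(n) if a != l))
        if is_ef1(alloc, values):
            return alloc

    return black_box(values)                                   # step 4: fall back to black box
```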
" Thank you for the thoughtful review and questions. Please see our response to all reviewers for your comment about the weakness of NOM as a definition.\n\n- “For the reduction part, if all the valuations are strictly positive, the reduction actually does not need to do any change. I am not sure if this can be a good warm-up to show at the start of this section so that the readers can understand why we need to design the reduction like this.”\n\nThis is indeed the case for the “mechanics” (i.e. the code) of the reduction. However, for the NOM guarantee to go through, an agent must take into account the possibility of others reporting zero values when considering her best/worst-case outcomes. Specifically, we use the possibility of zero values in two places: (a) in the proof of Thm 6 (and specifically in lines 413-414), and (b) in the proof of Lemma 3 (lines 795-836). So, even if both the true and reported valuations are strictly positive, the reduction might be simpler, but the proof breaks down. For this reason we felt like this alternative presentation (presenting the case of the strictly positive values first) might mislead the reader into thinking that “if the valuation space is such that all values are always positive, then every EF1 + PO algorithm is also NOM”, which is not what we prove here.\n\n- “The word “truthfulness” is used throughout the paper except related work, in which strategyproofness is sometimes used. I guess the authors want to be consistent with the referred papers, but I am not sure if the readers will get confused.”\n\nWe’d be happy to update all instances of “strategyproofness” with “truthfulness” (or add a footnote to explain that both terms refer to the same property).\n\n- “Have you ever defined what “PO” is short for?”\n\nPO stands for “Pareto optimal” or “Pareto efficient” (see line 164 in page 4). We will clarify this in other places as well, so it’s easier to track down for the reader.\n",
" We would like to thank all reviewers for their constructive feedback. We will incorporate the valuable suggestions from all reviewers in the final version of this paper.\n\nReviewers Hf6D, iRAr and FBTg comment that NOM feels like a weak requirement and that it should be further motivated. We would like to first note that in the study of the allocation of indivisible items, a dominant research thread in fair division, most works point out that requiring truthfulness is very demanding (e.g., only dictatorships satisfy truthfulness + efficiency, while no mechanism is truthful and always EF1), and proceed to completely ignore the possibility of misreports. A series of recent papers that don’t fall in this category require instead a domain restriction (e.g. binary values). We view our work not as advocating NOM as the ultimate guarantee one should aim for, but as initiating the exploration of formal guarantees between “agents are always honest” and “agents are perfectly rational and all-knowing expected utility maximizers.” So, NOM for us is a relaxation of truthfulness as much as it is a strengthening of “absolute honesty.” \n\nRegarding general motivation for NOM, NOM protects against agents that consider the best and worst case outcomes under different reports. This is a much more realistic assumption compared to truthfulness, which requires an agent to argue about *all* possible scenarios (which is clearly impossible for a real person to do). And, even though a real person might be more sophisticated than simply considering these two extremes, NOM does manage to separate non-truthful mechanisms in terms of how manipulable they are in a way consistent with evidence from practice. For example, the Boston mechanism (which has been observed to be manipulable in practice) is also obviously manipulable, while the deferred acceptance algorithm (which is widely believed to be fairly robust against most manipulations, since manipulating it requires a detailed understanding of others’ preferences) does satisfy NOM; see [Troyan and Morrill, 2020].\n\nA technical benefit of arguing only about the min and max outcome is that conclusions are not tied to any distributional assumptions on preferences. Finally, on another technical note, even though NOM sounds weak, as we show, it is not so weak that NOM+X is always possible (e.g. NOM + MNW and NOM + egalitarian welfare are not possible), so there is a clear separation between “no incentives,” NOM, and standard truthfulness.\n\nBelow we respond to each reviewer individually.\n",
" Truthfulness is an important issue in the field of indivisible resource allocation. However, it is widely known that fairness and efficiency are usually not compatible with truthfulness. In this work, the authors studied a relaxed notion of truthfulness, namely, non-obvious manipulability (NOM). Fortunately, under the relaxed notion, the negative results do not hold anymore. \n\nOriginally, truthfulness means it is each agent’s dominant strategy to report the true values no matter what values will be reported by the others. The related notion NOM only requires that reporting the true values yields a (weakly) higher utility than lying in either the best and worst-case scenarios. \n\nThe fairness notion concerned in this work is EF1 (envy-free up to one good), a popular relaxation of EF (envy-freeness). The efficiency notions concerned include utilitarian welfare maximization, egalitarian welfare maximization, and Nash welfare maximization. \n\nFor fairness, under the new notion, the authors first show that the Round-Robin is actually NOM, which significantly separates truthfulness and NOM. Therefore, NOM and EF1 are compatible. Actually, using the algorithm by Aziz, WINE 2020, we can have a stronger result by achieving ex ante EF, ex post EF1 and NOM simultaneously. \n\nFor efficiency, (a little bit unexpected to me since the requirement of NOM is weak to me), the authors proved that for any number of agents, both egalitarian and Nash welfare maximizations are not compatible with NOM. For utilitarian welfare maximization, however, although it is still not compatible with NOM with two agents, they are compatible with more than two agents. \n\nFinally, the authors investigated the compatibility among NOM, EF1, and Pareto efficiency. To answer this question, the authors provided a black box reduction from any algorithm that outputs (clean and non-wasteful) EF1 allocations to a new mechanism that not only ensures EF1 but is also NOM. Moreover, the reduction preserves the property of Pareto efficiency, which implies that NOM, EF1, and Pareto efficiency can be satisfied together, via existing results in the literature.\n Strengths\n\nI agree with the authors that NOM might be a better notion for non-manipulability since truthfulness is hard to achieve together with fairness and efficiency. Thus, I think this work may be a good initiating work. \n\nThe authors also provide a relatively complete picture for NOM. \n\n\nWeaknesses\nThe requirement of NOM is still a bit weak to me (although it is still not compatible with some efficiency criteria). It will be better if the authors can justify why only the best and the worst cases are particularly interesting. \n\nThe proofs are not technically hard.\n\nThe writing in general is good but can be further improved. \n\n\nTypos:\n\nLine 130 M should be non-italic in \\Pi_n(M).\nLine 214 l does not dependent of her report\nLine 273 argmin should be a function\nLine 274 also report her\nLine 275 she get\nLine 277 I do not see why Nash social welfare maximization is the most popular objective. \nLine 329 each of these two subsets are \nLine 351 some useful notation\nLine 417 establish Theorem 5\nLine 418 that that\n“with respect to” is sometimes abbreviated and sometimes not; better to be consistent\n For the reduction part, if all the valuations are strictly positive, the reduction actually does not need to do any change. 
I am not sure if this can be a good warm-up to show at the start of this section so that the readers can understand why we need to design the reduction like this.\n\nThe word “truthfulness” is used throughout the paper except related work, in which strategyproofness is sometimes used. I guess the authors want to be consistent with the referred papers, but I am not sure if the readers will get confused. \n\nHave you ever defined what “PO” is short for?\n N.A.",
" This paper studies the problem of non-monetary resource allocation when agents have additive valuation functions over indivisible goods. Since there exist strong impossibility results in this setting if one also imposes truthfulness, the authors consider the relaxed incentive guarantee known as non-obvious manipulability (NOM). The main result is that any algorithm that satisfies envy-freeness up to one good (EF1) can be transformed into a not-obviously manipulable and EF1 algorithm (subject to mild conditions). This main result is complemented by some other results regarding NOM in the domain of non-monetary resource allocation: the round robin and utilitarian social welfare-maximizing algorithms are NOM, but Nash and egalitarian social welfare-maximizing algorithms are not. Strengths: The idea of the paper is novel and the idea is very interesting. Escaping and refining classic impossibilities in this space is an important problem, in my opinion. The results that are obtained, especially the main result (Theorem 5), are strong and surprising.\n\nWeaknesses: I am not convinced that NOM is a good relaxation of truthfulness (for this problem, at least). See my question below.\n\nMinor:\n\nLine 99: \"efficiency\" -> \"efficient\"\nLine 184: \"an\" -> \"a\"\nLine 191: \"agent\" -> \"agents\"\nCould lines 197-202 go to the appendix?\n 1) Lines 249-250 grabbed my attention, since my intuition is that manipulating the utilitarian SW-maximizing mechanism is (intuitively) \"obvious\" when valuations are not required to be normalized: I should report huge valuations for all goods. Is there intuition for why we shouldn't think of this as an \"obvious\" manipulation? It would also help me if you could lay out the formal relationship between NOM and obvious strategyproofness of Li17, if there exists one. Lines 38-40 hint at such a relationship but leave it ambiguous.\n\n2) Can you write down the mechanism that is obtained from applying the reduction (Mechanism 1) to MNW? Even just for 3 agents and 4 items? Adequately addressed.",
" This work considers the tools available to a central designer needing to allocate goods fairly and efficiently to agents with additive valuation functions and no monetary transfers. There are no general envy-free and deterministic truthful mechanisms to do so. Recent other work in loosening truthfulness has explored what types of manipulations are likely to be taken advantage of. This work takes one such notion, “non-obvious manipulability” and asks what additional strength does relaxing truthfulness to NOM give to the central designer - notably, can she achieve a mechanism that is deterministic and envy-free up to one item? More generally, how do her options change when she relaxes truthfulness to NOM?\n\nThe paper answers in the affirmative, and in fact gives a black box reduction from designing an EF1+NOM mechanism to that of designing an EF1 algorithm. \n\nThis black box reduction hinges on handling the special cases where the demanded items are fully disjoint sets (1), fully disjoint except for the goods desired by one person (case 2), fully disjoint except the goods desired by two people (case 3), and if none of those apply, applying the algorithm directly (case 4).\n\n\n The paper comprehensively addresses the impact that relaxing truthfulness to non-obviously manipulability gives to a designer. This makes for a helpful and interesting contribution to the literature surrounding relaxed notions of truthfulness.\n\nThe nature of the black box reduction - special cases for when two or fewer agents have overlapping demand sets raises a small question of whether or not there is a more fundamental core regarding three agents that can be embedded to simplify the reduction. \n Can you comment on the jump from 2 to 3 agents in both Theorems 2&3 and in the core reduction and result? Does NOS retain relevance with three agents having overlapping preferences or is this a case where it is capturing N/A",
" This paper reconsiders the classic problem of allocating indivisible goods through the lens of non-obvious manipulability (NOM). The results show that this can bypass known impossibility results for the stronger and more standard requirement of truthfulness. In particular is possible to achieve deterministic+EF1+NOM and that this can be combined with (fractional) Pareto efficiency among other positive results. Negative results show that Nash and egalitarian welfare maximization both fail to satisfy NOM. On the positive side, the paper is clear and does a nice job revisiting a classic problem and gaining new insights about both classic mechanisms and new designs. The black-box-reduction seems technically non-trivial and from a theoretical perspective the combination of properties it achieves is an improvement over prior work.\n\nOn the negative side, the arguments in Sections 3 and 4 largely seem straightforward, so the contribution here seems somewhat limited. For Section 5, which has more of a technical contribution, it is unclear to me how important this is for practice. Other than identifying that it has stronger theoretical properties, there isn’t any evidence presented that the results of the reduction are noticeably better than Round-Robin or PS-Lottery. \n Can you make a case that the stronger theoretical properties yielded by the reduction lead to meaningfully better results in practice. NOM is a relatively recent approach. Given that the interest of the results hinges on the reasonableness of relaxing truthfulness to NOM, I was surprised there does not appear to be discussion of whether the guarantee provided by NOM and the incentives of them mechanisms that satisfy it are reasonable in this setting."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
8,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
4
] | [
"CMjz54AGT0v",
"Rwc40XjGwbr",
"CMuiwB9X2Pn",
"wHBnJSBp5p",
"jzTRXWlKSS",
"u9xZs5YDhA",
"zubo2tQnpqG",
"t6QsAxE3RCK",
"nips_2022_6H00JM-DZjU",
"nips_2022_6H00JM-DZjU",
"nips_2022_6H00JM-DZjU",
"nips_2022_6H00JM-DZjU",
"nips_2022_6H00JM-DZjU"
] |
nips_2022_xwBdjfKt7_W | SNN-RAT: Robustness-enhanced Spiking Neural Network through Regularized Adversarial Training | Spiking neural networks (SNNs) are promising to be widely deployed in real-time and safety-critical applications with the advance of neuromorphic computing. Recent work has demonstrated the insensitivity of SNNs to small random perturbations due to the discrete internal information representation. The variety of training algorithms and the involvement of the temporal dimension pose more threats to the robustness of SNNs than that of typical neural networks. We account for the vulnerability of SNNs by constructing adversaries based on different differentiable approximation techniques. By deriving a Lipschitz constant specifically for the spike representation, we first theoretically answer the question of how much adversarial invulnerability is retained in SNNs. Hence, to defend against the broad attack methods, we propose a regularized adversarial training scheme with low computational overheads. SNNs can benefit from the constraint of the perturbed spike distance's amplification and the generalization on multiple adversarial $\epsilon$-neighbourhoods. Our experiments on the image recognition benchmarks have proven that our training scheme can defend against powerful adversarial attacks crafted from strong differentiable approximations. To be specific, our approach makes the black-box attacks of the Projected Gradient Descent attack nearly ineffective. We believe that our work will facilitate the spread of SNNs for safety-critical applications and help understand the robustness of the human brain. | Accept | This paper proposes an adversarial training method for spiking neural networks. One challenge is that spiking networks are non-differentiable, so the paper develops various gradient-approximation methods and builds on previous attack methods like FGSM and PGD with approximate gradients. An additional innovation is the development of a regularization method that estimates Lipschitz constants. Estimating Lipschitz constants of spiking neural networks is another technical challenge, and the paper develops a rigorous bound using a concept they call spike distance, which yields an upper bound on the standard Lipschitz constant.
Several concerns were raised by the reviewers, including an incomplete discussion of prior work, requested clarifications of the ablation study, and comparison to prior SOTA. Overall, the authors did a good job in their rebuttal and discussion of convincing the reviewers and this meta-reviewer that the paper merits publication. This is a somewhat niche problem setting, but the paper has several theoretical and practical innovations that are interesting and suitable for publication.
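Since the meta-review refers to attacking a non-differentiable spiking network, here is a hedged sketch of a PGD attack run through a surrogate-gradient ("differentiable approximation") spiking activation. The rectangular surrogate window and the budget (eps=8/255, alpha=0.01, 7 steps, matching the settings quoted later in the discussion) are illustrative, not the paper's exact code.

```python
# PGD with a surrogate spiking gradient: the forward pass fires hard spikes, while the
# backward pass uses a rectangular window so gradients can flow back to the input.
import torch
import torch.nn.functional as F

class SpikeFn(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v):                 # v = membrane potential minus threshold
        ctx.save_for_backward(v)
        return (v >= 0).float()          # non-differentiable Heaviside spike
    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        return grad_out * (v.abs() < 0.5).float()   # rectangular surrogate gradient

def pgd_attack(model, x, y, eps=8/255, alpha=0.01, steps=7):
    # model: any classifier whose spiking layers use SpikeFn.apply as activation
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)  # project to the eps-ball
    return x_adv
```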
| test | [
"E4WEPpQBR_g",
"uATX9PK9nUA",
"qgQ_zEQsbT",
"jS1cuqYKab",
"EPPiP_R9kI2",
"IXvET_EefiO",
"G4-S-vbfvl5",
"WuUMJ_r7tN",
"H1-yxyxMkO",
"KsxmZdJmar",
"7yF83O_4Dd8",
"2Cgc6SU9_LQ",
"n2MpYT8yEc1",
"6VA-Tk5zCtX",
"mmnFAs8SgdL",
"W1xNu1uOujz",
"WvVi_GeRRc",
"u_fE3pJrGa9",
"enAt10tm_jR",
"2Muk4sH2d7",
"H3Z0zMz1LuH",
"R9UTY7uy4Av"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" After reading the rebuttal, most concerns are addressed. I will increase my score to borderline accept. The combination of adversarial training with SNN seems promising but the current version lacks theoretical contribution.",
" **Tabel R4: Layerwise Matrix Norm of Batch Normalization**\n| Performance | RAT (w/ BN) | RAT (w/o BN) | Vanilla (w/o BN) | Vanilla (w/ BN) |\n| ---------------- | ----------------- | ---------------- |:-------------------- | --------------------- |\n| FGSM (BPTR/BPTT) | 38.890/33.410 | 33.440/27.530 | 15.110/9.000 | 8.400/6.140 |\n| **Matrix Norm** | **RAT (with BN)** | **RAT (w/o BN)** | **Vanilla (w/o BN)** | **Vanilla (with BN)** |\n| Layer 1 | 1.10 | 1.34 | 5.44 | 6.27 |\n| Layer 2 | 1.07 | 1.21 | 3.80 | 3.25 |\n| Layer 3 | 1.11 | 1.22 | 3.35 | 3.44 |\n| Layer 4 | 1.96 | 2.09 | 3.06 | 4.10 |\n| Layer 5 | 2.18 | 2.21 | 4.71 | 4.81 |\n| Layer 6 | 1.43 | 1.39 | 4.36 | 4.92 |\n\nThanks for your acknowledgment of our efforts. For batch normalization, we would like to present some numeric results and insights to better understand the impact of batch normalization in the context of Lipschitz analysis. Table R4 reports the matrix norms of the weights of each layer in spiking VGG5. Here we rearrange the columns so that the columns are in descending order of robustness. We observe a phenomenon where the matrix norm of the weights at each layer increases as the robustness of the model decreases under different experimental settings, which implies that there is a correlation between the robustness of the SNN and the matrix norm of the weight (the Lipschitz constant of the linear layer).\n\nWe can try to explain the results in terms of the magnitude of the Lipschitz constant and robustness. Without using the RAT scheme, the network without BN has a similar but smaller layerwise Lipschitz constant and, therefore, more robust. When using the RAT scheme, BN can normalize the input of each layer, making regularization training in RAT more effective, resulting in a smaller Lipschitz constant and better robustness. In addition, for the direct training of deep SNN, BN can improve the clean accuracy to a certain extent.\n\nThus, from the perspective of Lipschitz constraint, the technique of BN is also necessary. We hope our explanations will help you understand and support our work.",
" Dear Authors,\n\nThanks for answering the reviewers' comments in a clear and comprehensive way.\n\nI don't have further questions.",
" Dear Reviewer BBvf,\n\nWe notice that in your initial review, your concern lie in three parts.\n\n1) The first concern is our contribution to the field, for which we provide a comparison to the SOTA related works of SNN robustness (see the section of **TO All Reviewers**) and add the literature on SNN robustness in the revised paper. \n2) The second concern is the difference from the previous work, for which we explain that the biggest differences lie in the theoretical derivations for spiking Lipschitz analysis. We would like to note that the calculation scheme of Lipschitz constant of ANNs cannot be directly applied to SNNs with discrete activation and time-series processing capability, which forms a difficulty in determining Lipschitz constant of SNNs.\n3) The third concern is about the ablation study, for which we clarify the meaning of MIX & REG and the result of our ablation study.\n\nGiven these facts and positive feedbacks from other reviewers, we sincerely hope that our feedback could settle and answer your concerns. We hope you could reconsider and improve your initial rating. Also, if you have any further questions or comments, please let us know, and we are glad to give further responses.",
" Dear Reviewer PjMN,\n\nWe notice that your concerns lie in five parts in the previous review. \n\n1) The first concern is about the ambiguous expressions, for which we have altered the expression. \n2) The second concern is the absence of experimental parameters, for which we provide the absent noise budget parameters.\n3) The third concern is about the novel features of Lipschitz, for which we explain that the novel features lie in the theoretical derivations for spiking Lipschitz analysis. We would like to note that the calculation scheme of Lipschitz constant of ANNs cannot be directly applied to SNNs with discrete activation and time-series processing capability, which forms a difficulty in determining Lipschitz constant of SNNs.\n4) The fourth concern is the comparison with the SOTA works, for which we provide a comparison to the SOTA related works of SNN robustness (see the section of **TO All Reviewers**).\n5) The fifth concern is about the experiment on event-based datasets, yet how to perform attacks on event data has not been thoroughly discussed, so we would like to extend the experiments in our future work.\n\nWe sincerely hope that our feedback could settle and answer your concerns. Also, if you have any further questions or comments, please let us know, and we are glad to give further responses.",
" Dear authors,\n\nI have read the recent response and have seen the results. I want to make one follow up point:\n\n1. It would be good if you can have insight on why with your approach the with BN model is performing better.\n\nAlso, please update the results (that are currently running), on the rebuttal response. \n\nThe authors have put a significant effort in rectifying their lacking and I am pleased with the additional set of experiments and clarifications. Given the authors **will include experiments, reasoning for all the concerns raised by me**, I tend to accept this paper now. ",
" Dear Reviewer BBvf,\n\nThanks for your thorough initial comments. We really hope to know whether our previous response has addressed your questions and concerns properly. Since it is approaching the end of author-reviewer discussion period, please let us know if you have any further comments, and we are glad to write a follow-up response. Thank you very much!",
" Dear Reviewer PjMN,\n\nThank you for the detailed feedbacks and constructive suggestions. As the discussion period will end soon, we would like to kindly ask if our previous response clarifies your concerns and if there are any further questions that we could answer to facilitate the review process. Thanks a lot for your time!",
" ## 4. To my understanding of the rebuttal, the proposed method has additional training cost, and thus is not fair to directly compare with [4,5]. It is important for the author to tone down on their claims and show a fair table and detail this in the paper. It also hints at additional inference cost, thus please clearly mention this if I am correct. Otherwise it might pose an incorrect information to the community, as both [4] and [5] tried to show inherent robustness.\n\nThanks for the feedbacks and constructive suggestions. We agree that the improvement of robustness of our proposed training scheme is at a cost. We would like to note that the cost is actually not that high. Compared to the setting of typical adversarial training that only uses BPTT differentiable approximation, we mix fast and effective BPTR to alleviate the drop in training efficiency. At the same time, we update the weights of the orthogonal regularization by sampling instead of updating all weights, as suggested in [6]. This can also help reduce the computational overhead. \n\nTo give a fair comparison of the related works, we have updated Table R1 in 'To All Reviewers' to show the additional training cost. In addition, we have added a discussion on additional training costs, which is temporally put in the appendix. We will incorporate your suggestion and move the discussion of training cost in our final version where one additional page allows us to present the comparison of the SOTA works along with the discussion of the training cost. \n\nAdditional discussion:\n*It is worth noting that although our training algorithm improves the robustness of SNNs, it comes at a cost compared to the work of Sharmin et al.[4] and Kundu et al.[5]. The cost is mainly reflected in the training time. First, our training includes time to generate adversarial noise. Adversarial learning is a common scheme to improve robustness, and generating adversarial examples using only BPTT differentiable approximation in SNN is a time-consuming operation. Our algorithm mitigates the increase in training time by mixing in a faster yet efficient BPTR approximation. In addition, the orthogonal regularization of the weights is computed every update, which also increases the training time. Solutions to reduce the time consumption of regularization include sampling fewer weights for regularizing, or reducing the number of regularization updates.*\n\n[1] Wang, H., Zhang, A., Zheng, S., Shi, X., Li, M., & Wang, Z. (2022, June). Removing Batch Normalization Boosts Adversarial Training. In International Conference on Machine Learning, 23433-23445.\n\n[2] Kundu, S., Datta, G., Pedram, M., & Beerel, P. A. (2021). Spike-thrift: Towards energy-efficient deep spiking neural networks by limiting spiking activity via attention-guided compression. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 3953-3962.\n\n[3] Wong, E., Rice, L., & Kolter, J. Z. (2019, September). Fast is better than free: Revisiting adversarial training. In International Conference on Learning Representations.\n\n[4] Andriushchenko, M., & Flammarion, N. (2020). Understanding and improving fast adversarial training. Advances in Neural Information Processing Systems, 33, 16048-16059.\n\n[5] Shafahi, A., Najibi, M., Ghiasi, M. A., Xu, Z., Dickerson, J., Studer, C., ... & Goldstein, T. (2019). Adversarial training for free!. Advances in Neural Information Processing Systems, 32.\n\n[6] Cisse, M., Bojanowski, P., Grave, E., Dauphin, Y., & Usunier, N. 
(2017, July). Parseval networks: Improving robustness to adversarial examples. In International Conference on Machine Learning, 854-863.\n\n[4] Sharmin, S., Rathi, N., Panda, P., & Roy, K. (2020, August). Inherent adversarial robustness of deep spiking neural networks: Effects of discrete input encoding and non-linear activations. In European Conference on Computer Vision, 399-414.\n\n[5] Kundu, S., Pedram, M., & Beerel, P. A. (2021). Hire-snn: Harnessing the inherent robustness of energy-efficient deep spiking neural networks by training with crafted input noise. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 5209-5218.",
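The sampled orthogonal-regularization update mentioned above (following Parseval networks [6]) can be sketched as below; the subset size and where the penalty is added in the loss are illustrative choices, not the authors' exact implementation.

```python
# Penalize ||W_S W_S^T - I|| on a random row subset S of each weight matrix, as a
# cheaper stand-in for regularizing the full Gram matrix toward the identity.
import torch

def sampled_orthogonal_penalty(weight, num_rows=32):
    w = weight.flatten(1)                                  # conv kernels -> 2D matrix
    rows = torch.randperm(w.shape[0])[: min(num_rows, w.shape[0])]
    ws = w[rows]
    gram = ws @ ws.t()
    eye = torch.eye(ws.shape[0], device=w.device, dtype=w.dtype)
    return ((gram - eye) ** 2).sum()

# usage inside a training step (beta is a regularization strength):
# loss = task_loss + beta * sum(sampled_orthogonal_penalty(m.weight)
#                               for m in model.modules()
#                               if isinstance(m, (torch.nn.Conv2d, torch.nn.Linear)))
```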
" Thank you for your continued interest and constructive comments on our work. We are \nto know that we have resolved some of your concerns. We would like to answer your remaining questions in the following.\n\n## 1. It is well researched that using BNs are apparently [1] not good to get adversarial robustness. So, I would strongly encourage the authors to provide results on models without BNs particularly when providing results on robustness. There are other alternate approaches that can handle the \"lack of BN issues\", please refer to [2]. So, a discussion on this is necessary.\n\n**Tabel R3: Effect of Batch Normalization**\n| | RAT (with BN) | Vanilla (with BN) | RAT (w/o BN) | Vanilla (w/o BN) |\n| ---------------- | ------------- | ----------------- | ------------- |:---------------- |\n| CLEAN | 82.03 | 90.170 | 75.820 | 88.880 |\n| FGSM(BPTR/BPTT) | 38.890/33.410 | 8.400/6.140 | 33.440/27.530 | 15.110/9.000 |\n| RFGSM(BPTR/BPTT) | 58.060/53.460 | 25.040/17.880 | 52.950/48.890 | 33.300/19.650 |\n| PGD(BPTR/BPTT) | 28.390/16.530 | 0.280/0.030 | 27.380/19.460 | 1.100/0.040 |\n\nThanks for pointing it out. We agree that BN is recognized to have a negative impact on model robustness in some literature. Hence, we have done further research on the effect of Batch Normalization on our proposed RAT scheme. We trained the spiking version of VGG5 on the CIFAR-10 dataset with four different settings: RAT+BN, Vanilla+BN, RAT without BN, and Vanilla without BN. The results are presented in Table R3.\n\nFrom the table, we find that for vanilla models without proposed RAT, the absence of BN helps improve the robustness. Training with RAT has promoted the robustness of models, either with or without BN. The robustness of RAT improves with BN, as BN can increase the clean accuracy. Therefore, according to Table R3, we can roughly think in terms of robustness that: RAT (with BN) ) > RAT (w/o BN) > Vanilla (w/o BN) > Vanilla (with BN).\n\nAs for SNN robustness without BN, we have added a discussion in the section of 'Conclusions and Discussions', where we pointed out the problem with BN and alternate approaches to train SNN without BN. We are glad to investigate it further in future work.\n\nAdded Contents in Conclusions and Discussions:\n*Besides, recent works have shown that SNN can achieve good results without BN~\\cite{kundu2021spike}[2]. Note that BN are included in our model, which may be harmful to the robustness~\\citep{wang2022removing}[1]. Thus valuable future research directions will include how to train robust SNNs while getting rid of the adverse effects of BN.*\n\n## 2. I understand that randomly initialized FGSM provides better results [3], however, it is not clear whether the FGSM training of the current manuscript follows this.\n\nYes, we would like to note that we follow the dedicated works of [3], [4], [5] mainly for the reason that the FGSM and RFGSM training has become a powerful baseline method. We are currently engaged in empirical proof of FGSM training. Due to the recent limitations of our computing resources, our experiments are still ongoing. We would like to provide the results in the final version.\n\n## 3. The authors should further change the claim of L208, as they dont have any empirical or theoretical evidence to : \"which may bridge the robustness of SNN to the discovery of neuroscience and is also sensitive to the change of both firing rate and temporal information.\"\n\nThanks for your suggestion. 
We have rewritten the sentence according to your advice in the revised manuscript.",
" I have updated my score to borderline accept (conditional), primarily due to the authors' apparently thorough rebuttal. However, I believe this paper requires further input from authors and necessary discussion among reviewers for consensus. \n\n",
" I thank the authors for the detailed rebuttal, and few of my concerns are well addressed. However, I have the following concerns remaining:\n\n1. It is well researched that using BNs are apparently [1] not good to get adversarial robustness. So, I would strongly encourage the authors to provide results on models without BNs particularly when providing results on robustness.\nThere are other alternate approaches that can handle the \"lack of BN issues\", please refer to [2]. So, a discussion on this is necessary.\n\n2. I understand that randomly initialized FGSM provides better results [3], however, it is not clear whether the FGSM training of the current manuscript follows this.\n\n3. The authors should further change the claim of L208, as they dont have any empirical or theoretical evidence to : \"which **may bridge** the robustness of SNN to the discovery of neuroscience and is also sensitive to the change of both firing rate and temporal information.\"\n\n4. To my understanding of the rebuttal, the proposed method has additional training cost, and thus is not fair to directly compare with [4,5]. It is important for the author to tone down on their claims and show a fair table and detail this in the paper. It also hints at additional inference cost, thus please clearly mention this if I am correct. Otherwise it might pose an incorrect information to the community, as both [4] and [5] tried to show inherent robustness. Having said these, I believe the current manuscript has value, and I will assert my final decision based on that.\n\n**Post rebuttal initial rating**: I now increase my score to 5.\n\n[1] Removing Batch Normalization Boosts Adversarial Training, ICML 2022.\n\n[2] Spike-thrift: Towards Energy-Efficient Deep Spiking Neural Networks by Limiting Spiking Activity via Attention-Guided Compression, WACV 2021.\n\n[3] Fast is better than free: Revisiting adversarial training, ICLR 2020.\n\n[4] Inherent adversarial robustness of deep spiking neural networks: Effects of discrete input encoding and non-linear activations. In European Conference on Computer Vision, 2020.\n\n[5] Hire-snn: Harnessing the inherent robustness of energy-efficient deep spiking neural networks by training with crafted input noise. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021.\n",
" We appreciate the reviewer for the advice. We are grateful that you find our paper original and exhibit advanced performance. We would like to address your concerns and answer your questions here.\n\n## 1. In Section 1: “it is necessary to improve the adversariality of SNNs”. Do you mean to improve the adversarial robustness of SNNs?\n\nYes, we mean to improve the adversarial robustness of SNNs. As the expression may cause confusion to readers, we have rewritten it in the revised paper.\n\n## 2. In Section 1: “The update amount of weights is the key to constructing adversarial attacks.” Please rewrite this sentence. The adversarial attacks should update the input intensities, rather than the weights.\n\nThanks for pointing it out. We have rewritten the sentence to *''the key to constructing SNN gradient-based attacks is back-propagation, which is the same as that of ANNs''* to avoid confusion. \n\n## 3. What is the noise budget used in the attacks for the results reported in Table 1?\n\nThe noise budget is 8/255 for FGSM and (alpha=0.01, step=7) for PGD. To clearly clarify the noise budget, we have moved the declaration to Section 2.2, which introduces gradient-based methods:\n*''Without specific instructions, we set $\\epsilon$ to $8/255$ for all methods for the purpose of testing. For iterative methods like PGD and BIM, the attack step $\\alpha=0.01$, and the step number is 7. ''*\n\n## 4. The proposed regularization method described in Section 4 is strongly based on previous works. It is recommended to explain more clearly the novel features.\n\nYou have raised an important concern. We would like to clarify that our novelty on Lipschitz is mainly reflected in the theoretical derivation and implication. The calculation scheme of Lipschitz constant of ANNs cannot be directly applied to SNNs with discrete activation and time-series processing capability, which forms difficulty in determining Lipschitz constant of SNNs. As described in Section 4, we theoretically give a bound on the Lipschitz constants for SNNs using a spike distance. The biggest differences between previous ANN work and our work lie in two points.\n1. The bound space for computing matrix norm is very different from that of ANN, which is supported by our Theorem 1; \n2. The spiking Lipschitz has an upper bound over traditional Lipschitz, which is supported by our Proposition 1. Based on these results, we adopt orthogonal regularization to control the spiking Lipschitz. \n\nWe believe our theoretical work on spiking Lipschitz will raise the focus of the community on the theoretical bounds of SNN robustness.\n\n## 5. In Section 5, the results of the proposed method have been compared only to the vanilla adversarial training. If possible, the comparison with other methods among the related works should be included.\n\nThanks for pointing it out. Please refer to the Section of To All Reviewers. We have included the comparison with SOTA models in the revised paper (please refer to the appendix). \n\n## 6. The experiments are conducted only on static data, while SNNs are commonly used also on event-based data. Therefore, it is recommended to extend the experiment set including results on event-based datasets.\n\nYou have raised an interesting concern. SNN is indeed known to be commonly used in event-based data. We would like to point out that event data consists of discrete spikes while the adversarial methods (FGSM, BIM, etc.) produce floating-point attack. 
How to perform attacks with different noise budgets on event-based data has not yet been thoroughly studied. Hence, this paper focuses more on static images, a floating-point data type, with mature attack methods. We are glad to investigate the robustness of event-based data further in future work.",
" Thank you for your detailed and insightful comments. We are delighted that you find our paper well-written and seem very effective. We would like to address your concerns and answer your questions here.\n\n## 1. This paper aims to improve the robustness of SNN but didn’t summarize previous efforts on this topic. Besides, this paper didn’t provide comparisons with other related state-of-the-art algorithms, which makes it hard to recognize the contribution and value to this field.\n\nThanks for your comments. We would like to clarify that SNN adversarial robustness is a very new research field, and there are still few works focusing on improving the adversarial robustness of SNNs. We have rewritten Section 2.1 (related work on SNN robustness) by adding the literature on SNN robustness therein. SNNs are robust due to input coding, spike communication, potential decay, etc. \n\nBesides, we have added comparisons with these related SOTA algorithms as you suggested. Please refer to the section of **To All Reviewers**. We hope this will increase your recognition of our work. \n\n## 2. Training network with Lipschitz constraints have been well exploited in ANN. Some key references are missing in this paper. The authors should illustrate the differences between their work and those Lipschitz constraints proposed on ANN.\n\nWe agree that training networks with Lipschitz constraints have been well exploited in ANN, and the robustness of ANN can be improved when combined with Lipschitz regularization. Thanks for your suggestion,we have added more related work on Lipschitz constraints in Section 2.3. \n\nWe would like to note that the calculation scheme of Lipschitz constant of ANNs cannot be directly applied to SNNs with discrete activation and time-series processing capability, which forms a difficulty in determining Lipschitz constant of SNNs. As described in Section 4, we theoretically give a bound on the Lipschitz constants for SNNs using a spike distance. The biggest differences between previous ANN work and our work lie in two points. 1. The bound space for computing matrix norm differs significantly from that of ANN, which is supported by our Theorem 1; 2. The spiking Lipschitz has an upper bound over traditional Lipschitz, which is supported by our Proposition 1. Based on these results, we adopt orthogonal regularization to control the spiking Lipschitz. \n\nThus, our main contributions are in the following:\n\n(1) We design and summarize three different gradient approximations (i.e., CBA, BPTR, BPTT) to attack the non-differentiable SNN.\n\n(2) We theoretically give a bound on the Lipschitz constants for SNNs using a spike distance. Our theoretical results show that the spiking Lipschitz differs ANN Lipschitz in the norm space, and it has an upper bound. \n\n(3) Based on our theoretical implication, we propose a regularized adversarial training scheme for SNN, which proves to be effective in our experiments. \n\nWe believe our theoretical work on spiking Lipschitz will raise the focus of the community on the theoretical bounds of SNN robustness.\n\n## 3. The setting of the ablation study is unclear. In Table 3, the definitions of MIX and REG are not provided. Besides, what is the baseline of this algorithm? Since this work is named Regularized Adversarial Training, I expect the baseline would be normal adversarial training without Lipschitz constraint. However, I couldn’t find it.\n\nThanks for your suggestions. 
The proposed RAT scheme is composed of ''a regularizer to control the spiking Lipschitz constant'' (abbreviated to REG) and ''mixed adversarial neighbourhoods for adversarial training'' (abbreviated to MIX), so our ablation study discusses the effect of these two training components. \n\nWe would like to clarify that the result of adversarial training without the Lipschitz constraint is provided in Line 3, Table 3 of the manuscript. {AT+no regularization} achieves an FGSM-attacked accuracy of 37.49%, which is higher than that of {no AT+regularization} (26.60%), but lower than that of {AT+regularization} (45.23%).
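\n\nFor concreteness, below is a minimal PyTorch sketch of the kind of soft orthogonal regularizer that REG denotes. This is our illustrative reading of ''orthogonal regularization to control the spiking Lipschitz constant'', not the paper's exact formulation, and the weight `lam` is a placeholder.

```python
import torch
import torch.nn as nn

def orthogonal_regularizer(model: nn.Module, lam: float = 1e-4) -> torch.Tensor:
    """Soft orthogonality penalty sum_W ||W W^T - I||_F^2 over weight matrices.

    Keeping each W close to orthogonal constrains its spectral norm, which in
    turn constrains the per-layer Lipschitz constant that REG aims to control.
    """
    device = next(model.parameters()).device
    reg = torch.zeros((), device=device)
    for module in model.modules():
        if isinstance(module, (nn.Linear, nn.Conv2d)):
            w = module.weight.flatten(1)          # (out, in*k*k) for convs
            gram = w @ w.t()                      # (out, out)
            eye = torch.eye(gram.size(0), device=device)
            reg = reg + ((gram - eye) ** 2).sum()
    return lam * reg

# hypothetical usage in the RAT loop:
# total_loss = adversarial_task_loss + orthogonal_regularizer(model)
```

The penalty is simply added to the adversarial training objective, which is how REG and MIX combine in the ablation above.",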
" [1] Sharmin, S., Rathi, N., Panda, P., & Roy, K. (2020). Inherent adversarial robustness of deep spiking neural networks: Effects of discrete input encoding and non-linear activations. In European Conference on Computer Vision, 399-414. \n\n[2] Kundu, S., Pedram, M., & Beerel, P. A. (2021). Hire-snn: Harnessing the inherent robustness of energy-efficient deep spiking neural networks by training with crafted input noise. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 5209-5218.\n\n[3] Fang, W., Yu, Z., Chen, Y., Huang, T., Masquelier, T., & Tian, Y. (2021). Deep residual learning in spiking neural networks. Advances in Neural Information Processing Systems, 34, 21056-21069.\n\n[4] Li, Y., Guo, Y., Zhang, S., Deng, S., Hai, Y., & Gu, S. (2021). Differentiable spike: Rethinking gradient-descent for training spiking neural networks. Advances in Neural Information Processing Systems, 23426-23439.\n\n[5] Zheng, H., Wu, Y., Deng, L., Hu, Y., & Li, G. (2021). Going deeper with directly-trained larger spiking neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, 11062-11070.\n\n[6] Kim, Y., & Panda, P. (2020). Revisiting batch normalization for training low-latency deep spiking neural networks from scratch. Frontiers in neuroscience, 1638.\n\n[7] Wong, E., Rice, L., & Kolter, J. Z. (2019). Fast is better than free: Revisiting adversarial training. In International Conference on Learning Representations.\n\n[8] Andriushchenko, M., & Flammarion, N. (2020). Understanding and improving fast adversarial training. Advances in Neural Information Processing Systems, 33, 16048-16059.\n\n[9] Shafahi, A., Najibi, M., Ghiasi, M. A., Xu, Z., Dickerson, J., Studer, C., ... & Goldstein, T. (2019). Adversarial training for free!. Advances in Neural Information Processing Systems, 32.\n\n[10] Ortiz-Jiménez, G., Modas, A., Moosavi-Dezfooli, S. M., & Frossard, P. (2021). Optimism in the face of adversity: Understanding and improving deep learning through adversarial robustness. Proceedings of the IEEE, 109(5), 635-659.\n\n[11] Lin, J., Gan, C., & Han, S. (2018). Defensive Quantization: When Efficiency Meets Robustness. In International Conference on Learning Representations.\n\n[12] Xu, Z., Shafahi, A., & Goldstein, T. (2020). Exploring Model Robustness with Adaptive Networks and Improved Adversarial Training.\n\n[13] Schwinn, L., Raab, R., & Eskofier, B. (2020). Towards rapid and robust adversarial training with one-step attacks. arXiv preprint arXiv:2002.10097.",
" ## 6. Authors mentioned they mixed and matched different FGSM attacked images for training to make the attack during training more diverse, no proof (theoretical or empirical) is provided.\n\nThanks for your valuable suggestion. We would like to note that the reason we mix the different FGSM variants are to find generalization on multiple adversarial methods, which mainly consists of three folds:\n\n(1) To improve the effect on PGD attacks. The motivation is based on some conclusions from previous works. [7] suggested that random initialized FGSM-training has similar effects as PGD-training, but with lower computational cost. [11] focused on the robustness of quantized networks and claimed that R-FGSM introduced randomness, making it less likely to cause gradient masking than FGSM. In fact, in the experiments of [11], R+FGSM training is just as effective as PGD adversarial training. Also, [12] showed that adversarial robustness against PGD attacks could be achieved with RFGSM-based training.\n\n(2) To improve the generalization of the noises. The adversarial distortion produced by BPTT and BPTR are of different nature as the formation is different (compare Eq.9 for BPTT and Eq.11 for BPTR). Besides, the mixed attack can bring about a noisy variant of FGSM. As [13] suggested, noisy FGSM moves the perturbation boundary of the FGSM attack, which leads to a larger variety of adversarial examples.\n\n(3) To improve the overall computational efficiency. BPTR is proven to be an efficient and powerful adversarial attack. The efficiency is proven by the analysis of overall testing time. BPTR is about 3x faster than BPTT in testing (Please refer to Analysis of Computational Cost in the revised appendix). In addition, BPTR has passed the gradient obfuscation test (Please refer to Analysis of Gradient Obfuscation in the revised appendix). Overall, the addition of BPTR can shorten the time consumed by backward pass without reducing the effectiveness. \n\n## 7. Analysis of additional training cost is missing. [2] provided improvement in robustness without any additional training cost.\n\nThank you for pointing this out. Our proposed training scheme included three differential approximations. For robust training, each step of the update process mainly consists of two forward and two backward passes. For testing, each step of the update has two forward passes and one backward pass. \n\nWe evaluate the time of the testing process of three approximations (CBA, BPTR, BPTT) instead. During the tests, we fix the mini-batch size to 64 and run the test on a NVIDIA 3090 GPU. The results are presented in the revised paper (Please refer to the Appendix). It turns out that the time consumption ratio of the three methods (CBA:BPTR: BPTT) is about 1:1:3. Please refer to the appendix of the revised paper.\n\nWe believe the time consuming cost by proposed training scheme will benefit from the addition of BPTR-based adversarial training, compared with the vanilla pure BPTT attack and training.",
" ## 4. During training the authors used FGSM attack variants, and during testing they got robustness against PGD variants. This raises significant question about the experimental set up.\n\nWe would like to note that our experimental setup is similar to the works in adversarial robustness, which train with some attack methods and evaluate models with the same or more powerful adversarial attacks [7][8][9][10]. The setting of FGSM-training in our paper is the same as a subfield called fast adversarial training. Here we give two examples to clarify it. [7] proposed that using a single-step attack during training is an efficient and feasible scheme to improve model robustness. Random initialized FGSM-training has similar effects as PGD-training, but with lower computational cost. Similarly, the role of FGSM-training in adversarial learning is also recognized by [8]. Generally, [7] and [8] have experimental settings of single-step attack training and multi-step attack testing. \n\n## 5. The authors should proof the robustness is real as this might easily due to gradient obfuscation, and to me it already fails the gradient obfuscation.\n\nThanks for your suggestion. In order to identify the attack effectiveness of differentiable approximations of CBA, BPTT, BPTR for SNN, we adopt the checklist mentioned in [2] to systematically analyze the gradient obfuscation of the three schemes. The analysis is mainly based on Table 1 and Table 2 in the main text. Our brief results are presented in Table R2. Since CBA has failed in Test (1), subsequent tests on CBA are not considered. CBA is also not used in our robust training scheme. Detailed analyses are presented in the appendix. \n\n**Table R2: Checklist for characteristic behaviors caused by obfuscated and masked gradients.**\n| Items to identify gradient obfuscation | CBA | BPTR | BPTT |\n| -------- | -------- |-------- |-------- |\n| (1) Single-step attack performs better compared to iterative attacks | Fail | Pass | Pass |\n| (2) Black-box attacks performs better compared to white-box attacks | NA | Pass | Pass |\n| (3) Increasing perturbation bound can’t increase attack strength | NA | Pass | Pass |\n| (4) Unbounded attacks can’t reach ∼100% success | NA | Pass | Pass |\n| (5) Adversarial example can be found through random sampling | NA | Pass | Pass |\n\n*We design and summarize three differentiable approximations, i.e. CBA, BPTT, BPTR, which can be deployed in gradient-based attacks to show the vulnerability of SNNs. The main concern of the gradient obfuscation lies in the inaccurate of updating gradients. In particular, the performance of the three differentiable approximations was checked against the five tests that can identify gradient obfuscation as done in \\cite{kundu2021hire}. Our analysis is mainly based on the quantification results in Table 1 and Table 2 in the main text. Also, this will explain the reason why we choose BPTT and BPTR in the procedure of the mixed training.*\n\n*As shown in Tab. 1, for all the trials, the performance of single-step FGSM is worse than its iterative counterpart PGD except for that of the WRN-16 experiment for CIFAR-100 (Attacked Accuracy: FGSM 37.68\\% v.s. PGD 43.87\\%). Thus, the CBA approximation has the potential not to provide powerful enough attacks.*\n\n*Hence, the rest of the analysis is about BPTT and BPTR. The results in Tables 1 and 2 certify the success of BPTT and BPTR approximation in terms of Test(1) in Table R2. 
To verify Test (2), we conduct black-box attacks on the proposed models and the vanilla ones. The black-box perturbations are weaker in Table 2, so Test (2) is satisfied. To verify Tests (3) and (4), we analyze VGG-11 on CIFAR-10 with an increasing attack bound. In Figure A1, the classification accuracy decreases as we increase $\epsilon$ and finally reaches the accuracy of random guessing. As suggested in [2], Test (5) \"can fail only if gradient-based attacks cannot provide adversarial examples for the model to misclassify\". To sum up, we found no gradient obfuscation for the BPTT and BPTR approximations, which are suitable for adversarial training and testing.*
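\n\nA minimal sketch of the Tests (3)/(4) sweep described above: increase $\epsilon$ and check that attacked accuracy falls toward chance level. Here `attack` is a hypothetical single-attack helper and the grid values are placeholders.

```python
import torch

def epsilon_sweep(model, loader, attack, eps_grid=(2, 4, 8, 16, 32, 64)):
    """Tests (3)/(4): attacked accuracy should decrease with eps and
    eventually reach chance level (10% on CIFAR-10) for large eps."""
    for eps in eps_grid:
        correct = total = 0
        for x, y in loader:
            x_adv = attack(model, x, y, eps=eps / 255)   # gradients enabled here
            with torch.no_grad():
                correct += (model(x_adv).argmax(dim=1) == y).sum().item()
            total += y.numel()
        print(f"eps={eps}/255  attacked accuracy={correct / total:.3f}")
```

If accuracy plateaued well above chance even for unbounded attacks, that would signal gradient obfuscation; Figure A1 shows the expected decay for BPTT and BPTR.",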
" Thank you for your detailed and insightful comments. We would like to address your concerns and answer your questions here.\n\n## 1. The authors missed a very important point, as earlier literature has already showed the effectiveness of robustness when the input is rate coded [1] and significantly less robustness when input is direct coded [2]. This key aspect is completely overlooked in the manuscript.\n\nThanks for your suggestion. [2] considered rate coding and proposed to train SNNs with crafted-noise under the observation that SNNs are less robust using direct coding. This also inspires us to propose a robust learning scheme. We have revised the paper according to your suggestion and adjusted the motivation and related work. Our work is actually based on the insights of [2] and direct coding, which is complementary to the work of [1][2]. We focus on discussing the robustness of inter-layer spike communication using the theoretical derivation of the spiking Lipschitz coefficient.\n\nRevised content:\n\n*The operation mechanism and structure of SNN are similar to those of the biological brain, and studying its response to perturbation can help us understand how the human brain works. SNN is recognized as a new potential candidate with adversarial robustness due to its input coding and neuronal dynamics \\citep{perez2021neural, leontev2021robustness}. **Among the coding schemes commonly used in SNN, constant input coding is considered to be more susceptible to disturbances than others, like Poisson coding~\\citep{sharmin2020inherent}.** Therefore, \\cite{kundu2021hire} suggested that careful training is required for constant input coding. In this setting, SNNs are now facing more challenges than typical ANNs. Because the key to constructing SNN gradient-based attacks is back-propagation, which is the same as that of ANNs. However, compared with ANNs, SNNs can learn through various gradient approximations. Therefore, combining various differentiable approximations and attack methods will pose a more severe threat to SNN~\\citep{liang2021exploring}.*\n\n*The inter-layer communication of SNN is through spikes with a time dimension, which is very different from ANN. Therefore, one question can be raised naturally: whether and to what extent does spike communication detain adversarial invulnerability? And, are there training tools that can help SNNs defend against the threats described above? This paper aims to extend the Lipschitz analysis theory to spike representation and propose a more robust training algorithm on this basis.*\n\n## 2. The authors made a strong claim in L208, without any empirical validation of the fact whether it happens or not. Particularly, when the model is further fine tuned in SNN domain. Hence, the motivation of the work is weak.\n\nThanks for your suggestion. We would like to claim that our intention of using spike train distance is to give a solution of distance with the intention of reducing the loss of temporal information encoded in the spike trains. Hence, we adopt the definition of the spike distance function in Eq.12, which contains both rate and temporal information. Our inspiration is from the field of neuroscience, where various spike train distances are proposed and applied. \nWe reorganize the statement in Section 4.1 as follows:\n\n*\\cite{kundu2021hire} used this rate-based distance to bridge the robustness of ANN and SNN. Spike trains in SNN not only contain rate information but also have a temporal structure. 
To evaluate distances in the spike train space, various kernel methods have been proposed for neuronal identification and encoding~\citep{weng2018towards}. Inspired by these works, we propose to model the distortion of the spike response using the spike train distance, which can bridge the robustness of SNNs to discoveries in neuroscience and is also sensitive to changes in both firing rate and temporal information.*\n\n## 3. The paper does not clearly mention what might be their inspiration to use BN in their SNN models as the model definitions are never explicitly mentioned.\n\nThanks for your suggestion. We would like to clarify that Batch Normalization has been used in SNN direct training with BPTT [3][4], where it plays an important role in mitigating gradient explosion and vanishing. Recent works explore various BN variants for SNN direct training [5][6]. The BN variant proposed in [6] is considered to play a role in the adversarial robustness of SNNs. Therefore, we consider using BN in this paper. We have clarified the motivation for using BN in the revised paper.
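\n\nAs a self-contained illustration of such kernel-based spike-train distances, here is a minimal van Rossum-style sketch; the paper's actual distance (its Eq. 12) may differ, and the kernel time constant `tau` is a placeholder.

```python
import torch

def exp_filter(spikes: torch.Tensor, tau: float = 4.0) -> torch.Tensor:
    """Causal exponential smoothing of a binary spike train of shape (..., T)."""
    decay = torch.exp(torch.tensor(-1.0 / tau))
    out = torch.zeros_like(spikes, dtype=torch.float32)
    trace = torch.zeros(spikes.shape[:-1], device=spikes.device)
    for t in range(spikes.shape[-1]):
        trace = decay * trace + spikes[..., t].float()
        out[..., t] = trace
    return out

def spike_train_distance(s1: torch.Tensor, s2: torch.Tensor, tau: float = 4.0) -> torch.Tensor:
    """van Rossum-style distance: L2 gap between exponentially filtered trains.
    Unlike a pure rate distance, it responds to both firing rate and spike timing."""
    return torch.norm(exp_filter(s1, tau) - exp_filter(s2, tau), dim=-1)
```

Shifting a single spike in time changes this distance smoothly, which is exactly the sensitivity to temporal structure that a rate-only distance lacks.",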
" We are grateful to all the reviewers for their insightful feedback. We would like to address the common concerns about the comparison with the state-of-the-art in this general response. \n\n## Comparison with the SOTA\n\nWe compare our methods with the state-of-the-art models and report the results in Table R1. As SNN adversarial robustness is a very new research field and has not been researched thoroughly, here we compare two SOTA works [1][2] that highly relate to our work. The evaluation is based on the VGG-11 on the CIFAR-100 dataset. The noise budget has been fixed to $\\epsilon=8/255$ for FGSM and $\\alpha=0.01,step=7$ for PGD, and the attack is based on the surrogate gradient produced by BPTT.\n\nIn Tabel R1, one can find that our training scheme outperforms the others in terms of both clean accuracy and perturbed accuracy. The performance of accuracy attacked by FGSM is 25.86% for our work, higher than that of Sharmin et al.[1] (15.5%) and Kundu et al.[2] (22.0%). Moreover, our clean accuracy (70.89%) is higher than that of Sharmin et al.[1] (64.4%) and Kundu et al.[2] (65.1%). This implies that our proposed method can bring better generalization compared to other SOTA robust models.\n\nWe have added the comparison in the revised paper (please refer to the appendix). In addition, to show a fair comparison of the synthetical performance, we have also included a row to present the additional training cost. For a detailed discussion on training cost, please refer to the appendix.\n\n**Table R1: Performance comparison with the SOTA models.**\n| Attack by BPTT | Proposed training | Sharmin et al. [1] | Kundu et al. [2] | Regular BPTT training |\n| --------------------------------- | ----------------- | ------------------ | ---------------- | ------------- |\n| FGSM | 25.86 | 15.5 | 22.0 | 5.30 |\n| PGD | 10.38 | 6.3 | 7.5 | 0.02 |\n| Clean | 70.89 | 64.4 | 65.1 | 73.33 |\n| Additional Training Cost | Regularized Training | - | - | - |\n\n[1] Sharmin, S., Rathi, N., Panda, P., & Roy, K. (2020). Inherent adversarial robustness of deep spiking neural networks: Effects of discrete input encoding and non-linear activations. In European Conference on Computer Vision, 399-414. \n\n[2] Kundu, S., Pedram, M., & Beerel, P. A. (2021). Hire-snn: Harnessing the inherent robustness of energy-efficient deep spiking neural networks by training with crafted input noise. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 5209-5218.",
" The paper proposes robust regularization during SNN training to motivate and inspire improved robustness for the trained models. ## Weakness\n\n### Technical\n\n1. The authors missed a very important point, as earlier literature has already showed the effectiveness of robustness when the input is rate coded [1] and significantly less robustness when input is direct coded [2]. This key aspect is completely overlooked in the manuscript.\n\n2. The authors made a strong claim in L208, without any empirical validation of the fact whether it happens or not. Particularly, when the model is further fine tuned in SNN domain. Hence, the motivation of the work is weak.\n\n3. The paper does not clearly mention what might be their inspiration to use BN in their SNN models as the model definitions are never explicitly mentioned.\n\n4. During training the authors used FGSM attack variants, and during testing they got robustness against PGD variants. This raises significant question about the experimental set up.\n\n5. The authors should proof the robustness is real as this might easily due to gradient obfuscation, and to me it already fails the gradient obfuscation.\n\n6. Authors mentioned they mixed and matched different FGSM attacked images for training to make the attack during training more diverse, no proof (theoretical or empirical) is provided.\n\n7. Analysis of additional training cost is missing. [2] provided improvement in robustness without any additional training cost.\n\nOverall the paper lacks motivation and enough contribution. Comparison with SOTA is also missing. I would encourage the authors to work on the said issues to make the paper better.\n\n[1] Inherent Adversarial Robustness of Deep Spiking Neural Networks: Effects of Discrete Input Encoding and Non-Linear Activations, ECCV 2020.\n\n[2] HIRE-SNN: Harnessing the Inherent Robustness of Energy-Efficient Deep Spiking Neural Networks by Training With Crafted Input Noise, ICCV 2021. Please see weakness. N/A.",
" This paper proposes Regularized Adversarial Training (RAT), an adversarial training framework to improve the robustness of the Spike Neural Network (SNN). This paper adopts three different gradient approximations (i.e., CBA, BPTR, BPTT) to mitigate the non-differentiable of SNN. At first, to achieve stronger adversaries, this paper augments FGSM, RFGSM, and PGD with the above approximations. Then this paper proposes spiking the Lipschitz constant, a variant of normal Lipschitz constraint that regularizes the model to be Lipschitz smooth. The proposed RAT is evaluated over several popular architectures. **Strengths:**\n\n1. This paper is well-written and easy to follow.\n2. The proposed SAT seems very effective over several strong adversaries.\n\n**Weaknesses:**\n\n1. This paper aims to improve the robustness of SNN but didn’t summarize previous efforts on this topic. Besides, this paper didn’t provide comparisons with other related state-of-the-art algorithms, which makes it hard to recognize the contribution and value to this field.\n2. Training network with Lipschitz constraints have been welly exploited in ANN. Some key references are missing in this paper. The authors should illustrate the differences between their work and those Lipschitz constraints proposed on ANN.\n3. The setting of the ablation study is unclear. In Table 3, the definitions of MIX and REG are not provided. Besides, what is the baseline of this algorithm? Since this work is named Regularized Adversarial Training, I expect the baseline would be normal adversarial training without Lipschitz constraint. However, I couldn’t find it. 1. Some key references are missing in this paper, including the robustness of SNN and the Lipschitz constraint in ANN. The authors should make a comprehensive summary of these fields and highlight their contributions.\n2. The authors should provide comparisons with other state-of-the-art defense methods on SNN and also the baseline of this work. The authors have addressed the limitations and potential negative societal impact of this work.",
" This paper provides a theoretical analysis of the SNN robustness against adversarial perturbation. In this regard, a regularized adversarial training method for SNN has been proposed. The regularization is based on the spiking Lipschitz constant. The results conducted on various benchmarks demonstrate that the proposed training scheme achieves better robustness compared to the vanilla adversarial training. Strengths:\n1.\tThe proposed idea is original and relevant to the NeurIPS community.\n2.\tThe results show an advancement in the state-of-the-art.\n\nWeaknesses:\n1.\tSome key concepts need better clarification. See the specific questions below.\n2.\tThe experiments should be extended using event-based datasets. 1.\tIn Section 1: “it is necessary to improve the adversariality of SNNs”. Do you mean to improve the adversarial robustness of SNNs?\n2.\tIn Section 1: “The update amount of weights is the key to constructing adversarial attacks.” Please rewrite this sentence. The adversarial attacks should update the input intensities, rather than the weights.\n3.\tWhat is the noise budget used in the attacks for the results reported in Table 1?\n4.\tThe proposed regularization method described in Section 4 is strongly based on previous works. It is recommended to explain more clearly the novel features.\n5.\tIn Section 5, the results of the proposed method have been compared only to the vanilla adversarial training. If possible, the comparison with other methods among the related works should be included.\n6.\tThe experiments are conducted only on static data, while SNNs are commonly used also on event-based data. Therefore, it is recommended to extend the experiment set including results on event-based datasets. The limitations and societal impact have been discussed by the authors in the supplementary material."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
4
] | [
"6VA-Tk5zCtX",
"IXvET_EefiO",
"EPPiP_R9kI2",
"H3Z0zMz1LuH",
"R9UTY7uy4Av",
"H1-yxyxMkO",
"H3Z0zMz1LuH",
"R9UTY7uy4Av",
"KsxmZdJmar",
"2Cgc6SU9_LQ",
"nips_2022_xwBdjfKt7_W",
"mmnFAs8SgdL",
"R9UTY7uy4Av",
"H3Z0zMz1LuH",
"W1xNu1uOujz",
"WvVi_GeRRc",
"u_fE3pJrGa9",
"2Muk4sH2d7",
"nips_2022_xwBdjfKt7_W",
"nips_2022_xwBdjfKt7_W",
"nips_2022_xwBdjfKt7_W",
"nips_2022_xwBdjfKt7_W"
] |
nips_2022_cFOhdl1cyU- | M³ViT: Mixture-of-Experts Vision Transformer for Efficient Multi-task Learning with Model-Accelerator Co-design | Multi-task learning (MTL) encapsulates multiple learned tasks in a single model and often lets those tasks learn better jointly. Multi-tasking models have become successful and often essential for many sophisticated systems such as autonomous driving and indoor robots. However, when deploying MTL onto those real-world systems that are often resource-constrained or latency-sensitive, two prominent challenges arise: (i) during training, simultaneously optimizing all tasks is often difficult due to gradient conflicts across tasks, and the challenge is amplified when a growing number of tasks have to be squeezed into one compact model; (ii) at inference, current MTL regimes have to activate nearly the entire model even to just execute a single task. Yet most real systems demand only one or two tasks at each moment, while flexibly switching between tasks per need: therefore such “all tasks activated” inference is also highly inefficient and non-scalable in practice.
In this paper, we present a model-accelerator co-design framework to enable efficient on-device MTL that tackles both training and inference bottlenecks. Our framework, dubbed M³ViT, customizes mixture-of-experts (MoE) layers into a vision transformer (ViT) backbone for MTL, and sparsely activates task-specific experts during training, which effectively disentangles the parameter spaces to avoid different tasks’ training conflicts. Then at inference with any task of interest, the same design allows for activating only the task-corresponding sparse “expert” pathway, instead of the full model. Our new model design is further enhanced by hardware-level innovations, in particular, a novel computation reordering scheme tailored for memory-constrained MTL that achieves zero-overhead switching between tasks and can scale to any number of experts. Extensive experiments on PASCAL-Context and NYUD-v2 datasets at both software and hardware levels are conducted to demonstrate the effectiveness of the proposed design. When executing the practical scenario of single-task inference, M³ViT achieves higher accuracies than encoder-focused MTL methods, while significantly reducing inference FLOPs by 88%. When implemented on a hardware platform of one Xilinx ZCU104 FPGA, our co-design framework reduces the memory requirement by 2.40×, while achieving energy efficiency (as the product of latency and power) up to 9.23× higher than a comparable FPGA baseline. | Accept | This paper presents a model-accelerator co-design framework to enable on-device Multi-task Learning (MTL). At the model level, customized mixture-of-experts (MoE) layers are introduced for MTL, which alleviate gradient conflicts at training time and improve efficiency at inference time via sparse activation. At the accelerator level, the paper proposes computation reordering, which allows zero-overhead switching between tasks. The algorithm is verified on popular multi-task datasets, and the accelerator is implemented on commercial FPGAs, demonstrating improved efficiency.
The paper is very well written; the details of the algorithm and hardware implementation are clearly explained. The authors choose a particular setting of MTL, then design the model and tailor the parameters to enable efficient on-device MTL. The work is complete, covering everything from algorithm design to hardware implementation, with sufficient innovation.
Reviewers raised concerns such as:
1). Evaluation on small datasets. During the rebuttal period, the authors provided more experimental results from the large-scale Taskonomy dataset.
2). Overclaiming. For example, double buffering is a well-known technique for dataflow optimization. The technique itself is by no means novel. However, I think using it to solve a practical problem still has value.
Overall, it is a solid paper and is recommended for acceptance.
| train | [
"QxGckJISqnY",
"Qn2UA6sG_XW",
"9U1tEAVCiSK",
"R0Uf74TLDrS",
"Gpq8ORTV1kl",
"ertzBwxe6A-H",
"y9Jk4RbmnON0",
"-W-Yh2m_RSD",
"JIrol0AaZyV",
"1yVI4Vzi6b3",
"f7gb3Up7mPl",
"UzA_XxIpOS",
"XgORurZT_Ud",
"r9cUm6ih1ZV",
"G_NAXruH3I2",
"L8Al9lhUbm5"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your response. The rebuttal has well addressed my questions. I support this paper for its novelty and solid experiments, and I will keep my original score.",
" Dear Reviewer V1Gi:\n\nSince the author-reviewer discussion period will end by tomorrow, we will appreciate if you could check our response to your review comments soon.\n\nIf our response resolves your concerns, we kindly ask you to consider raising the rating of our work.\n\nThank you very much for your time and efforts",
" **[Q2]** Resonating with Review V1Gi, why do you choose a ViT backbone? The reviewer feels that baselines around ViT targeting edge devices or FPGA's with very limited on-chip bandwidth can be naturally very bad, compared to SoTA models built around ConvNets. The reviewer would like to see evaluations built around SoTA ConvNet models. To my understanding, ResNet 18 is not a strong baseline targeting mobile devices or FPGA and ViT beats ResNet 18 is not surprising. Instead, you could compare with MobileNet-V3 or EfficientNetV1 or V2 or ShuffleNet. \n**[A2.1]** We adopt ViT backbones because they are the latest performant deep models, and have achieved impressive performance on various computer vision tasks [9-11]. While ViT-small-MTL (row 10) achieves better performance than ResNet-MTL (row 2) in Table 1 (ViT-small: -1.77% vs. ResNet-18: -2.86%), adopting our task-dependent MoE design to ViT further boosts the performance by a large margin (Ours: +2.71% vs. ViT-small: -1.77%). \n**[A2.2]** We respectfully draw attention to the numerous existing works which explore transformer development on edge devices and model acceleration. As the latest performant deep models, targeting ViT on edge devices or FPGA is well motivated and practically mature now - in both academia and industry. Below [1-7] is a non-exhaustive list of only the most recent literature. \n**[A2.3]** (1) We compare against ResNet-18 because several state-of-the-art MTL dense prediction frameworks [8-11] are all developed based on ResNet-18. \n(2) We also change our backbone to MobileNet-V3-large [12] and test on the PASCAL-Context dataset. The backbone is pre-trained on ImageNet-1k and we load the pretrained weights from [13]. We can see from the table below, MobileNet-v3 performs even worse than the ResNet-18 baseline. We speculate that it is because MobileNet-v3 is a more compact model, during training, the gradient conflicts between different tasks are even more severe. We also notice that the previous MTL on dense prediction frameworks tend to adopt ResNet backbones[14], rather than highly compact ones (e.g., MobileNet). Meanwhile, our proposed M³ViT achieves much higher MTL accuracy while requiring fewer inference FLOPs, coupled with the novel software-hardware co-design. This demonstrates that our model efficiently balances feature reuse with compact model capacity and avoids conflict between different tasks; both are enabled by our MoE design. We will release the code of ResNet-MTL, MobileNet-MTL, and M³ViT.\n| Model | Seg. ↑ | Norm. ↓| H. Parts ↑ | Sal. ↑ | Edge ↑ | $\\Delta(m)$ ↑ |FLOPS(G)|\n| :----: | :----: | :----: | :----: | :----: | :----: |:----: | :----: |\n|ResNet18-MTL | 63.8 | 14.9 | 58.6 | 65.1 | 69.2 |-2.86 | 167 |\n|MobileNet-MTL|56.7 |18.8 |48.8 | 58.5 |63.1 |-17.6 |157 |\n|M³ViT-small| 72.8 | 14.5 | 62.1 | 66.3 | 71.7 | +2.71 | 83 |\n\n[1] Qi, Panjie, et al. “Accommodating Transformer onto FPGA: coupling the balanced model compression and FPGA-implementation optimization.” In Proceedings of the 2021 on Great Lakes Symposium on VLSI, 2021. \n[2] Liu, Zejian, et al., “Hardware acceleration of fully quantized BERT for efficient natural language processing.” Design, Automation & Test in Europe Conference & Exhibition (DATE). IEEE, 2021. \n[3] Peng, Hongwu, et al. “Accelerating Transformer-based deep learning models on FPGAs using column balanced block pruning.” International Symposium on Quality Electronic Design (ISQED), IEEE, 2021. 
\n[4] Li, Bingbing, et al., “FTRANS: energy-efficient acceleration of Transformers using FPGA.” In Proceedings of the ACM/IEEE International Symposium on Low Power Electronics and Design, 2020 \n[5] Sun, Mengshu, et al., “VAQF: fully automatic software-hardware co-design framework for low-bit Vision Transformer”, arXiv 2022\n[6] Kong, Zhenglun, et al., “SPViT: enabling faster Vision Transformers via soft token pruning”, arXiv 2021 \n[7] He, Jiaao, et al. ''Fastmoe: A fast mixture-of-expert training system.\" arXiv 2021 \n[8] Cross-stitch networks for multi-task learning \n[9] Latent multitask architecture learning \n[10] NDDR-CNN: Layerwise feature fusing in multi-task cnns by neural discriminative dimensionality reduction \n[11] End-to-end multi-task learning with attention \n[12] Searching for MobileNetV3 \n[13] https://github.com/d-li14/mobilenetv3.pytorch \n[14] Multi-Task Learning for Dense Prediction Tasks: A Survey \n",
" **[Q1]** How the proposed double buffered computation strategy is novel as compared to the Ping-pong buffer? In your rebuttal to Review YbW8, you agree that \"the double-buffering strategy is also known as ping-pong buffering. We will add a reference to Xilinx, “Specifying Arrays as Ping-Pong Buffers or FIFOs” [5] to make it more clear.\" As I pointed out in my previous review, multi-task MoE does not seem to be very novel, as compared to Google's task-MoE for translation, your response further makes me doubt the novelty of this work. Please help clarify. \n**[A1.1]** Sorry for your confusion, but you seem to have misread our hardware innovation as well as rebuttal context. We clarify our hardware novelties as follows: \n(1) To adapt MoE ViT on hardware with acceptable latency and power, we first propose an effective per-expert queue design to enable expert-by-expert computation rather than token-by-token. Our design uses O(1) on-chip memory so that it can scale to any K and N (line 228-229) and eliminate task-switch and frame-switch overhead in our MTL system (line 229-232). No prior work, to our best knowledge, has ever achieved these. \nThe designing philosophy is clear (lines 205-215): computing each token normally either requires all N experts stored on-chip, incurring **extreme on-chip memory usage** that scales with O(N), or requires an on-chip cache of experts that causes **severe memory delays due to frequent cache misses**. Thus our model is **infeasible to implement in hardware** until we introduce our hardware co-design. \n(2) While double-buffering/ping-pong buffering itself is a well-established technique, compared with previous frameworks, our co-design introduces double-buffering in a totally different new way. To be more specific, the novelty of our design is that we propose a new computation flow that makes double-buffering effective where it otherwise would not be. \nDouble-buffering is only effective at hiding latency when computation latency dominates the memory access latency. Fulfilling this prerequisite is not trivial. For instance, in the cache-based hardware design (lines 210-215), severe memory delays are incurred by continuously reloading the cache (line 215). As a result, memory access latency will dominate and double-buffering cannot alleviate this latency bottleneck. \nOur expert-by-expert reordering unifies each expert’s memory accesses (lines 217-220), creating a novel scheme for MTL MoE ViT execution where computation latency dominates memory latency. Thus we propose to adopt double-buffering to hide memory latency almost completely (lines 227-228), where it otherwise would be ineffective. As this technique is unique to our work, it has never been exploited by any prior work utilizing double-buffering. \n**[A1.2]** As mixture-of-experts provides the scaffold upon which our computation reordering strategy is built, we reiterate that we propose a novel co-design framework for efficient on-device MTL, where software and hardware contributions are strongly and uniquely **TIED** and cannot be decoupled. Our hardware optimizations are predicated on an MoE ViT algorithm design where each token requires any K of the N total experts, and they solve the hardware challenges of extreme on-chip memory usage and extreme latency associated with such an algorithm design. Furthermore, they ensure zero task-switch overhead, necessary for our MTL system, which has fast task switching as a primary goal. ",
" Thanks the authors for the timely and detailed rebuttal. After reading other reviews and the rebuttal, I have a few additional questions:\n1) How the proposed double buffered computation strategy is novel as compared to the Ping-pong buffer? In your rebuttal to Review YbW8, you agree that \"the double-buffering strategy is also known as ping-pong buffering. We will add a reference to Xilinx, “Specifying Arrays as Ping-Pong Buffers or FIFOs” [5] to make it more clear.\" As I pointed out in my previous review, multi-task MoE does not seem to be very novel, as compared to Google's task-MoE for translation, your response further makes me doubt the novelty of this work. Please help clarify. If neither task-moe nor double buffering stragegy is not new, or there is no joint design space, I would not agree with the author about the definition of co-design.\n\n2) Resonating with Review V1Gi, why do you choose a ViT backbone? The reviewer feels that baselines around ViT targeting edge devices or FPGA's with very limited on-chip bandwidth can be naturally very bad, compared to SoTA models built around ConvNets. The reviewer would like to see evaluations built around SoTA ConvNet models. To my understanding, ResNet 18 is not a strong baseline targeting mobile devices or FPGA and ViT beats ResNet 18 is not surprising. Instead, you could compare with MobileNet-V3 or EfficientNetV1 or V2 or ShuffleNet. \n",
" Dear Reviewer V1Gi:\n\nSince the author-reviewer discussion period has started for a few days, we will appreciate if you could check our response to your review comments soon. \n\nIf you have further questions and comments, we can still reply before the author-reviewer discussion period ends. If our response resolves your concerns, we kindly ask you to consider raising the rating of our work. \n\nThank you very much for your time and efforts.",
" **[Q1]** Only two datasets (NYUD, PASCAL) are considered in experiments--no real world datasets \n**[A1]** Following the previous MTL paper [1], which conducts a thorough survey on the multi-task dense prediction field, we validate our proposed framework and compare it against other state-of-the-art methods on NYUD-v2 and PASCAL-Context. Both of them are real-world MTL datasets. Additionally, we have conducted more experiments by choosing tasks from the large-scale Taskonomy dataset [2]. Like our main manuscript, we use ViT-small as the baseline model and MoE-ViT-small for our model. We increase the number of tasks from three to nine and perform detailed evaluations. Following the same data pre-processing and evaluation method as [3], we report the **relative performance improvement** from M³ViT over the baseline ViT. As shown in the table below, M³ViT demonstrates even stronger superiority as the number of tasks increases.\n\n| DeiT-S | Depth zbuffer | Normal | Segment semantic | Edge | Occlusion | Reshading | Keypoints2d | Principal curvature | Auto encoder | Average |\n| :------------: | :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: |\n|3tasks| 3.33% | 0.44% | 7.74% | | | | | | | 3.84% |\n|6tasks| 4.68% | 2.58% | 10.36% | 0.80% | 3.28% | 8.20% | | | | 4.98% |\n|9tasks| 5.41% | 1.58% | 7.67% | 0.34% | 4.34% | 5.06% | 7.83% | 0.26% | 15.01% | 5.28% |\n\n\n**[Q2]** How does the model accuracy and efficiency (Latency, Energy, Memory) depend on the total number of experts N and top experts K? \n**[A2]** (1) For a fixed number of tasks, we can fix the total number of expert candidates N and increase per-task expert selection K to encourage feature reuse. In this way, performance can smoothly increase but will quickly saturate, and the training memory and testing energy/latency will also increase. \n(2) A model for a larger number of tasks is likely to benefit from a larger total number of experts N, as more experts can bring larger model capacity. However, if we fix per-task expert count K, the training efficiency and testing energy/latency will not change. This shows the significance of our modularized design in scaling up the number of tasks. \n(3) Based on the above observations, we choose N=16 and K=4, which empirically balance accuracy and efficiency nicely.\n\n**[Q3]** Is the softmax in equation 3 needed, given that you are only interested in the top-k activations? \n**[A3]** Yes, this design has been proven useful in several classic sparse MoE papers [4-6]. We follow their original Top-K implementations, which all introduce a softmax activation function in the output of the gating network. By scaling the logits into a multinomial probability distribution, they can be used to normalize the outputs. If you are referring to the fact that removing softmax would not change the output indices of the Top-K experts, we kindly highlight that we not only use the output indices of the Top-K experts, but also their expert selection scores for representation computations (Equation 2 in the main manuscript). \n\n**[Q4]** How are the top-k experts computed at training and inference time? What is the runtime? \n**[A4.1]** We replace the dense feed-forward network in the ViT block with sparsely activated MoE experts. Each token will be passed into the task-dependent gating network, to select the subset from all expert candidates, in both the training and inference stages. 
\n**[A4.2]** For an image with a resolution of 640×480, the frame rate of our model is 36.0 FPS on an NVIDIA Quadro RTX 6000 GPU and 11.8 FPS on a Xilinx ZCU104 FPGA.\n\n[1] Multi-Task Learning for Dense Prediction Tasks: A Survey \n[2] Taskonomy: Disentangling task transfer learning \n[3] Which Tasks Should Be Learned Together in Multi-task Learning \n[4] Outrageously large neural networks: The sparsely-gated mixture-of-experts layer \n[5] Gshard: Scaling giant models with conditional computation and automatic sharding \n[6] Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity \n",
" **[Q1]** My main concern is that the MTL baselines used are all somehow weak. I believe some level of comparison and discussion with “Task Adaptive Parameter Sharing for Multi-Task Learning” is necessary. \n**[A1]** (1) The ViT-small model used in TAPS [1] is bigger than our adopted DeiT-S (Ours: 4.5GFLOPs vs. TAPS: 9.8GFLOPs). Moreover, [1] was pre-trained on ImageNet-21k while ours was pre-trained on ImageNet-1k. (We have communicated and confirmed this with the authors.) Due to the limited time window of the rebuttal, we don’t have enough time to reimplement their code and pre-train a new model on ImageNet-21k. As their code is not available, we will reimplement their method and provide a fair comparison in our final version. \n(2) Nevertheless, we attempt a comparison with TAPS by conducting experiments on the benchmark: Flowers [2], Cars [3], Sketch [4], CUB [5] and WikiArt [6]. Following the comparisons in Table 3 of TAPS, we compare our method (MoE-DeiT-S) with the fine-tuned model (DeiT-S w/ fine-tuning). We find that our method surpasses the fine-tuned model on most datasets, which validates the effectiveness of our method. However, TAPS only demonstrates comparable results to the fine-tuned ViT-S.\n|DeiT-S| Flowers | WikiArt | Sketch | Cars | CUB |\n|:----:|:----:|:----:|:----:|:----:|:----:|\n|Fine-tuning| **96.1** | 77.5 | 76.2 | 86.1 | 84.8 |\n|MoE-Deit-S| 95.7 | **79.5** | **79.7** | **86.5** | **85.6** |\n\n**[Q2]** The authors should compare with or discuss some decoder-focused methods and demonstrate the trade-offs. \n**[A2]** (1) These decoder-focused architectures typically require initial predictions or intermediate features of all the tasks, both in training and inference, to improve the predictions. However, activating all tasks in inference violates our motivation: sparsely activating the network to achieve efficient MTL inference. Moreover, those models consume a large number of FLOPs [7], which makes them difficult to deploy onto real-world edge devices with resource and latency constraints. This is because they need higher parallelism factors, more resources, or clever tricks to hit the desired latency requirement, which is out of scope of the discussion of this paper. \n(2) Ignoring the previously mentioned efficiency and memory bottleneck, we conduct comparisons between our M³ViT-base model and decoder-focused work PAD-Net [8], which have similar FLOPs (PAD-Net: 212 GFLOPs vs. Ours: 191 GFLOPs). Our MoE ViT-base model achieves higher performance than PAD-Net on both the PASCAL-Context dataset (Ours: +4.0% vs. PAD-Net: -4.41%) and the NYUD-V2 dataset (Ours: +8.32% vs. PAD-Net: +7.43%).\n\n**[Q3]** Why jumping to a ViT backbone? Looking at Table 1, the ViT baseline is already stronger than most prior works built on ResNet 18. Can you compare with other MTL works with the same ViT backbone, or can you replace ViT with Res18 in your co-design framework? \n**[A3]** We adopt ViT backbones because they are the latest performant deep models, and have achieved impressive performance on various computer vision tasks [9-11]. Although ViT-small-MTL (row 10) achieves better performance than ResNet-MTL (row 2) in Table 1 (ViT-small: -1.77% vs. ResNet-18: -2.86%), adopting our task-dependent MoE design to ViT **further boosts the performance by a large margin** (Ours: +2.71% vs. ViT-small: -1.77%).\nApart from that, we will replace ViT with ResNet in our co-design framework and report the results in our later version. 
Due to the limited time window of the rebuttal, we don’t have enough time to pre-train the MoE-ResNet model.\n\n**[Q4]** If some MTL applications do require activating all or most tasks all the time, will your framework still be advantageous? I understand this will change your problem target, but I suggest the authors include some discussion and clarity for future readers. \n**[A4]** Our co-design framework is based on single-task execution. To achieve the goal of multi-task inference, we can design the gating network conditioned on multi-label encodings, to activate model paths for multiple tasks. We will provide a detailed discussion in the future work section. \n\n[1] Task Adaptive Parameter Sharing for Multi-Task Learning. \n[2] Automated flower classification over a large number of classes. \n[3] 3d object representations for fine-grained categorization. \n[4] How do humans sketch objects? \n[5] The caltech-ucsd birds-200-2011 dataset. \n[6] Large-scale classification of fine-art paintings: Learning the right metric on the right feature. \n[7] Multi-Task Learning for Dense Prediction Tasks: A Survey. \n[8] Pad-net: Multitasks guided prediction-and-distillation network for simultaneous depth estimation and scene parsing. \n[9] Vision transformers for dense prediction. \n[10] Segformer: Simple and efficient design for semantic segmentation with transformers. \n[11] Pyramid vision transformer: A versatile backbone for dense prediction without convolutions",
" **[Q4]** In table 1, did you implement all the related work similarly on FPGAs? How did you generate the energy numbers? If all the baselines are also implemented in FPGA, then the quality of models are largely determined by implementation not necessarily by the method itself. \n**[A4.1]** The metrics in Table 1 of the main manuscript are based on standard PyTorch implementations on GPU for all rows, except the “M³ViT (+ MoE + co-design)”, i.e., the last row. All rows except the last first provide a fair comparison of those algorithms on GPU, showing that our algorithm “M²ViT (+MoE)” outperform all prior work at smaller FLOPs. Then, comparing the second-to-last row and the last row demonstrates the strong energy efficiency gains of our hardware co-design on FPGA. \n**[A4.2]** Energy metrics are computed as the product of the power consumption of the target device (GPU for all rows except the last; FPGA for the last row) in Watts and inference latency in seconds. \n**[A4.3]** Furthermore, we provide cross-platform and cross-model performance improvement breakdowns in Table 4 for a fairer comparison. Specifically, compared to a naive implementation on FPGA, our hardware co-design using computation re-ordering decreases the energy consumption of MoE ViT on FPGA from 6.375 W·s to 0.690 W·s on the PASCAL-Context dataset.\n\n**[Q5]** Two out of six tasks are worse than cross-stitch, any reasons? \n**[A5]** The reasons are two-fold: \n(1) Cross-stitch [16] network has a more complex network design (Cross-stitch: 647 GFLOPs vs. Ours: 84 GFLOPs). \n(2) Both normal estimation and saliency detection tasks require a relatively small receptive field to retain a detailed estimation, and [16] is only allowed to use limited local information (i.e., small receptive field) when fusing the activations from the different single-task networks [17]. But for other tasks that require larger receptive fields, our model performs significantly better than Cross-stitch, since our task-dependent MoE design helps effectively avoid different tasks’ training conflict. We will add those discussions into our final draft.\n\n**[Q6]** Why make the total FLOPs only half of the baseline models (Table 1)? A fair comparison would be with similar FLOPs and compare the accuracy. \n**[A6]** (1) We use half the FLOPs because our backbone is MoE-ViT-small, while the baseline model uses ResNet18 as backbone (rows 1, 2). We adopt ViT as it is the latest performant deep model, which has achieved impressive performance on various computer vision tasks [18-20]. \n(2) Please note for a fair comparison, we also report a baseline model based on ViT-small (Table 1, row 10). Although using a ViT-small backbone helps to achieve better performance than the ResNet baseline (ViT-small: -1.77% vs. ResNet-18: -2.86%), introducing our task-dependent MoE design **further boosts the performance by a large margin** (Ours: +2.71% vs. ViT-small: -1.77%). \n(3) Meanwhile, in Table 1 of the supplement, we also report comparisons between baseline models (row 2, 7) and our MoE-ViT-base model (row 5, 10), with similar FLOPs (MTL-B: 167G vs. Ours: 161G, MTL-B: 192G vs. Ours: 191G). Our model boosts the accuracy by a large margin (Ours: +4.0% vs. MTL-B: -2.86% on PASCAL-Context dataset and Ours: +8.32% vs. MTL-B: +0.41% on NYUD-v2 dataset). 
\n\n[1] Fast drivable areas estimation with multi-task learning for real-time autonomous driving assistant \n[2] Indoor semantic segmentation for robot navigating on mobile \n[3] Task Inference and Distributed Task Management in the Centibots Robotic System \n[4] Evolutionary swarm robotics: genetic diversity, task-allocation and task-switching \n[5] https://nips.cc/Conferences/2022/CallForPapers \n[6] Mest: Accurate and fast memory-economic sparse training framework on the edge. (NeurIPS2021) \n[7] Learning Semantic Representations to Verify Hardware Designs. (NeurIPS2021) \n[8] Hardware-adaptive efficient latency prediction for nas via meta-learning. (NeurIPS2021) \n[9] Shiftaddnet: A hardware-inspired deep network. (NeurIPS2020) \n[10] Learning-in-the-loop optimization: End-to-end control and co-design of soft robots through learned deep latent representations. (NeurIPS2019). \n[11] Constrained deep neural network architecture search for IoT devices accounting for hardware calibration. (NeurIPS2019). \n[12] Towards hardware-aware tractable learning of probabilistic models. (NeurIPS2019). \n[13] Hardware conditioned policies for multi-robot transfer learning. (NeurIPS2018). \n[14] Taskonomy: Disentangling task transfer learning \n[15] Which Tasks Should Be Learned Together in Multi-task Learning \n[16] Cross-stitch Networks for Multi-task Learning \n[17] Multi-Task Learning for Dense Prediction Tasks: A Survey \n[18] Segformer: Simple and efficient design for semantic segmentation with transformers \n[19] Pyramid vision transformer: A versatile backbone for dense prediction without convolutions \n[20] Vision transformers for dense prediction.",
" **[Q1]** The paper is rephrasing many contributions of conditional computation/mixture-of-experts and multi-task MoE. Those contributions are not unique contributions from this paper. \n**[A1]** Indeed some prior works have used MoE for MTL, but our novelty stands out clearly in several ways, as reiterated below: \n(1) Tailored to efficient on-device MTL, we are the first to explore the novel setting of **multi-task training, single-task inference, and swiftly switching between tasks**. This setting is practically convincing [1-4] and can be uniquely enabled by MoE. No prior work utilizing MoE has ever exploited this setting. \n(2) With our task-dependent MoE and software-hardware co-design, we enable realistic efficient on-device MTL models and demonstrate that MoE for MTL can achieve **real-world memory and energy benefits**. No prior MoE for MTL work, to our best knowledge, has ever accomplished these. \n(3) The introduction of MoE is well-motivated from two aspects (recognized by Reviewer DMLe): resolving cross-task training conflicts and sparsely activating for single-task inference. We respectfully point out that we mainly claim the introduction of MoE as a unified tool for these two purposes (line 90).\n\n**[Q2]** The proposed reordering scheme is a system-level contribution which might not be best suited for NeurIPS. The reviewer feels that task-level MoE and hardware execution reordering should be decoupled. \n**[A2.1]** We respectfully point out that each year, there are dozens of accepted papers in NeurIPS that propose various hardware co-designs and put their hardware-system-level contributions as one of their main claims. Besides, software-hardware co-design for deep neural networks is one of the subfields of “Deep Learning,” listed in the NeurIPS author guidelines [5]. Below [6-13] is a non-exhaustive list from NeurIPS in recent years only, and the list can go way longer. \n**[A2.2]** Moreover, as ML and FPGA experts, we are very confident on the necessity of our co-design as our hardware innovation is indeed strongly and uniquely tailored for MoE. We reiterate that, if all tokens choose any K experts out of the N candidates, it either requires extreme on-chip memory, or incurs severe delays under a cache-based design (Section 3.2). This is precisely our motivation for the proposed hardware design, which enables zero-overhead switching between tasks and scales to any number of experts. That further lays the foundation for our efficient single-task inference in MTL. Overall, the co-design is tightly integrated that we see no reason to decouple, making it a well noted and well-justified holistic “co-design”. We note that all other reviewers unanimously appreciate this point, quoted as: “co-design for MoE is a timely attempt” (Reviewer YbW8), “hardware design is tailored for memory-constrained MTL and MoE” (Reviewer DMLe), and more.\n\n**[Q3]** The paper does not prove its point of \"handle many different tasks\" by evaluating only on a small set of tasks. \n**[A3]** (1) In Table 2 of the supplement, we experimentally show that, when we increase the number of tasks, our method consistently demonstrates an increase in the improvement over the baseline method (MTL-B: −2.86% vs. Ours: +2.71% on PASCAL-Context, MTL-B: −4.22% vs. Ours: −0.91% on NYUD-v2). This validates our claim that our method is more effective when handling more tasks. \n(2) To further validate this point, we conduct new experiments by choosing tasks from the large-scale Taskonomy dataset [14]. 
As in our main manuscript, we use ViT-small as the baseline model and MoE-ViT-small for our model. We increase the number of tasks from three to nine and perform detailed evaluations. Following the same data pre-processing and evaluation method [15], we report the **relative performance improvement** of M³ViT over the baseline ViT. As shown in the table below, M³ViT demonstrates an even larger advantage as the number of tasks increases. We will be happy to integrate those new results into our final draft.\n\n| DeiT-S | Depth zbuffer | Normal | Segment semantic | Edge | Occlusion | Reshading | Keypoints2d | Principal curvature | Auto encoder | Average |\n| :------------: | :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: |\n|3tasks| 3.33% | 0.44% | 7.74% | | | | | | | 3.84% |\n|6tasks| 4.68% | 2.58% | 10.36% | 0.80% | 3.28% | 8.20% | | | | 4.98% |\n|9tasks| 5.41% | 1.58% | 7.67% | 0.34% | 4.34% | 5.06% | 7.83% | 0.26% | 15.01% | 5.28% |",
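To make the task-dependent gating concrete, here is a minimal, self-contained PyTorch sketch; the module, shapes, and hyper-parameter names are illustrative placeholders, not the paper's released code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TaskGatedMoE(nn.Module):
    """Toy task-conditioned MoE layer: each task selects its own top-K
    experts out of N via a task-dependent, one-layer gating network."""
    def __init__(self, dim, num_experts=16, top_k=4, num_tasks=5):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts))
        self.task_embed = nn.Embedding(num_tasks, dim)
        self.gate = nn.Linear(dim, num_experts)   # one-layer MLP gate per block
        self.top_k = top_k

    def forward(self, x, task_id):
        # Routing depends on both token features and the active task.
        logits = self.gate(x + self.task_embed(task_id))    # (tokens, N)
        w, idx = torch.topk(logits, self.top_k, dim=-1)     # keep top-K experts
        w = F.softmax(w, dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e in idx[:, k].unique().tolist():           # dispatch per expert
                sel = idx[:, k] == e
                out[sel] += w[sel, k:k + 1] * self.experts[e](x[sel])
        return out

layer = TaskGatedMoE(dim=64)
tokens = torch.randn(128, 64)
y = layer(tokens, torch.tensor(1))  # only task 1's top-K experts are active
```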
" **[Q1]** Using mixture-of-expert to deal with multi-task learning is not a novel idea, but the co-design attempt is interesting to me. \n**[A1]** We agree that some prior works use MoE for MTL, but we respectfully point out that: \n(1) Tailored to efficient on-device MTL, we are the first to explore the novel setting of using multi-task training with single-task inference and swiftly switching between tasks, which is practically convincing [1-4] and can be uniquely enabled by MoE. No prior work utilizing MoE has ever exploited this setting. \n(2) By proposing the task-conditional MoE MTL ViT and the hardware innovations, we enable realistic efficient on-device MTL models and demonstrate that incorporating MoE into MTL can achieve significant memory and energy benefits in real-world systems. No prior work, to our best knowledge, has ever accomplished these. \n\n**[Q2]** Is the mentioned double buffered computation strategy similar to Ping-pong buffer? If so, it would be better to add the reference. \n**[A2]** Yes, the double-buffering strategy is also known as ping-pong buffering. We will add a reference to Xilinx, “Specifying Arrays as Ping-Pong Buffers or FIFOs” [5] to make it more clear.\n\n**[Q3]** How are different experts shared between different tasks? Will the scaling in task number require more experts to maintain a satisfied performance? \n**[A3.1]** Each of the different tasks will select its own top-K experts from the total of N expert candidates via a task-dependent gating network. Expert sharing across tasks is learned automatically through training. \n**[A3.2]** (1) Yes. A model for a larger number of tasks is likely to benefit from a larger total number of experts N, as more experts can bring a larger model capacity. (2) However, we do not need to scale up the per-task expert count K when more tasks are involved. (3) The training efficiency and inference resource cost are dependent only on K and per-expert size. Our results on both NYUD-v2 and PASCAL-Context (Table 1 in appendix) also validate that our proposed co-design model can scale up nicely with more tasks.\n\n**[Q4]** What is the training cost of the proposed model compared with baseline methods? Does it need extra training to learn the expert-task mapping and fully train the large number of experts? \n**[A4.1]** M³ViT has a very similar number of training FLOPs to the ViT baseline, as (1) we only activate a small portion of experts for each image token, and (2) the computational cost incurred by the added gating network, which is a one-layer MLP network per ViT block, is negligible (~0.05 GFLOPs for image resolution 640x480). \n**[A4.2]** We do not need extra training for the expert-task mapping. In our experiments, we keep the number of training epochs of MoE-ViT the same as that of the ViT baseline. \n\n**[Q5]** What is the performance improvement breakdown for the proposed hardware design? \n**[A5]** Our proposal aims to improve the latency of the “experts computation”. For example, vanilla “experts computation” takes 684.586ms in a cache-based FPGA implementation on the NYUD dataset, and is reduced to 18.567ms with our hardware co-design—a 97.3% improvement. 
Please refer to Table 4 in the main manuscript for more details on the overall performance improvement with our hardware co-design, as well as Figure 2 in the supplement for a latency breakdown.\n\n[1] Fast drivable areas estimation with multi-task learning for real-time autonomous driving assistant \n[2] Indoor semantic segmentation for robot navigating on mobile \n[3] Task Inference and Distributed Task Management in the Centibots Robotic System \n[4] Evolutionary swarm robotics: genetic diversity, task-allocation and task-switching \n[5] https://docs.xilinx.com/r/en-US/ug1399-vitis-hls/Specifying-Arrays-as-Ping-Pong-Buffers-or-FIFOs ",
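To make the computation re-ordering concrete, here is a minimal Python sketch contrasting the two execution orders; the function names and shapes are illustrative placeholders, not the FPGA implementation itself:

```python
import torch

def moe_token_major(x, experts, idx, w):
    """Naive order: iterate over tokens; each token pulls in its routed
    experts, so expert weights are (re)loaded once per token on a
    memory-constrained device."""
    out = torch.zeros_like(x)
    for t in range(x.shape[0]):
        for k in range(idx.shape[1]):
            out[t] += w[t, k] * experts[int(idx[t, k])](x[t:t + 1]).squeeze(0)
    return out

def moe_expert_major(x, experts, idx, w):
    """Re-ordered: iterate over experts; each expert's weights are loaded
    once and applied to every token routed to it. With double (ping-pong)
    buffering, expert e+1 can be prefetched while expert e computes."""
    out = torch.zeros_like(x)
    for e, expert in enumerate(experts):  # in hardware: prefetch expert e+1 here
        for k in range(idx.shape[1]):
            sel = idx[:, k] == e
            if sel.any():
                out[sel] += w[sel, k].unsqueeze(-1) * expert(x[sel])
    return out
```

Both orders produce the same output up to floating-point rounding; only the memory-access pattern changes, which is what the re-ordering exploits.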
" We thank all reviewers for their insightful and constructive suggestions. We are glad that reviewers found \n(1) The introduction of MoE for multi-task learning (MTL) is well motivated (Reviewer DMLe); \n(2) The model-accelerator co-design for efficient on-device MTL is an attempt in the right direction (Reviewer YbW8, V1Gi, DMLe, SvCo), and it provides promising performance and might be of independent research interest (Reviewer DMLe); \n(3) Experiments are solid (Reviewer DMLe, SvCo) and the proposed framework achieves good accuracy and/or efficiency (Reviewer YbW8, V1Gi).\n\nWe have addressed all the questions that the reviewers posed with additional experimental results. We will carefully modify our main manuscript later, following those suggestions.",
" In this paper, the authors propose to solve the gradient conflict in training and dense activation in inference in multi-task learning using a co-designed mixture-of-expert model. In algorithm level, the authors propose to replace ViT layer with mixture of expert layer and sparsely activate them during training to alleviate the gradient conflict issue, while during inference, the author propose a novel computation reordering scheme to better support the sparse activation for different tasks. Experiments on GPU and FPGA show the proposed co-design achieves better performance than existing methods. Strengths:\n- The co-design for Mixture-of-Expert to enable multi-task learning is a timely attempt. \n- The achieved results is promising in both accuracy and efficiency. \n\nWeaknesses:\n- Using mixture-of-expert to deal with multi-task learning is not the novel idea, but the co-design attempt is interesting to me. \n- Is the mentioned double buffered computation strategy similar to Ping-pong buffer? If so it would be better to add the reference. - How are different experts shared between different tasks? Will the scaling in task number requires more experts to maintain a satisfied performance? \n- What is the training cost of the proposed model compared with baseline methods? Does it need extra training to learn the expert-task mapping and fully train the large number of experts? \n- What is the performance improvement breakdown for the proposed hardware design? The authors have adequately addressed the limitations and potential negative societal impact of their work.",
" The paper presents a multi-task MoE and model-accelerator co-design. At the algorithm level, the method adopts a task-level mixture-of-expert approach that can effectively reduce training and inference cost. At the model-accelerator level, the paper proposes a computation reordering scheme that is tailored for memory-constrained MTL and achieves little switching overheads. The paper observes significant inference FLOPs reduction and memory requirement reduction, compared to a FPGA baseline. Strengths:\n- The paper has a very thorough and complete related work and reference section. \n- Multi-task MoE is an important field that generates numerous important works. This paper seems to be a right direction addressing important problems. Co-designing a software/hardware systems can be the right problem to solve. \n- The proposed reordering reduces memory footprint at small overheads. \n\nWeaknesses:\n- The paper is rephrasing many contributions of conditional computation/mixture-of-experts and multi-task MoE. Those contributions are not unique contributions from this paper. For example, token-MoE has been prevalent for a long time and more recently, task-MoE has been demonstrated useful in many tasks including machine translations (e.g. Machine translation from Google). [1] \n\n- The proposed reordering scheme is a system-level contribution which might not best suited for NeurIPS. Even though, there is a co-design claim, the reviewer feels that task-level MoE and and hardware execution reordering should be decoupled. The same reordering scheme can be applied to other types of networks equally. In order to prove co-design is effective, some co-optimization should be applied which can require some joint search space on the MoE model and reordering scheme. The paper should show that co-designing these two parts can be useful and is better than separately optimizing each of them.\n\n- The paper listed challenges round \"handle many different tasks\" and \"switch between tasks\" and how different tasks can interfere each other when training on a shared model backbone. An example given in the paper is the autonomous driving setting where potentially hundreds of tasks are running on the shared model backbone. However, the paper does not prove it's own point by evaluating only on a small set of tasks. \n\n[1] \"Beyond Distillation: Task-level Mixture-of-Experts for Efficient Inference\", https://aclanthology.org/2021.findings-emnlp.304.pdf 1. In table 1, did you implement all the related work similarly on FPGAs? How did you generate the energy numbers? If all the baselines are also implemented in FPGA, then the quality of models are largely determined by implementation not necessarily by the method itself. \n2. Two out of six tasks are worse than cross-stitch, any reasons?\n3. Why make the total FLOPs only half of the baseline models (Table 1)? A fair comparison would be with similar FLOPs and compare the accuracy. Yes. ",
" This paper studies a novel setting of MTL: training with multiple tasks hoping them to boost each other, and inference with a single task each time. The setting is practically convincing as most MTL systems switch between tasks instead of executing all tasks at the same time. The authors presented a model-accelerator co-design framework, involving both algorithm and hardware innovations. Strong points:\n1. The overall problem and idea are novel and interesting. Most MTL papers assume all tasks to be trained and tested in the same bundle. This paper allows for more flexibility. \n2. The introduction MoE is well motivated from two aspects: efficiency for single-task inference (sparse activation); and avoiding MTL training conflicts (grouping similar modules). Using MoE is hence natural in this new setting, and to my best knowledge, is novel for the MTL field. The authors also designed a task-dependent gating network for this new purpose.\n3. The authors meanwhile present interesting hardware innovations. To my best knowledge, not too many hardware works were done in the MTL domain. This work proposed a novel computation reordering mechanism tailored for memory-constrained MTL and MoE, which allows scaling up to any number of experts and also achieves zero-overhead switching between tasks. It sounds very promising and might be of independent research interest. \n4. Experiments are solid, involving customized FPGA design on real hardware, and evaluating two major MTL datasets (NYUD-v2, PASCAL-Context). The proposed co-design can save memory and energy costs by up to one order of magnitude, with no worse accuracies than baselines. Besides, more tasks seem to save more (PASCAL versus NYUD), which shows great scalability to larger numbers of tasks.\n5. The abstract and intro were written elegantly to give a proper big picture. The authors also did a good job in making figures. As another plus, codes are provided in the supplementary: I did not have time to carefully check but from quick reading, the code quality looks fine.\n\n\nWeak points:\n1. My main concern is that the MTL baselines used are all somehow weak. Even though the problem setting is new, and I know not many peer algorithms did exactly the same job, there are still some. For example, the authors should check “Task Adaptive Parameter Sharing for Multi-Task Learning” (CVPR 2022). That paper has important similarity to this submission, in learning modularized models for MTL while allowing for single-task inference. Although it does not use dynamic MoE and has no hardware co-design, I believe some level of comparison and discussion is necessary. There might be more in literature. \n2. One other baseline-related issue is that the authors only compare with encoder-focused MTL models, and claim that is because decoder-focused ones are in general heavier (but also, more performant). I think the authors should at least compare with or discuss some decoder-focused methods and demonstrate the trade-offs here, rather than completely ignoring. For example, is it because decoder-focused MTL models cannot fit into FPGA, really? \n3. Why jumping to a ViT backbone? Looking at Table 1, the ViT baseline is already stronger than most prior works built on ResNet 18 (as expected). Can you compare with other MTL works with the same ViT backbone, or can you replace ViT with Res18 in your co-design framework? That would help disentangle your own contributions.\n4. 
If some MTL applications do require activating all or most tasks all the time, will your framework still be advantageous? For example, what about always executing 2, 3, or 5 tasks simultaneously on PASCAL? I understand this will change your problem target, but I suggest the authors include some discussion and clarification for future readers. Please see the weakness part. No; the current version claims it has no negative societal impacts.",
" The paper describes an efficient multi-task neural network for vision applications that can be deployed on resource-constrained devices. The model consists of a mixture of vision transformer experts that are sparsely activated by a GELU activation function. The main innovation is that mixture of expert layers are computed expert-by-expert instead of token-by-token, which reduces latencies of loading expert weights. ## Strengths\n* Computing mixture-of-expert layers seems like a simple yet effective why to increase the efficiency of a vision transform mixture of expert architecture\n* The paper is well well\n* Experiments are solid\n\n## Weaknesses\n* Only two datasets (NYUD, PASCAL) are considered in experiments--no real world datasets * How does the model accuracy and efficiency (Latency, Energy, Memory) depend on the total number of experts N and top experts K?\n* Is the softmax in equation 3 needed, given that you are only interested in the top-k activations?\n* How are the top-k experts computed at training and inference time? What is the runtime? Yes"
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
4,
2
] | [
"-W-Yh2m_RSD",
"Gpq8ORTV1kl",
"R0Uf74TLDrS",
"Gpq8ORTV1kl",
"1yVI4Vzi6b3",
"r9cUm6ih1ZV",
"L8Al9lhUbm5",
"G_NAXruH3I2",
"1yVI4Vzi6b3",
"r9cUm6ih1ZV",
"XgORurZT_Ud",
"nips_2022_cFOhdl1cyU-",
"nips_2022_cFOhdl1cyU-",
"nips_2022_cFOhdl1cyU-",
"nips_2022_cFOhdl1cyU-",
"nips_2022_cFOhdl1cyU-"
] |
nips_2022_NaZwgxp-mT_ | Training Uncertainty-Aware Classifiers with Conformalized Deep Learning | Deep neural networks are powerful tools to detect hidden patterns in data and leverage them to make predictions, but they are not designed to understand uncertainty and estimate reliable probabilities. In particular, they tend to be overconfident. We begin to address this problem in the context of multi-class classification by developing a novel training algorithm producing models with more dependable uncertainty estimates, without sacrificing predictive power. The idea is to mitigate overconfidence by minimizing a loss function, inspired by advances in conformal inference, that quantifies model uncertainty by carefully leveraging hold-out data. Experiments with synthetic and real data demonstrate this method can lead to smaller conformal prediction sets with higher conditional coverage, after exact calibration with hold-out data, compared to state-of-the-art alternatives. | Accept | Decision: Accept
This paper extends conformal prediction techniques to multi-class classification using deep neural networks and makes the training of the neural network aware of the conformal inference post-processing. The main technical contribution is a differentiable objective that approximates a CDF-based test on the conformity scores. The paper provides both a theoretical analysis and empirical evaluation results for the proposed approach.
Reviewers found the paper to be well written and the approach to be well motivated and supported. There were a few technical concerns, but many of them were addressed in the author feedback.
Still, the main technical downside is the computational cost of the approach, with experiments at a relatively small scale in terms of network and dataset size. Also, several practical issues are not discussed, e.g., data augmentation and distribution shift.
However, this is indeed an early work in the conformal prediction area that tries to make neural network training adaptive to conformal inference "post-processing", and I believe the work is going in the right direction and will have a good impact in the "conformal prediction + DL" area.
As a side note, I'd encourage the authors to add discussions on related work that proposes regularisers for better neural network calibration, e.g., MMCE https://proceedings.mlr.press/v80/kumar18a/kumar18a.pdf. | train | [
"MWQ40uZ86p",
"ky-KOEzDU0_",
"luTqRfaHUX",
"TOk5gBkQyGJ",
"9tDeUTly8zI",
"zoWfSQH0yFK",
"U9iDarhnHrg",
"yRVBlTBy6Mm",
"pdCp6KmoHo4",
"bPKi8Thg6HV",
"5D1vE2-isEj",
"lHiEfbN0KY_",
"Vi-6KuqupI_",
"O6goads0k1d",
"8y9Qx6AyrM3",
"5-Mf2nPzZC",
"AhPtj6ROenb",
"QPD2xxvu1MZ",
"AYJjsj0wUDa",
"HEIzvVWtAZK",
"k0Sob07SnHi",
"2g4yW8aMdQE",
"6NqaD4rLbOX",
"_e2dThOs-WE",
"pPEsGlpA-wZ",
"f3XAPkRJAO_",
"DrrdSoQVjb6",
"yLYWRAsKjLs",
"hTfgQmJgaQx",
"9nPmxpTZO0F",
"6fuennJBfag",
"iAgcSKyV1GN",
"ZAbPk_ZEsu",
"fDcaB4Bn8MC",
"Ef1wFM7Mp98"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the stimulating discussion! We will incorporate these ideas in the paper, and the extra analyses in the appendix. You have also successfully convinced us to look at data augmentation more closely in the near future.",
" In any event, thanks for engaging. Given the amount of discussion that has arisen, I do think that others in the community will also find it interesting to discuss--and am raising my score.\n\nI do heavily encourage you to include the extra analysis in the appendix, including the discussion of the questions brought up here (and with the other reviewers).",
" Sure, that might be a good idea to try. \nPerhaps we should write a follow-up paper specifically focused on data augmentation.",
" Thanks. Not to beat a dead horse, but then as we agree that it is indeed feasible within your framework, adding in data augmentation to see how your method compares with stronger base models would be a good addition. Having such low accuracy compared to other Resnet18s only raises questions, needlessly.\n\nAnother thought: $X_1, f(X_1), X_2, f(X_2)$ aren't exangeable, but $X_1, X_2$ and $f(X_1), f(X_2)$ are. Or, even $F(X_1), F(X_2)$ should be as well, where $F$ is a random function. Would it help your method (perhaps even from a data efficiency view, requiring smaller $\\mathcal{I}_2$), if you randomly sampled a single augmentation per image at each round (i.e., randomly keep only the original image, or its mirror, if the augmentation was $F \\in \\{ \\text{original, mirror} \\}$.\n",
" > Except my question was in reference to the fact that you do data splitting---what prevents data augmentation of split $\\mathcal{I}_1$ only?\n\nOh, got it now: thanks for clarifying. You're right, it is easy and it makes sense to use data augmentation in $\\mathcal{I}_1$; we could have done that. We'd be happy to mention the idea in the paper. Your question previously got us thinking about data augmentation in $\\mathcal{I}_2$ and $\\mathcal{D}_2$ simply because that would be conceptually more interesting from the point of view of the methodological innovations proposed in this paper, but it is not straightforward.",
" The general case I understand. Except my question was in reference to the fact that you do data splitting---what prevents data augmentation of split $\\mathcal{I}_1$ only? The exchangeability of $\\mathcal{I}_2$ and $\\mathcal{D}_2$ are unaffected, no? This is the only split where it would apply for training a good base model anyway.",
" Similarly to the cross-validation or jackknife idea discussed above, there is no theoretical reason why one could not use data augmentation in combination with our training method. However, it is not so obvious how to best use data augmentation for conformal calibration. The problem is that data augmentation violates the exchangeability assumption.\n\nSuppose we augment each of n images in a data set with a mirror image. Even if we assume the original n images were exchangeable, the new 2n augmented images clearly are not, because they are tied pairwise by mirror relationships. In other words, suppose I give you this data set: (Image 1, Mirror Image 1, Image 2, Mirror Image 2, Image 3, Mirror Image 3, Image 4, Mirror Image 4). Could you tell that this data set is statistically different from the following: (Image 4, Mirror Image 3, Mirror Image 1, Image 2, Mirror Image 2, Image 1, Image 3, Mirror Image 4)? Yes, it should be quite clear, because there was meaning in the order of the images in the first data set, but there is no meaning in the second data set.\n\nGiven that we are trying to be as precise and rigorous as possible in this paper, it doesn't feel right to simply naively go ahead with data augmentation disregarding this subtle exchangeability issue. We think that combining data augmentation with our method is possible, but it must be done with care. We feel that this problem is sufficiently challenging and important to be best left to follow-up work.\n\nWe hope this answers your question. Please let us know if something is unclear or if you have any more thoughts! ",
" Yes, I think that a jackknife+ or J+aB approach would be a great followup.\n\nI\"m still not sure I understand your previous comments about what makes data augmentation hard during training though per your comment on exchangeability. Could you clarify?",
" Dear Reviewer m7U1,\n\nThank you for taking the time to consider our long response, and for your good suggestions of adding further technical details about differentiable sorting as well as mentioning the possible complications related with using our method with data augmentation.\nWe will follow your advice in the next round of revision, or while preparing the camera-ready manuscript if our paper is accepted.\n\nSincerely,\nThe auhtors",
" Dear Reviewer 3Bsi,\n\nIt is indeed very interesting to ask whether a more data-parsimonious version of our method (e.g., using cross-validation or jackknife hold-outs instead of sample splitting) could be developed in the future. As a first approach, it seemed intuitive for us to evaluate the novel conformity loss function on data that are never processed through the cross-entropy loss, but it is true that this is not the only possible approach in theory. Our intuition is that cross-validation or the jackknife may be relatively more susceptible to overfitting in this context---due to the multi-epoch nature of the training algorithm. However, we agree that this is something that may be worth verifying empirically in a near future. \n\nThank you again for the great discussion! ",
" Dear authors,\n\nThank you for your very complete response, which has answered many of my original questions. \n\n- I'm satisfied with the response regarding empirical improvements; thanks for clearing up my misunderstanding, as I missed that the target coverage in that table was only 90%.\n\n- I tend to agree with the authors point that higher computational cost may be worth paying for better uncertainty quantification---and it depends on the application. Hopefully, this computational overhead may be reducible in future work.\n\n- I understand that there are both epistemic and aleatoric types of uncertainty. While your experiments may have focused on aleatoric uncertainty, and this is great, there are other problems in which epistemic uncertainty may dominate, and data collection is expensive. In these cases, as a practioner, it will be hard to decide what is better: to leave our data for calibration because some of my uncertainty may be aleatoric, or use more data to try to improve my model. Other conformal methods (like jackknife+) that have better data efficiency don't make this choice as difficult. That said, there is nothing not to disagree about when you say that uncertainty is not necessarily a small sample issue, and that data splitting is not necessarily a bad thing. I think it's worthwhile to acknowledge, though, that data splitting also has its downsides, and is not a panacea. \n\n- I'm still not sure why you couldn't have taken the typical training scheme for loss $\\ell_a$. After all, you do data splitting, so simply not doing data augmentation on splits $\\mathcal{I}_2$ and $\\mathcal{D}_2$ would preserve exchangeability, no? It's not clear why any restrictions must be placed on $\\mathcal{I}_1$, or why one cannot simply follow state-of-the-art for computing the $\\ell_a$ loss (including data augmentation). Data augmentation is only performed at training time, not at inference. While it may be an _orthogonal method_, it would still be good to know how much of the _performance benefit_ your method and data augmentation methods give are independent.\n\nThat said, in light of some of discussion in the response, I will be happy to raise my score.\n",
" Thank you for reviewing this paper. Could you respond to the author feedback, or at least acknowledge that you've read the reply? Does the author reply address your concerns?\n\nBest, AC",
" Thank you for reviewing this paper. Could you respond to the author feedback, or at least acknowledge that you've read the reply? Does the author reply address your concerns?\n\nBest, AC",
" Thank you for the discussion!\nWe will think about how to address the important follow-up problem you suggested.",
" I see that the strongest case for the model is using it as used in the experiments in the paper. But generalizing to other corruptions may actually be a side effect of the outlined training procedure. Without experiments, one can never know. Even if they were to fail, they would provide more insight as to the full extent of the method and what can be worked on in the future. \n\nAnyway, I still think the method is useful and novel which is reflected in my original score. \n\nThanks",
" Thank you for your detailed response. I particularly appreciate the clarification on the post-hoc recalibration and the additional results demonstrating the reduced reliance on this second step. I’m happy to facilitate a consensus between the reviewers and will increase my score.\n\nA couple more specific comments on the response:\n\n* **W1**: I appreciate that under the hood these differentiable sorting methods may be complex and that the reader does not need to understand their inner workings if software packages supporting them exist. However, I would imagine that based on the API and abstractions of such a software package it would be possible to provide an intuition to the reader of what the core functionality necessary is. A deep learning paper won’t discuss the inner workings of PyTorch’s autodiff module to calculate gradients, but will give an idea of e.g. the innovations on a new layer type/architecture etc that would fit into the module library. I also appreciate that you intend to release code accompanying the paper, however the core artifact of a publication is still the paper itself, so the paper should give at least a good idea of the implementation (even if it is not possible to reproduce all results without the code, but most readers won’t attempt to do this, so they shouldn’t have to refer to the code to understand the paper).\n* **W3**/**Q2**: Thank you for the clarifications, I had not noticed that you weren’t using data augmentation, I can see that this may lead to a need for early stopping. I would suggest mentioning as a limitation that the method can’t be used with data augmentation out-of-the-box.",
" There is a good reason why all models in our experiments are trained on data containing at least a few corrupted images. How could any model otherwise learn how to properly deal with aleatory uncertainty, after being trained on clean CIFAR-10 data which contain virtually no aleatory uncertainty? The strength of our method is that it can be more effective than its benchmarks in recognizing uncertainty in the training data and learning how to account for it at test time. However, despite its relatively good performance, our method is not \"magic\". It cannot learn about patterns that are never observed in the data, and neither could any other realistic alternative approach! This is why we think it is more informative to carry out experiments in which we train our method on a mixture of clean and corrupted images, and then we apply it to test data involving similar mixtures with varying proportions. These are in a certain sense simpler experiments than you suggest, but they are better fitted to the main point of this paper and they are still highly non-trivial, as demonstrated by the clearly less than ideal performance of the existing benchmarks.\n\nAnalogous reasoning explains why we did not try to apply our models, nor any of the benchmarks, to CIFAR10-C test data. The CIFAR10-C images are affected by a completely different set of possible corruptions, including manipulation of contrast, brightness, sharpness/blur, level of noise, etc. If we were to apply our pre-trained models (fitted on CIFAR10 with masking corruptions) to CIFAR10-C images, the results would speak more as to the robustness of these models to major distributional shifts than to their ability to learn about previously observed uncertainty. Of course, uncertainty estimation and robustness to distributional shifts are related issues, but they are also clearly distinct insofar as this paper goes. This paper focuses on uncertainty estimation and conditional coverage, not on robustness to completely new and previously unseen types of data. If we wanted to obtain models that are more robust to completely new types of images, we would most likely need even more sophisticated models trained on much more diverse types of data (e.g., image classification data with lots of different types of corruption) [1]. We think this is a very interesting direction for future work, and we would be happy to mention it in a paper revision.\n\n[1] D. Hendrycks, N. Mu, E. D. Cubuk, B. Zoph, J. Gilmer, and B. Lakshminarayanan. \"AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty.\" In International Conference on Learning Representations. 2019.",
" Thank you or your response and extra experiment on Weakness(2). My main point in asking this question was to see how the model behaved when seeing corruptions which were not present in the training set. Section 4.2 says that the training set contains corrupted images too. \n\nIf I am not mistaken, the pretrained models you have can be compared on the CIFAR-10 test set in this way without any retraining, correct?",
" Dear Reviewers Xxp1, 3Bsi, 9Dxm, and m7U1,\n\nThank you for reading our paper carefully and providing many insightful comments. We have responded point-by-point below, and we have conducted new numerical experiments to accompany our answers to your comments.\nTo facilitate the second round of review, we have kept the new empirical results separate from the original submission. NewFigures 1--10 summarize the results of these new additional experiments, and they can be found in the supplementary file \"response_figures.pdf\".\n\nIf our paper is accepted, we will incorporate the new figures into the manuscript (or in the supplementary material, with pointers in the main text). We will also distill the main points of our discussion into the paper.\n\nWe would be very happy to continue the discussion if you have any remaining/follow-up comments or questions! \n\nThank you!\n\nThe anonymous authors",
" Limitation. Regarding the use of post-hoc calibration, this issue is related to a comment by Reviewer Xxp1. Our method is specifically designed to train models that are approximately calibrated on their own, which tends to make the post-hoc conformal calibration step relatively less crucial compared to models trained through standard means. To illustrate this point, we have added several figures (NewFigures 5-9) in the supplementary material (response_figures.pdf) which summarize the results of new numerical experiments conducted without post-hoc calibration, on both real and synthetic data. These results demonstrate that prediction sets computed by models trained with our method are much closer to having valid coverage compared to models trained by other means, even if no post-hoc conformal calibration is performed. These new results will be integrated into the main paper if we are given the opportunity to revise it. While such observations speak positively as to the performance of our method, we do not believe they would justify skipping the post-hoc calibration step, and indeed we never advocate to do so. To the contrary, we believe it is generally a good idea to perform post-hoc calibration regardless of how the model is trained because that is the only currently available solution to obtain rigorous finite-sample mathematical guarantees. Our goal in this paper is simply to provide methods to train models that can be later calibrated as smoothly and efficiently as possible. If given the opportunity to revise this manuscript, we will indicate more clearly in the introduction that post-hoc conformal calibration remains a good idea with any model, including those trained with our method.",
" Minor note (1). Neural nets trained with proper scoring rules may very well be calibrated in some imaginary infinite-data limit, but unfortunately this does not mean they are calibrated in practical applications with finite data sets. They are typically not even close to being well-calibrated; they are often very poorly calibrated. Think of Figure 1, and most other results in our paper: it’s not easy to get high conditional coverage, which means the models are not well calibrated. Even when neural networks happen to be approximately calibrated in some marginal “average” sense, they are usually not calibrated at all for many different types of test cases (poor conditional coverage). The difficulty of training neural networks (or other machine learning models) that are well-calibrated in practice is notorious, which is why there is a rich literature on all sorts of regularization techniques, early stopping strategies, post-hoc calibration methods, and conformal inference to try to address this problem. \n\nWe also find ourselves at disagreement on the statement that “Intrinsically noisy data is not problematic, as this would be reflected in the training set and lead to less confident predictions.”. If only that were true! Deep neural networks were not originally designed to work with intrinsically noisy data, and indeed the comments of other reviewers seem consistent with our impression that aleatory uncertainty has not always received sufficient attention in deep learning. The practical overconfidence of deep neural networks is a particularly serious problem with intrinsically noisy data even if there is no covariate shift, as our numerical experiments and applications demonstrate (compare for example Tables 1 and 2). Again, the confusion here might arise because the reviewer is thinking more about theoretical limits of infinite data. Regardless of whether deep neural networks could in theory become naturally well-calibrated using any (reasonable) loss function in some abstract infinite-data limit, we have to deal with the fact that they are not really doing well enough from that regard in the real world (see Table 2, for example). The goal of this paper is not to suggest that some theoretical flaw in traditional loss functions fundamentally prevents them from learning well-calibrated models in some infinite-data-limit abstraction. Our goal is to provide a concrete and directly applicable solution to the very real ill-calibration problems that anyone can easily observe in many applications.\n\nFinally, regarding covariate shift, we do agree: of course covariate shift makes calibration even more challenging. See for example our Figure A5, or the new figures added in this rebuttal, as discussed above. However, covariate shift is not the only problem. We hope that at this point we have made our case that it is generally not easy to train well-calibrated deep neural networks, with or without covariate shift, but our method can be useful for that purpose under both settings.\n\nMinor note (2). As discussed in previous answers to related comments, Figure A5 reports on the results of numerical experiments based on synthetic data with covariate shift. Additionally, we have carried out additional experiments based on the CIFAR-10 data under different degrees of image corruption in the test set (covariate shift). These results are included in NewFigure 10, within the new supplementary document “response_figures.pdf”. 
These figures show that our method is more robust to covariate shift compared to all alternative benchmarks, consistent with the results in Figure A5 pertaining to synthetic data. We will insert this figure into the paper if given the opportunity to revise it.\nIt is worth spending a few words explaining why it should be unsurprising that our method performs relatively well under these forms of covariate shift even though data exchangeability is violated. The reason is that our method practically achieves higher conditional coverage on these data sets compared to the benchmarks, and methods with perfect conditional coverage are theoretically immune to covariate shift. ",
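The last claim can be spelled out in one line; this is a standard argument, stated here informally. If the prediction sets $\hat{C}$ satisfy $\mathbb{P}(Y \in \hat{C}(X) \mid X = x) \geq 1-\alpha$ for (almost) all $x$, then for any shifted covariate distribution $Q_X$ that leaves the conditional law of $Y$ given $X$ unchanged,
$$\mathbb{P}_{Q}\big(Y \in \hat{C}(X)\big) = \mathbb{E}_{X \sim Q_X}\Big[\mathbb{P}\big(Y \in \hat{C}(X) \mid X\big)\Big] \geq 1-\alpha,$$
so perfect conditional coverage implies valid coverage under any such covariate shift.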
" Question (2). As mentioned in our answer above to your earlier comment, early stopping is a well-known technique which can serve as an implicit form of regularization and can often mitigate overfitting and lead to 1-guess predictions with higher test accuracy. However, early stopping is not a satisfactory solution for overconfidence because it is typically implemented in such a way as to approximately maximize 1-guess prediction accuracy, not to mitigate overconfidence. This fact is consistent with our observations that early stopping does not seem to be as useful in combination with our method, and it can make the conditional coverage of models trained via cross-entropy even worse.\n\nRegarding the performance of different methods on the CIFAR-10 data, this is related to a similar question by reviewer 3Bsi. The reason why we do not reach 90%+ accuracy with Resnet18 on CIFAR-10 is that we do not utilize, for simplicity, all the different data augmentation techniques applied in the paper mentioned by the reviewer. As prior works have demonstrated empirically, data augmentation can be a very effective strategy to boost predictive accuracy in image classification tasks. However, as it involves the very definition of the available data set, we see it as a somewhat orthogonal issue to that of designing the loss function and learning algorithm, which is the problem we consider here. There is no particular reason why the conformalized training ideas discussed in this paper could not in the future be applied in combination with data-augmentation techniques, and in fact we suspect that it might help to do so. However, it would not be completely straightforward to add image augmentation in this paper. First, it would excessively lengthen the manuscript and complicate the comparison with different methods, because it would result in lots of different combinations of training algorithms and data augmentation techniques to consider. Second, the theory behind conformal inference typically assumes all data points to be exchangeable with one another, which is not necessarily true after data augmentation. This issue can likely be addressed in an effective and theoretically rigorous way, but the topic deserves sufficient attention and care that it is best left to future work. We will emphasize this exciting opportunity for further research if given the opportunity to revise this manuscript.\n\nRegarding why we corrupted some images in CIFAR-10, instead of working only with the original clean images, is that this is a data set with very low aleatory uncertainty (see our response to reviewer Xxp1 for a full discussion of different sources of uncertainty). As explained in the introduction, overconfidence is an especially concerning problem in applications with significant aleatoric uncertainty, and our method is specifically designed to address that issue. Therefore, the clean CIFAR-10 data set would not offer the most interesting application, because uncertainty isn’t really a huge concern there and there is very little room for improvement. That being said, we have explored the effect of different proportions of corrupt images in the training data on the performances of different methods in Figure A27. We have also performed additional experiments on the CIFAR-10 to investigate the effect of covariate shift (changes in the proportion of corrupt images) in the test set; see our answer to the related comment by Reviewer 9Dxm for further details. 
The results of these additional experiments are included in NewFigure 10, within the new supplementary document “response_figures.pdf”. These results show that our method is more robust to covariate shift compared to all alternative benchmarks, consistent with the results in Figure A5 pertaining to synthetic data. We will insert this figure into the paper if given the opportunity to revise it. Unfortunately, we did not have sufficient time to carry out the particular experiments you suggested within the limited time frame allowed for this rebuttal period, because that would require re-training all models on clean CIFAR-10 data, which is not something we have already done. In any case, we hope that this response can already answer your question. \n\nQuestion (3). Please see our answer to your comment about “weakness 4”, which also answers this question.",
" Questions. “I'm overall not too confident in my review due to being unfamiliar with the related literature and commonly used experimental setups and results”. We appreciate the honesty, but we would like to reassure you that we found your questions to be quite insightful, and we are grateful for the opportunity they give us to provide further clarifications. \n\nQuestion (1). This question is related to a similar question reviewer 3Bsi. Our guiding theoretical result (Proposition 1) says that the conformity scores of a well-calibrated model should be uniformly distributed when evaluated on independent (or hold-out) data. The “hold-out” part of this statement is crucial. Having uniformly distributed conformity scores on the training data is perfectly consistent with the model being grossly overfitted and overconfident, and thus it says nothing about whether the predictions for future test points will be more or less well calibrated. This is why we prepare the model to produce well-calibrated predictions for future test points by training it to produce approximately uniform conformity scores for hold-out data. Please see our longer answer to reviewer 3Bsi and other related responses to other reviewers for further details.",
" Weakness (3). We don’t quite agree that our results on real data are a “mixed bag”. We actually think they are quite informative and clearly encouraging, but it is true that they may take a moment to parse. Uncertainty is a subtle concept to quantify and communicate, and it has to be measured carefully along different dimensions. First, as discussed with reviewer 9Dxm, conditional coverage and prediction set size need to be interpreted together. In fact, the trivial prediction sets which always include all possible labels would have perfect 100% conditional coverage but would also be completely useless. Second, the room for improvement in conditional coverage within the CIFAR-10 data set (Table 1) is small, because the benchmark methods already do quite well (see our discussion of epistemic vs aleatory uncertainty with reviewer 3Bsi). Yet, our method fully achieves the desired 90% conditional coverage in Table 1 without an excessive increase in the size of the prediction sets. Third, our results on the credit card data set are even more clearly positive: we achieve 60% conditional coverage while the best alternative benchmark only achieves 53%; again, this improvement is obtained without an excessive increase in the size of the prediction sets. These results are far from being mixed, and they are consistent with the picture drawn from the simulated data experiments.\nRegarding the comparison of different methods with and without early stopping, this is actually very important and informative. Early stopping is a well-known technique which can serve as an implicit form of regularization and can often mitigate overfitting and lead to 1-guess predictions with higher test accuracy. Therefore, it is important to include it in our experiments for a fair comparison with all benchmarks. At the same time, early stopping is not a satisfactory solution for overconfidence. In fact, early stopping is typically implemented in such a way as to approximately maximize 1-guess prediction accuracy, not to mitigate overconfidence. This fact is consistent with our observations that early stopping does not seem to be as useful in combination with our method, and it can make the conditional coverage of models trained via cross-entropy even worse. In this sense, it is not true that “Reporting results without early stopping seems unnecessary [...] since the classification performance is significantly better with early stopping”. As shown in Tables 1 and 2, early stopping leads to (mostly) higher 1-guess accuracy for our benchmark methods, but at the same time it often makes their overconfidence problems even worse (lower conditional coverage). We hope that this clarifies the confusion. In any case, we are grateful for your question because these are important discussions which deserve more space in the manuscript. We will be glad to include them in a shorter format if given the opportunity to revise the paper. \n\nWeakness (4). We have indeed studied the effect of different values of the hyper-parameter lambda in our loss function, but we omitted some results in the interest of space. We appreciate the opportunity given to us here to provide some additional information about the effect of this hyper-parameter. Let us recall that the parameter lambda controls how closely our loss function resembles the traditional cross-entropy. 
If lambda=0, our loss function reduces to the standard cross-entropy; with that choice, we would expect our method to yield models with reasonably accurate 1-guess predictions, but without mitigating overconfidence. Towards the other end of the spectrum (lambda=1), the cross-entropy loss component of our loss function effectively disappears. This makes the learning progress much slower (because the novel conformity score function is noisier and harder to optimize) and it tends to lead to models that are not very accurate (high coverage but uninformatively large prediction sets). Empirically, we found that values of lambda between 0.1 and 0.3 work best in practice (see details in the appendix), but in principle this hyper-parameter could be tuned using hold-out data (possibly the same data used by other methods for early stopping).\nTo provide some concrete evidence in support of the above intuitive discussion, we have added additional figures in the new supplementary material “response_figures.pdf”; see NewFigures 2–4 therein. Those figures report on experiments with the synthetic data sets of Section 4.1 in which our hyper-parameter lambda is varied between 0 and 0.7 (we do not go above 0.7 because the training often becomes too slow and ineffective beyond that point). These results indicate that values of lambda between 0.2 and 0.3 lead to the smallest prediction sets with the highest conditional coverage. Recall from Section A3.1 that the experiments reported in our paper are based on lambda=0.2. We will include the new figures in the supplementary material if given the opportunity to revise it.",
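One natural reading of the role of lambda is a convex combination of the two loss terms; this is a hedged sketch (the exact form and the `conformity_loss` function are abstracted away, and the split names are placeholders):

```python
import torch.nn.functional as F

def combined_loss(logits_tr, y_tr, logits_ho, y_ho, conformity_loss, lam=0.2):
    """lam = 0 recovers plain cross-entropy; as lam -> 1 the cross-entropy
    component vanishes and only the conformity-based term remains."""
    ce = F.cross_entropy(logits_tr, y_tr)    # evaluated on one data split
    cu = conformity_loss(logits_ho, y_ho)    # evaluated on hold-out data
    return (1.0 - lam) * ce + lam * cu
```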
" Weakness (1). As mentioned in our discussion with Reviewer 9Dxm, we thought it might be a bit overwhelming to lay out all the technical details of our method implementation in the paper, and therefore we focused on explaining the key ideas. We also thought it would be unnecessary to write all implementation details in the paper because our code has been made available to the reviewers (and it will be published with the paper). In case you missed it, the code can be found in the “code” folder of the supplementary archive. Therefore, the reviewers and the readers do not need to figure out how to implement what is described in the paper solely by reading the main text. Everything is already implemented for them in the accompanying code. The code is well organized and documented, so it is not difficult to see what it does and modify it as desired. Simple usage examples are also provided in the “examples” sub-directory. \nRegarding the existing fast soft-sorting and ranking techniques which we utilize to implement our method, they are indeed a bit too complex to explain in a self-contained way within this paper. However, we cite the relevant papers and we refer the interested readers to those. For many readers however, it might be unnecessary to gain a full understanding of the inner workings of those techniques. The authors of [74] have made a really nice and user-friendly software package publicly available, and that is how we apply fast soft-sorting and ranking.\nThat being said, we will follow the suggestion and include more implementation details (as well as a longer, more technical version of Algorithm 1) in the supplement if given the opportunity to revise the paper. We are also working on an even more carefully documented release of the software, which will be made available as an update through GitHub.\n\nWeakness (2). The limitation of conformal inference discussed in the introduction of our paper is not that a post-hoc calibration step is required to mathematically guarantee valid marginal coverage. The limitation discussed in the introduction is that the typical approach in the literature consists of carrying out the training phase in a way that is completely unaware of the subsequent post-hoc calibration. As a result of this lack of coordination, the output prediction sets may have unnecessarily low conditional coverage. To criticize this standard two-step approach as inefficient is not the same as to suggest that all possible two-step approaches are inefficient. To the contrary, the contribution of this paper is precisely that of developing a more coherent and coordinated two-step training/calibration conformal prediction framework.\n\nAs demonstrated in this paper, our new training method leads to models whose predicted probabilities are naturally better calibrated and which lead to smaller prediction sets with higher conditional coverage when applied in combination with post-hoc conformal calibration. As also discussed in our response to reviewer Xxp1, post-hoc conformal calibration is relatively less crucial for models trained with our method compared to models trained through standard means. To illustrate this point, we have added several figures (NewFigures 5-9) in the new supplementary material (response_figures.pdf) which summarize the results of new numerical experiments conducted without post-hoc calibration. 
These results demonstrate that prediction sets computed by models trained with our method are closer to having valid coverage compared to models trained by other means, even if no post-hoc conformal calibration is performed. These new results will be integrated into the main paper if we are given the opportunity to revise it. While such observations speak positively as to the performance of our method, we do not believe they would justify skipping the post-hoc calibration step, and indeed we never advocate doing so. To the contrary, we believe it is generally a good idea to perform post-hoc calibration regardless of how the model is trained because that is the only currently available solution to obtain rigorous finite-sample mathematical guarantees. Our goal in this paper is simply to provide methods to train models that can be later calibrated as smoothly and efficiently as possible.
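Returning to the fast soft-sorting and ranking techniques referenced under Weakness (1), below is a minimal usage sketch. We hedge that the package name and signature shown (`torchsort`, one public implementation of fast differentiable sorting and ranking) are an assumption for illustration, not necessarily the exact package we used:

```python
import torch
import torchsort  # assumed package providing fast differentiable ranking

scores = torch.tensor([[0.3, 0.1, 0.9, 0.5]], requires_grad=True)
soft_ranks = torchsort.soft_rank(scores, regularization_strength=0.1)
# soft_ranks is a differentiable relaxation of the hard ranks [2, 1, 4, 3],
# so rank-based loss terms can be optimized by ordinary gradient descent.
soft_ranks.sum().backward()
print(soft_ranks, scores.grad)
```

The accompanying code in the supplementary archive shows exactly how we use these operations inside our loss function.",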
" Weakness (1). Thank you for pointing out that some of the notation in Section 2.2 can be clarified. We agree that right now it relies too heavily on Appendix A1, and even so there are some discrepancies which may be confusing. We will be happy to clear this up and make Section 2.2 self contained if given the opportunity to revise this manuscript. \n\nWeakness (2). We are glad to read you found the experiments demonstrating increased robustness to covariate-shift interesting (e.g., Figure A5). Your suggestion to include additional covariate-shift experiments involving the CIFAR-10 data set is also very interesting. It is regrettable that we don’t have enough time to carry out those experiments within the short time window allowed for this rebuttal phase, but we have managed to perform new experiments that are very similar to what you suggested. The approach we followed was faster to implement than what you suggested because it didn’t involve re-training the models we saved, but it is very close in spirit. What we did is that we applied the pre-trained models from our paper to a modified CIFAR-10 test set in which the percentage of corrupted images is varied as a control parameter. The results are shown in NewFigure 10, within the new supplementary document “response_figures.pdf”. These results show that our method is more robust to covariate shift compared to all alternative benchmarks, consistently with the results in Figure A5 pertaining synthetic data. We will insert this figure into the paper if given the opportunity to revise it. We hope this answers your question satisfactorily.\n \nWeakness (3). Figure A6(a) shows that the conditional coverages obtained with our method and with the focal loss decrease as the number of classes increases, while those of the other benchmark methods remain more or less constant. Meanwhile, Figure A6(b) shows that the corresponding size of the prediction sets increases very rapidly for all methods except ours. These results need to be interpreted together, because conditional coverage does not mean much by itself without taking the size of the prediction sets into consideration. In fact, the trivial prediction sets which always include all possible labels would have perfect 100% conditional coverage but would also be completely useless. Now, that being said, do we have a good theoretical understanding of why different methods seem to find different trade-offs between conditional coverage and size of the prediction sets depending on the number of possible labels? Not really: this is an interesting question which may need to be investigated in more depth by future work. We will better highlight this subtlety in the paper if given the opportunity to revise it; thank you for pointing it out. \n\nAt the same time, does this curious phenomenon reflect a weakness of our method? Hardly so, we argue. According to Figure A6, the relative drop in conditional coverage that we experience with 12 labels is not huge (64% vs 70% of hybrid and cross-entropy, out of target 90%), but difference in the size of the prediction sets is very large (3.75 vs 7.75 of hybrid and cross-entropy, out of 12 possible labels). It is reasonable to imagine that a prediction set with less than 4/12 labels could be much more informative than one with almost 8/12, even if the conditional coverage is a little lower. 
Of course, it would be nice if we could always outperform all benchmarks with respect to every meaningful metric, but the relevant metrics are often competing with one another, and therefore that is quite an unrealistic goal. In light of this discussion, we hope the reviewer can agree that, by any reasonable holistic measure, our method does quite well in Figure A6 (even though we do not beat the benchmarks as thoroughly there as in other settings; e.g., Figure A5).\n\nQuestion (1). We thought it might be a bit overwhelming to lay out all the technical details of our method implementation in the paper, and therefore we focused on explaining the key ideas. We also thought it would be unnecessary to write all implementation details in the paper because all of our code has been made available to the reviewers (and it will be published with the paper). In case you missed it, the code can be found in the “code” folder of the supplementary archive. Therefore, the reviewers and the readers do not need to figure out how to implement what is described in the paper solely by reading the main text. Everything is already implemented for them in the accompanying code. The code is well organized and documented, and it is not difficult to see what it does and modify it as desired. Simple usage examples are also provided in the “examples” sub-directory. That being said, we would be happy to include more implementation details in the supplement if given the opportunity to revise it. We are also working on an even more carefully documented release of the software, which will be made available as an update through GitHub.
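As referenced under Weakness (3) above, here is a minimal sketch of how conditional coverage and prediction set size can be reported together; the function name is illustrative and this is a simplification of our actual evaluation code:

```python
import numpy as np

def coverage_and_size(pred_sets, labels):
    # Coverage and average set size must be read together: trivial sets
    # containing every label reach 100% coverage but are useless.
    covered = np.array([y in s for s, y in zip(pred_sets, labels)])
    sizes = np.array([len(s) for s in pred_sets])
    return covered.mean(), sizes.mean()

print(coverage_and_size([{0, 1}, {2}, {0, 1, 2}], [1, 2, 0]))  # (1.0, 2.0)
```

Conditional coverage is estimated analogously, by restricting the average to subgroups of test points.",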
" Question (1). There are three main reasons why we do not reach accuracy above 90% with Resnet18 on CIFAR-10: lack of data augmentation, smaller training sample size, and the presence of corrupt training images.\n\nThe first reason is that we do not utilize, for simplicity, all the different data augmentation techniques applied in the paper mentioned by the reviewer. As prior works have demonstrated empirically, data augmentation can be a very effective strategy to boost predictive accuracy in image classification tasks. For example, Shorten and Khoshgoftaar (2019) report that the Resnet18 accuracy on CIFAR-10 without data augmentation is only about 89%. This is still a little higher than ours, but this remaining gap can be explained by reasons 2 and 3, discussed below. Here, let us just emphasize that data augmentation involves the very definition of the available observations, and thus we see it as a somewhat orthogonal issue to that of designing the loss function and learning algorithm. There is no particular reason why the conformalized training ideas discussed in this paper could not in the future be applied in combination with data-augmentation techniques, and in fact we suspect that it might help to do so. However, it would not be completely straightforward to add image augmentation in this paper. First, it would excessively lengthen the manuscript and complicate the comparison with different methods, because it would result in lots of different combinations of training algorithms and data augmentation techniques to consider. Second, the theory behind conformal inference typically assumes all data points to be exchangeable with one another, which is not necessarily true after data augmentation. This issue can likely be addressed in an effective and theoretically rigorous way, but the topic deserves sufficient attention and care that it is best left to future work. We will emphasize this exciting opportunity for further research if given the opportunity to revise this manuscript.\n\nThe second reason for lower accuracy is that we train on a smaller sample size. We set aside a significant number of images for early-stopping validation and for post-hoc calibration. These hold-out data sets are useful within the scope of our numerical experiments and they necessarily limit the sample size available for training, but they do not point to a limitation of our method.\n\nThe third reason is that we are training all models on data containing a fraction of heavily corrupted images. It is not too surprising that these corrupt training images end up negatively affecting the test accuracy of all methods. However, as we discussed above, corrupt images are useful to add extra aleatory uncertainty, which makes our problem more interesting. Given that there are many other real-world applications with intrinsically noisy data (aleatory uncertainty), this characteristic of our partially corrupted CIFAR-10 data set is not unrealistic and provides an informative demonstration of the importance of reliable uncertainty estimation.",
" Weakness (4). It is not accurate to say that “uncertainty estimation is particularly important for tasks in which we don't have much data (and hence base models are poorly trained).” As discussed with reviewer Xxp1, there are two types of uncertainty: epistemic and aleatory. Epistemic uncertainty can be eliminated by increasing the training sample size or the flexibility of the model. If this were the only type of uncertainty in data science, the premise of your comment would be correct. However, our method is especially designed to deal with aleatory uncertainty. Aleatoric uncertainty refers to intrinsic randomness in the outcome to be predicted that is due to unmeasured variables, which cannot be so easily eliminated. \n\nAs explained in the introduction, overconfidence is an especially concerning problem in applications with significant aleatoric uncertainty, regardless of how big the data set is. Our method addresses overconfidence with a novel training algorithm and loss function that are better equipped to take advantage of the available data in order to accurately capture uncertainty. Of course our method tends to perform relatively better when more training observations are available, but this does not mean uncertainty always disappears from large data sets. Think of Figure 1: none of the practical methods considered there achieves perfect conditional coverage, even when the training data set is large, because the constant aleatoric uncertainty is hard to capture. However, the conditional coverage obtained with our method visibly increases with the sample size (which is what we would always like to see), while the conditional coverage obtained with the benchmarks does not increase as quickly (because those methods were not really designed to capture uncertainty). Same story in Figures A8 and A12. Further, the results with CIFAR-10 shown in Figure A24 are even more striking: the conditional coverages of all methods except ours visibly decrease as the training sample size increases! \n\nIn conclusion, we hope to have clarified that uncertainty is not necessarily a small-sample size issue, and that data splitting is not necessarily a weakness. Of course, it would be nice to have a new method that can achieve even better results without splitting the data, but what matters here is that our method already performs better than existing alternatives which do not split the data. ",
" Weakness (3). While our method splits the training set into two disjoint subsets (one for each component of our loss function), the comparison with the alternative benchmarks is fair because we apply all of them to the full training set without any data splitting, precisely as suggested by the reviewer. The only exception is the hybrid conformal benchmark, which utilizes the same data splitting strategy as our method. We recognize that this important point should have been mentioned more explicitly in Section 4. We thought this was implicitly clear, but it wasn’t. We will remove this ambiguity if given the opportunity to revise the manuscript.\n\nNow, an interesting follow-up question related to this reviewer’s comment is whether our novel loss function would work equally well (or perhaps even better) if we did not split the training data. Although this alternative implementation was not presented in the paper nor researched in great depth by this work, our intuition strongly suggests that the data-splitting approach is the correct one. Our guiding theoretical result (Proposition 1) says that the conformity scores of a well-calibrated model should be uniformly distributed when evaluated on independent (or hold-out) data. The “hold-out” part of this statement is crucial. Having uniformly distributed conformity scores on the training data is perfectly consistent with the model being grossly overfitted and overconfident, and thus it says nothing about whether the predictions for future test points will be more or less well calibrated. This is why we prepare the model to produce well-calibrated predictions for future test points by training it to produce approximately uniform conformity scores for hold-out data. Data splitting, in combination with a suitable training algorithm, thus becomes a strength, not as a weakness. This explains why our method can achieve better performance compared to its benchmarks despite the more limited amount of data available to the cross-entropy component of its loss function. \n\nNext, all models considered in our experiments have been trained using the same early stopping criteria and the same validation data sets. The lambda parameter (chosen for simplicity to be fixed) could also be tuned using the same validation data set utilized for early stopping, so that no method has access to any additional data source and all comparisons are fair. \n\nFinally, regarding the last reference pointed out in this question, it should be clarified that those authors aim to improve marginal calibration, and that is very different from our goal. Marginal calibration can be achieved exactly with post-hoc conformal calibration, but it is not fully satisfactory by itself because it does not rule out the possibility that the model may be very overconfident for some test data points and under-confident for others. Our goal is much more ambitious: we aim to increase the empirical conditional coverage, while we still rely on post-hoc conformal calibration to mathematically guarantee valid marginal coverage in finite samples. Regarding the choice of the focal loss as a benchmark for our experiments, we would like to refer to our answer to a related question by reviewer Xxp1. The focal-loss has been applied quite widely and it could arguably be seen as representing the state-of-the-art. In particular, it was shown by Mukhoti et al. (2020) to outperform label smoothing (Müller et al, 2019) and other benchmarks in a variety of settings. 
Of course, it is possible that other existing methods may in some cases perform better than our chosen benchmarks, but it would be impractical and potentially confusing to compare the performance of our method to that of all existing alternatives. This is especially true because our method already distinguishes itself through its novelty and original focus on achieving approximate conditional calibration within a conformal inference framework.
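As referenced above under Weakness (3), here is a minimal sketch of the disjoint data split behind our loss function; the sample size and names are illustrative:

```python
import numpy as np

n_train = 1000
rng = np.random.default_rng(0)
idx = rng.permutation(n_train)
i1, i2 = idx[: n_train // 2], idx[n_train // 2:]  # two disjoint subsets

# At each gradient step, the cross-entropy term is evaluated on a batch from
# i1, while the uniformity term is evaluated on conformity scores computed
# from a batch drawn from i2, so those scores behave like hold-out scores,
# as required by Proposition 1.
```

This mirrors the hybrid conformal benchmark, which utilizes the same data splitting strategy.",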
" Weakness (1). The 4% gain in conditional coverage for corrupt CIFAR-10 images (Table 1) is not as large in absolute terms as the corresponding gains in other applications, but it is not a weakness. The target coverage here is 90%, and the hybrid method with early stopping already performs well at 86%. Our 4% gain, from 86% to 90%, bridges 100% of the gap between the desired coverage and the empirical coverage achieved by our top competitor. The hybrid method works quite well on the CIFAR-10 data, but not on the credit card data; see Table 2. The latter are more interesting for our purposes and offer more room for improvement because they have more aleatoric uncertainty, see answer to reviewer Xxp1.\n\nRegarding the second part of the question, the credit card data set is both noisy (high aleatoric uncertainty) and imbalanced—only approximately 22% of the labels are equal to 1. Therefore, accuracy is not the most meaningful measure of performance: the trivial model which predicts the label ‘0’ for all samples achieves the highest accuracy. Instead of accuracy, F-scores would be more informative. If you look at NewFigure 1 in the new supplementary document “response_figures.pdf”, you will see the model trained with our method achieves the highest F-score. We will insert this figure into the paper and include a discussion.\n\nWeakness (2). Our method is more expensive to train compared to simpler alternatives, but this should not be surprising. Uncertainty is a subtle concept, and training a machine to learn how to calibrate its own uncertainty from data, without relying on any parametric model assumptions, is a fundamentally challenging task. In fact, it is already remarkable that Resnet18 models can be trained to better understand uncertainty quite successfully on data sets of the size considered in this paper using a learning algorithm as sophisticated as ours. This would not have been possible without the recent ground-breaking advances in fast differentiable sorting and ranking [73,74]. That field is still developing, so it is reasonable to anticipate that uncertainty-aware machine learning models (both our current proposal and more sophisticated future developments) will become cheaper to train with time. \n\nHaving acknowledged that our learning algorithm is more expensive than simpler alternatives, the interesting question is whether its benefits can outweigh its costs. Our answer is a confident yes. First, our extensive numerical experiments demonstrate that the benefits are quite meaningful. We have already explained in the answer to the previous question how the reviewer’s concerns that our method leads only to a small 4% increase in conditional coverage for the CIFAR-10 data and a reduction in accuracy for the credit card data are due to simple misunderstandings. The increases in conditional coverage are actually significant, even for CIFAR-10, as explained above. Similarly, the reduction in accuracy for the credit card application is a fictitious artifact of the intrinsic noise and imbalance in that data. Second, the computational cost of training our method is not extraordinarily high. We have chosen to experiment with the Resnet18 in this paper mostly because it is a very common architecture. Further, the relatively small size of this model allowed us to conduct a thorough evaluation of our method with hundreds of independent experiments with relatively limited academic resources. 
\n\nIn conclusion, training complex classification models with a well-calibrated understanding of uncertainty is an important problem, which often justifies spending some additional resources on. Will every deep classifier be trained with our method in the future? Realistically, that seems unlikely. Even better methods could be developed by others relatively soon, and some practitioners may just not care much about capturing uncertainty, either because it is not a huge concern in their field, or because they are already working on a very tight time/computational budget and thus they cannot afford to do much about it. But it is also true that many people care enough about machine learning uncertainty to be potentially willing to apply more complex algorithms to deal with it. We have discussed how there is a large and rapidly growing literature on the subject. There are also many applications of deep learning to fields in which the data are noisy and there are lots of practical, legal, or moral reasons why the problems caused by overconfident models need to be addressed urgently.
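As referenced in our reply to Weakness (1), here is a small numerical illustration of why raw accuracy is misleading on data as imbalanced as the credit card set; the numbers are illustrative and we assume scikit-learn is available:

```python
import numpy as np
from sklearn.metrics import f1_score

y_true = np.array([0] * 78 + [1] * 22)  # roughly 22% positive labels
trivial = np.zeros_like(y_true)         # always predict the majority class
print((trivial == y_true).mean())       # accuracy 0.78, despite learning nothing
print(f1_score(y_true, trivial, zero_division=0))  # F1 = 0.0
```

This is why NewFigure 1 reports F-scores rather than raw accuracy.",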
" Strengths (1): The hold-out data for post-hoc calibration are distinct from those used during training. We do not recycle data for these two tasks because otherwise the final prediction sets would not be guaranteed to be well-calibrated. This is explained in Section 4, but we can clarify further. That being said, our method is designed to train models that already are approximately calibrated, which tends to make post-hoc calibration less crucial. We have added NewFigures 5-9 in the supplementary file “response_figures.pdf to illustrate this. These results show our method leads to prediction sets with higher coverage compared to models trained by other means, even without post-hoc calibration. This speaks positively as to the performance of our method, but we do not believe it justifies skipping post-hoc calibration, and indeed we never advocate doing that.\n\nWeaknesses (1). Yes, our solution mitigates overconfidence. Our loss targets overconfidence by reducing the statistical deviation from uniformity of the conformity scores. We have provided theoretical justification for this solution (Proposition 1) and extensive evidence of its efficacy (Section 4 and supplement). Other reviewers found our method to be well justified and thoroughly validated, but we are happy to clarify further. The link between the uniformity of the scores and conditional coverage is in Proposition 1, while the link between overconfidence and coverage is in the introduction. We can explain this in more detail. In particular, we can add references in Section 3.1 to Figures A3-A4 and A16-A23. Recall that Figures A3-A4 and A16-A23 demonstrate how models with lower conditional coverage are associated with sub-uniform scores. By contrast, better calibrated models lead to prediction sets with higher conditional coverage and more uniform conformity scores. The reduction in overconfidence is seen in Figure A3: the histograms for the benchmarks are shifted to the left, while ours are closer to uniform. Further, the probabilities estimated by our models are more accurate (Figure A4). Finally, the results on CIFAR-10 confirm our method is less overconfident (Figure 2).\n\nWeaknesses (2). Our loss function is based on novel ideas: we take inspiration from conformal inference, training a model which can later be utilized to construct more reliable and informative prediction sets with higher conditional coverage. It is true that other methods have been proposed to mitigate overconfidence, and the richness of this literature speaks to the importance of the problem. We acknowledged the literature, and we extensively compared our method to some representative benchmarks. It is of course possible that: (a) we might have accidentally omitted a relevant reference; (b) other methods may sometimes perform better than our “state-of-the-art” benchmarks. Point (a) is easy to address: we can add missing references such as Müller et al, 2019. Point (b) is less clearly an issue, as it would be impractical and confusing to empirically compare our method to all existing alternatives. Other reviewers commented we have many benchmarks. We looked at the focal-loss because it is applied quite widely and it was shown by Mukhoti et al. (2020) to outperform label smoothing (Müller et al, 2019) and other methods.\n\nWeaknesses (3). Short answer: (1) we have already considered diverse data sets, which other reviewers found satisfactory; (2) there is relatively little information to be gained from those extra image data sets. 
We expand upon this below.\nRecall that our goal is to train uncertainty-aware classifiers, and that there are two types of uncertainty: epistemic and aleatoric. Epistemic uncertainty may be due to insufficient data, poor training, sub-optimal architecture, or a combination of those. This is what ML has traditionally focused on, as it tends to dominate traditional image classification tasks. Training flexible networks on large data sets is effective at removing epistemic uncertainty. If this were the only type of uncertainty, we could have gained more insight from CIFAR-100 or imagenet. However, we care more about aleatoric uncertainty: the intrinsic randomness due to unmeasured variables, which cannot be eliminated so easily.\nOverconfidence is especially concerning in applications with aleatoric uncertainty, and our method is meant to address that. In Section 4.1 we work on synthetic data with aleatoric and epistemic uncertainty, the proportions of which are varied (Figures A7, A11, A15). In Section 4.3 we work on credit card data with aleatoric uncertainty. But the CIFAR-10 data in Section 4.2 mostly involve epistemic uncertainty. This is why we have introduced aleatoric uncertainty by corrupting some images (Figure 2). The same could be done with CIFAR-100 or (Tiny-)imagenet, but it is unclear whether there could be much insight to be gained from such an exercise. In a revision, we can further expand the discussion of epistemic and aleatoric uncertainty.
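As referenced in our reply to Weaknesses (1), here is a minimal sketch of the kind of uniformity diagnostic behind the histograms in Figures A3 and A16-A23; the synthetic scores and the Kolmogorov-Smirnov test below are illustrative choices, not our exact procedure:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
scores = rng.random(500) ** 1.5       # left-shifted scores mimic an overconfident model
ks = stats.kstest(scores, "uniform")  # distance of the empirical CDF from Uniform(0, 1)
print(ks.statistic, ks.pvalue)        # a large statistic flags a departure from uniformity
```

By Proposition 1, a well-calibrated model should make this deviation small on hold-out data.",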
" In this paper, a new training strategy which integrates conformal prediction in training stage is proposed. The proposed training algorithm uses an additional regularization term that encourages the conformity scores to follow a uniform distribution to improve the performance of final conformal prediction. Strengths:\n\n1. Previous training stage doesn't consider the performance of conformal prediction, and hence results in sub-optimal problem. The proposed method addresses this issue by encouraging the conformity scores to follow a uniform distribution.\n\n2. The proposed differentiable approximation for the empirical CDF of the conformity score could be efficiently implemented. The validation dataset could be used in both training (for calculating $l_u$) and post-hoc calibration.\n\n3. The experiments with synthetic data show some positive points of the proposed method, and also give some intuitive analyses.\n\nWeaknesses:\n\n1. The authors mentioned that \"The idea is to mitigate overconfidence\" in Abstract. Does the proposed method (i.e., adding conformal loss term $l_u$ in loss function) directly mitigate the over-confidence issue? \n\n2. The proposed method could be a regularization term on the loss function. What about the relationship between it and other regularization methods (which have been used to address the calibration problem) like label smoothing?\n\n3. Other real-world datasets should be used in experiment part. This paper only uses CIFAR-10 for evaluating the effectiveness of the proposed method. As it is not too hard to classify, image augmentation processes are used to generate harder samples. Why not use datasets like CIFAR-100 or (Tiny-)imagenet for better evaluation?\n See weaknesses. The authors have adequately addressed the limitations and potential negative societal impact.",
" This paper proposes a differentiable loss function for training conformal predictors. In conformal prediction, a popular scoring function for classification sets is the APS method of Romano et. al. [1], as if the learned conditional distribution $\\hat \\pi_y(x) \\approx \\pi_y(x) := \\mathbb{P}(Y = y \\mid X = x)$ is exact, then the constructed prediction sets are the smallest randomized prediction sets with the desired conditional coverage, i.e., satisfying $\\mathbb{P}(Y_{n+1} \\in C(X_{n+1}) \\mid X_{n+1} = x) \\geq 1 - \\alpha$. Typically, the way to go about solving this problem is to first learn $\\hat \\pi_y(x)$, and then plug it into the APS method (which is then calibrated using normal conformal techniques). Unfortunately, as the authors point out, predictions based on poorly pre-trained $\\hat \\pi_y(x)$ are hard to correct once $\\hat \\pi_y(x)$ is fixed. This paper links the two steps of conformal prediction into one joint loss, with a focus on improving conditional coverage via better recovering the oracle behavior of the true $\\pi_y(x)$ combined with APS. This joint objective is composed of the normal cross-entropy classification loss, together with a regularizer that encourages conformal scores on held-out data to be uniformly distributed (as they would be if $\\hat \\pi_y(x) = \\pi_y(x)$). The method is empirically validated on both synthetic and real datasets, though the gains are fairly minor depending on the setting.\n\n[1] Classification with Valid and Adaptive Coverage. https://arxiv.org/abs/2006.02544. === Strengths ===\n\n- The paper is well-written and well-evaluated. The method is also well-motivated. In particular, I appreciate the focus on conditional coverage rather than set size, as set size can be gamed when only subject to marginal coverage constraints.\n\n- The proposed algorithm is simple (at least when relying on pre-existing differentiable sorting algorithms). Proposition 1 yields good intuition (and also interestingly appears somewhat related to the fact that the distribution of $Z = F_{Y|X}(Y)$ is uniform for continuous r.v.s $Y$ with CDF $F_{Y|X}$, but here by construction for discrete $y$).\n\n- Though I have some questions about the empirical effectiveness on real data (see below), at least under certain settings the method can lead to smaller prediction sets with better conditional coverage---two important and impactful qualities for real-world deployment of conformal algorithms.\n\n=== Weaknesses ===\n\n- When factoring in early stopping, the empirical gains on real datasets vs. baselines (i.e., Hybrid) appear minor at best (e.g., the best seems to be a +4% gain in corrupted coverage on CIFAR-10 when comparing fully-trained conformal with early-stopped hybrid?). The raw (top-1) accuracy also seems to be negatively on the credit card default task (whereas standard conformal methods that don't modify the base conformal score don't affect the top-1 accuracy).\n\n- This is addressed in the paper, but the proposed method is quite expensive to train (2x compared to cross-entropy loss, as reported in Section 5). As a result, I'm not sure how well this would scale to larger models than a ResNet18. Given the somewhat small gains in efficiency/conditional coverage, the impact of this approach seems likely to be somewhat limited. (That said, I imagine that the \"Hybrid method\" of [1] is similarly slow, as it also involves a differentiable sort.)\n\n- One thing which bothers me is the data inefficiency of the proposed approach. 
It seems rather wasteful to only use labeled data in $\\mathcal I_2$ for regularization. If the main goal is simply to get a better estimate of $\\pi_y(x)$ by reproducing its behaviour when plugged into conformal APS (e.g., a uniform score distribution), what if we just use that data to train a bigger, modern model (e.g., see models in [2]), optionally with better regularization (e.g., calibration objective in [3])? Or, since the method also requires validation of $\\lambda$, early stopping on the same validation set (as done in the experiments, which improves baselines substantially) also seems to be a fair data-wise comparison. \n\n- Likewise, from the paper experiments, benefits become more pronounced with more data; however, one could argue that uncertainty estimation is particularly important for tasks in which we don't have much data (and hence base models are poorly trained). Note that this data inefficiency is also shared by [1], but the objective in [1] can depend on properties of the set $C$ that may be orthogonal to conditional coverage (and recovering $\\pi_y(x)$ may not yield the oracle).\n\n=== Minor ===\n\n- Notation: In several places $|\\mathcal I_*|$ should be used in place of $\\mathcal{I}_*$ (e.g., L165). \n\n[1] Learning Optimal Conformal Classifiers. https://arxiv.org/abs/2110.09192.\n\n[2] Revisiting the Calibration of Modern Neural Networks. https://arxiv.org/abs/2106.07998.\n\n[3] Trainable Calibration Measures For Neural Networks From Kernel Mean Embeddings. https://proceedings.mlr.press/v80/kumar18a.html. - A top-1 accuracy in the low 80% range seems pretty low for a ResNet-18 on CIFAR-10 (see, e.g., https://github.com/kuangliu/pytorch-cifar). This makes me question whether the cross-entropy baseline is indeed well trained. Can you explain this difference? The authors have adequately addressed the limitations and potential societal impact of their work.",
" The authors propose a method of combining the learning and calibration phases\nof conformal inference, reducing overconfidence in the ML model, and giving\nsmaller prediction sets. They minimize a new loss with minimizes the\ndiscrepancy in conformity scores between the current model and an unknown\noracle.\n # Strengths\n\n- The method id well justified and comes equipped with in depth analysis\n- The experiments verify the claims made within the text and also provide ablations on the proposed components.\n- The experiments are thorough.\n\n# Weaknesses\n\n- Equation three seems to give incomplete infromation. $u$ and $\\tau$ are\ndefined but there is no mention of how they contribute to the function\n$\\mathcal{S}$. It is then defined in the appendix, but A2 in the appendix takes\n$t$ (not $\\tau$ but I guess they are the same?) as an argument and equation 3 takes $1 - \\alpha$ leaving $\\tau$ unused.\nThis needs to be cleared up.\n\n- The covariate shift experiments occur on the synthetic data. It would be nice\nto see if these results hold for larger datasets as well. For example, what\nwould happen if a model were trained with regular CIFAR10 data with no\ncorruptions in either the training or calibration set, and then tested with the\nrandom erase corruptions or CIFAR10-C [1] test set.\n\n- L237: states that figure A6 shows typically higher conditional coverage as\ncompared to all benchmarks. It appears there is actually a clear trend which\nshows that the proposed model performs worse on larger numbers of classes, yet\nthere is no mention of this. Can the authors provide any explanation for this?\n \n# References\n\n[1] https://github.com/hendrycks/robustness\n - I am not sure there is enough information in the current version of the paper in order to implement the algorithm. Section 3.3 is easy to intuitively understand, but it leaves me confused as to where to go in order to actually implement what is written in the paper. Could a more detailed explanation of the exact equation needed to reproduce $\\ell_u$ be included?\n\n# Summary\n\nOverall, I think the contribution is solid, but there can be some improvements made as reflected in my comments and questions. The authors have highlighted the computational complexity as a limitation. I am not aware of any adverse societal impacts of their work.",
" The paper combines ideas from conformal prediction and differentiable ranking/sorting to develop a loss function that improves the predictive uncertainty of a classifier by training it to make predictions with a desired conformity score. The paper reports competitive coverage and conformal prediction set sizes on synthetic data, CIFAR10 and a credit card default dataset. Strengths:\n* The method is novel as far as I can tell (although I am not familiar at all with the related literature on frequentist approaches for uncertainty estimation) and appears to be technically sound.\n* The technical setup is well-structured and clear for the most part (although see exceptions).\n* The proposed method seems to perform the best on the synthetic data and is competitive on real data.\n\nWeaknesses:\n* I found section 3.3 quite vague and think the paper would benefit from recapping the specific techniques it uses from the referenced papers. As it stands, the text is not self-contained and I don't think I'd be able to implement the method without working through other works. The algorithm box does not help in this regard as it is quite wordy rather than technical and concise.\n* I was fairly surprised that the proposed method appears to require a post-hoc calibration step (lines 199-201, 259, 305), even though the introduction criticizes this two-step approach as a limitation of conformal learning.\n* The empirical results on the real data seem to be somewhat of a mixed bag with many close scores. Reporting results without early stopping seems unnecessary to me since the classification performance is significantly better with early stopping (although I am surprised by the need for it; see questions).\n* There is no ablation study on the hyperparameter for the weight of the conformal loss. I'd imagine that this parameter allows for trading off more accurate vs better calibrated predictions, but it would be helpful to see this confirmed experimentally. I'm overall not too confident in my review due to being unfamiliar with the related literature and commonly used experimental setups and results, but I'd appreciate the following questions being clarified:\n\n* Are you indeed using a two-level procedure even with your conformal prediction loss? If yes, is it possible to use the loss as a one-level approach and if so, how does it perform? I had a look at the appendix and did not find anything there, but might have missed it.\n* Do you have an explanation for where the need for early stopping arises? I can imagine that with an added regularizer this can happen, but the bad test accuracy for the cross-entropy baseline on CIFAR10 seems extremely strange to me, I don't think I have ever seen the test accuracy of a properly trained Resnet go down over time on that dataset. Is it due to the data corruption? That being said, it would be nice to have results trained on clean data as a baseline to see that the implementation is reasonable. \n* How do different choices of $\\lambda$ affect performance?\n\nOther (minor) notes:\n* I'm a bit uncomfortable with some statements in the introduction. In particular, neural nets trained with proper scoring rules should be calibrated in the infinite data limit. Intrinsically noisy data is not problematic, as this would be reflected in the training set and lead to less confident predictions. 
The problem in benchmarking on datasets such as CIFAR10 is the opposite, namely that they are highly curated and there is hardly any noise in the data, but once the underlying data distribution shifts the predictions become overconfident. I'm probably making some statements here myself that are not perfectly accurate either, but I'd encourage having another pass over the first paragraph of the introduction to ensure that all statements are technically precise and correct.\n* Exchangeability of the data seems to be a core assumption; it would be nice to have an evaluation of the method under some shift of the test distribution on the real data, e.g. using different degrees of corruption as in (Ovadia et al., 2019. Can you trust your model's uncertainty? Evaluating predictive uncertainty under dataset shift. In NeurIPS).\n* Table captions should go above the tables (unless the style instructions have been changed). The computational overhead (roughly doubled training time in the experiments) is discussed explicitly. I did not see discussion around the reliance of the proposed loss on a post-hoc conformalization step."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
2
] | [
"ky-KOEzDU0_",
"luTqRfaHUX",
"TOk5gBkQyGJ",
"9tDeUTly8zI",
"zoWfSQH0yFK",
"U9iDarhnHrg",
"yRVBlTBy6Mm",
"bPKi8Thg6HV",
"5-Mf2nPzZC",
"5D1vE2-isEj",
"DrrdSoQVjb6",
"ZAbPk_ZEsu",
"iAgcSKyV1GN",
"8y9Qx6AyrM3",
"AhPtj6ROenb",
"pPEsGlpA-wZ",
"QPD2xxvu1MZ",
"f3XAPkRJAO_",
"nips_2022_NaZwgxp-mT_",
"k0Sob07SnHi",
"2g4yW8aMdQE",
"6NqaD4rLbOX",
"_e2dThOs-WE",
"pPEsGlpA-wZ",
"Ef1wFM7Mp98",
"fDcaB4Bn8MC",
"yLYWRAsKjLs",
"hTfgQmJgaQx",
"9nPmxpTZO0F",
"ZAbPk_ZEsu",
"iAgcSKyV1GN",
"nips_2022_NaZwgxp-mT_",
"nips_2022_NaZwgxp-mT_",
"nips_2022_NaZwgxp-mT_",
"nips_2022_NaZwgxp-mT_"
] |
nips_2022_aPXMGv7aeOn | Compressible-composable NeRF via Rank-residual Decomposition | Neural Radiance Field (NeRF) has emerged as a compelling method to represent 3D objects and scenes for photo-realistic rendering.
However, its implicit representation makes it difficult to manipulate the models in the way that explicit mesh representations allow.
Recent advances in NeRF manipulation are usually restricted by a shared renderer network or suffer from large model sizes.
To circumvent the hurdle, in this paper, we present a neural field representation that enables efficient and convenient manipulation of models.
To achieve this goal, we learn a hybrid tensor rank decomposition of the scene without neural networks.
Motivated by the low-rank approximation property of the SVD algorithm, we propose a rank-residual learning strategy to encourage the preservation of primary information in lower ranks.
The model size can then be dynamically adjusted by rank truncation to control the levels of detail, achieving near-optimal compression without extra optimization.
Furthermore, different models can be arbitrarily transformed and composed into one scene by concatenating along the rank dimension.
The growth of storage cost can also be mitigated by compressing the unimportant objects in the composed scene.
We demonstrate that our method is able to achieve comparable rendering quality to state-of-the-art methods, while enabling extra capability of compression and composition.
Code is available at https://github.com/ashawkey/CCNeRF. | Accept | This paper presents a new NeRF method based on tensor decomposition. The method supports both compression and composability, while achieving similar results compared to standard NeRF models. The method does not use a neural network. Several reviewers found the paper easy to follow, the method novel & sound, and the comparisons comprehensive. Two reviewers mentioned the similarity between the proposed work and TensoRF. The rebuttal addressed most concerns and highlighted the differences between the two works. As TensoRF is a concurrent ECCV submission, the existence of TensoRF should not be used against the proposed work. The AC agreed with most of the reviewers and recommended accepting the paper.
| val | [
"Ik_iKtjrzE9",
"lQTGB3hyhxK",
"llF5ba0ciFh",
"Ihwt1rGrYqd",
"T19cyBMnQNc",
"BkGV6zM11d",
"-VABUxny1dt",
"Xzn4-PdvaEV",
"w2B90SL8YN",
"9HEzTid0moy",
"jGRqqJWY30",
"aXbDFvjTpW",
"pJ-TqZ6mTJH",
"OC89pgR7GFs",
"a3wgB6d1UB0",
"Ljk2nMqWNM5",
"7640BB9fdA6",
"xf4TatTslIS",
"DLds5Sz5WMH"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the answer. The authors have addressed my concerns and I will keep my score the same.",
" Dear reviewers, \n\nThank you all for providing valuable comments. The authors have provided detailed responses to your comments. Has the response addressed your concerns?\n\nIf you haven't, I would appreciate it a lot if you could reply to the authors’ responses soon as the deadline is approaching (Tues, Aug 9). \n\nBest, \n\nACs\n",
" **Q1**: TensoRF holds the low-rank approximation property.\n\n**A1**: By low-rank approximation, we refer to the optimal approximation of a high rank matrix with a low rank matrix (i.e., the Eckart–Young–Mirsky theorem of SVD), or at least a near-optimal one. General tensor rank decomposition (CP, Tucker) does not hold such property [1]. TensoRF learns a similar decomposition from data, and also does not hold the low-rank approximation property too. We discussed and demonstrated the performance degeneration when empirically truncating our baseline model that is similar to TensoRF in Figure 3. The major cause of such degeneration is the scattered distribution of rank importance (Figure 4). Therefore, we propose the rank-residual training to force the rank importance to concentrate on the lower ranks. **This rank-residual training is the crucial component to achieve near-optimal low-rank approximation.** \n\n**Q2**: This paper adds some straightforward tricks to improve TensoRF.\n\n**A2**: We consider that \"straightforward\" should not be considered as a disadvantage. Although these ideas seem straightforward, our work first adopted them to achieve dynamic compressibility and composability, which are important for practical applications of NeRF, but not well discussed in previous works.\n\n**Q3**: Change the CP decomposition to Tucker decomposition, which supports the controllability of the model size.\n\n**A3**: We are not changing to the Tucker decomposition, although the CP decomposition can be viewed as a special case of the Tucker decomposition. Instead, we use a mixture of the CP and Tri-plane (TP) decomposition. \n\n**Q4**: This paper has worse performance than TensoRF in terms of the model size and training time.\n\n**A4**: We emphasize again that our aim is not to achieve SOTA, but demonstrate how the extra compressibility and composability can help in practical applications of NeRF.\n\n\n[1] Tensor Decompositions and Applications, Kolda et. al.",
" 1. TensoRF hold the low-rank approximation property.\n2. TensoRF can be easily revised to support the controllability. Both CCNeRF and TensoRF are based on tensor decomposition. Their behaviors should be very similar.\n3. I acknowledge the rank-residual training strategy, but it is straightforward.\n4. TensoRF works well without a MLP renderer.\n\nThe core reason that I currently feel positive about this paper is that TensoRF does not get accepted to ECCV before the NeurIPS submission deadline.\n\nFrom my perspective, this paper adds some straightforward tricks to improve TensoRF:\n1. Change the CP decomposition to Tucker decomposition, which supports the controllability of the model size.\n2. Rank-residual training strategy.\n\nBoth tricks are straightforward. More imporatnt, this paper has worse performance than TensoRF in terms of the model size and training time.",
" Thanks for the response! We greatly appreciate your time and effort in the review and discussion.\n\nWe are not trying to overclaim our advantages, but want to point out some misunderstandings in the last response:\n\n(1) **TensoRF requires a full retraining to change the model size, because their tensor rank decomposition does not hold the low-rank approximation property**. Our contribution is the rank-residual training strategy to ensure the low-rank approximation property for the learnt decomposition, such that **we can compress the trained model without extra optimization**.\n\n(2) For composability, our motivation is to remove the MLP renderers used in hybrid methods including TensoRF. To achieve composition of different models, **TensoRF requires a shared MLP renderer for these models, limiting the potential to extend to other models. By removing the MLPs, our method gets rid of such restrictions in training.**",
" Yes, I have considered the things pointed by you as the paper's technical contributions. So I do not vote to reject this paper.\n\nActually, TensoRF can also dynamically compress a NeRF and adjust the levels of details, because it is also based on the tensor decomposition.\n\nIt seems that the compressibility of the proposed approach is not better than that of TensoRF. The composability of this paper should be the same as TensoRF in theory.\n\nPlease do not overclaim the advantages of the proposed method, which could harm this community.",
" Thanks for the response!\n\nAs acknowledged by the reviewer, we propose a novel method to dynamically compress a NeRF and adjust the levels of details without extra optimization.\n\nOur model further explores the low-rank approximation property based on TensoRF's tensor rank decomposition. The extra compressibility and composability come at the price of slightly worse performance (PSNR -0.77, or -2.3%), which is hard to discern in visual comparisons. ",
" Thank the authors for the detailed responses.\nMost of my concerns are resolved.\n\nIt is ok to accept this paper, but rejecting it would not be too bad, considering that it shares similar ideas with TensoRF on the tensor decomposition and has worse performance.",
" Thanks for the response! We would like to clarify the validity and practicality of our method as following:\n\n(1) The quality for novel view synthesis is well guaranteed when we decrease the model size. The proposed residual-rank learning reaches similar performance to the best-possible model at the same size (Figure 3). As pointed out by the reviewer, **our PSNR is only slightly lower (-0.77, or 2.3%) compared to TensoRF, which does not have our extra compressibility and composability**. **We demonstrate that this PSNR drop is already hard to discern in qualitative comparisons** (Figure 3 of the supplementary materials).\n\n(2) We claim that it's **unfair to simply compare the model size of our method with the vanilla NeRF**. The vanilla NeRF belongs to the implicit NeRFs, which have a small model size (5MB) but take a long time to train (36 hours). Instead, our method belongs to the explicit NeRFs which train much faster (20-40 minutes), at the cost of a larger model size (e.g., Plenoxels at 778MB, DVGO at 612MB, and TensoRF at 72MB). Furthermore, **our dynamic compressibility provides a novel way to alleviate the growth of model size , by allowing us to adjust the model size from a large range of without optimization (e.g., 2MB to 69MB for the HY-S model)**.\n\nIn conclusion, the extra compressibility and composability of our method does come at a price of performance, but we carefully discussed and analyzed these limitations in our paper. **As acknowledged by all reviewers, we propose a novel perspective to explore the dynamic compressibility and compositionality of NeRF representations.** Therefore, we still consider our contributions to outweigh the limitations, and sincerely hope the reviewer can improve the rating.",
" I do agree that the proposed method can dynamically adjust the model levels of detail without extra optimization, while as far as I know, there is no other method that can do this. Also, the method can better support the composition of NeRF. However, I still argue the validity of the method. Although the goal of this method is not to achieve a better quality of novel view synthesis, there should be some guarantee of synthesis quality as the model size decreases. But as I pointed out before, the PSNR value of the proposed method is slightly lower than that of TensoRF at a larger model size. Although compared with TensoRF, the proposed method does not use MLP, the rebuttal shows that the training time is longer than that of TensoRF. It should also be noted that vanilla NeRF can achieve good synthesis quality with a model size of only 5MB, while the Ours-HY-S model is already 68.9MB.",
" Dear AC and all reviewers:\n\nThanks again for all of your constructive suggestions, which have helped us improve the quality and clarity of the paper!\n\nSince the discussion phase has only one day left and we have not heard any post-rebuttal response yet, please don’t hesitate to let us know if there are any additional clarifications or experiments that we can offer, as we would love to convince you of the merits of the paper. We appreciate your suggestions. Thanks!\n",
" We thank the reviewer for the constructive suggestions. Below are our responses to the questions.\n\n**Q1: Training time of the proposed approach.**\n\n**A1**: The training of our CP, HY-S, and HY model costs 29, 30, and 41 minutes respectively, similar to TensoRF [1]. The convergence speed is considerably faster compared to the vanilla NeRF and Mip-NeRF 360, which take hours or days to converge. This speed advantage is mainly from the no-neural-network NeRF representation we use.\n\n**Q2: Advantages compared to NeRF or Mip-NeRF 360.**\n\n**A2**: (1) As shown in Table 2, our HY-S and HY model achieve better PSNR compared to NeRF on both datasets, and our CP model achieves smaller model size compared to NeRF. (2) Our model supports dynamic compression and natural composition, while the vanilla NeRF and Mip-NeRF 360 cannot be adjusted or composed after training. (3) As discussed in Q1, our model also achieves much faster convergence speed compared to the vanilla NeRF and Mip-NeRF 360.\n\n**Q3: Object-centric NeRFs can also be composed, and Edit-NeRF can manipulate a single NeRF model.**\n\n**A3**: Thanks for the reference! We discussed the limitations of current composition and manipulation methods in Section 2.3. (1) Both object-centric NeRF and Neural Scene Graphs require a shared MLP renderer for the objects to be composed, which means the objects trained in one scene cannot be directly composed with objects trained with another scene, due to the different MLP renderers. Differently, our method supports natural composition without such constraints in training. (2) We focus on rigid transformation and composition of different NeRF models, while Edit-NeRF mainly focus on editing a single NeRF model. Our composition requires no extra optimization, while Edit-NeRF still requires an optimization process to apply the manipulations. We think these two types of NeRF manipulation are complementary, and both contribute to making NeRF manipulatable as triangular meshes.\n\n**Q4: Ablation study and better analysis on the explicit control of model capacity by increasing rank components.**\n\n**A4**: Thanks for the advice! We analyzed the influence of different numbers of rank components to the model capacity in Figure 3. We found that the proposed rank-residual learning successfully controls the growth of model capacity by increasing the number of rank components, achieving near-optimal performance. We further perform an ablation study to analyze the best training strategy to exploit the increasing model capacity. Assume we have a set of rank groups for different model capacities, we experimented with three different settings: (1) sequentially training each rank group until its convergence, freezing the previous rank groups before training a new rank group, so each loss only applies on its corresponding rank group, (2) parallel training all rank groups, but for each group we detach the output from the previous groups, so each loss still applies independently, (3) parallel training all rank groups without detaching, so each loss applies to all the previous rank groups (the setting in our paper).\n\n| Settings | PSNR | Training Time (min) |\n| ------------------------------ | :---: | :-----------------: |\n| (1) sequential | 34.16 | 83 |\n| (2) parallel w/ detach | 33.69 | 28 |\n| (3) parallel w/o detach (ours) | 34.37 | 26 |\n\nThe first setting requires longer training time to assure convergence of each rank group. 
We find the third setting achieves better performance and faster convergence compared to the first two settings. We think the parallel training without detaching encourages the later rank groups to learn complex details and the earlier rank groups to focus on the fundamentals.\n\n\n\n[1] TensoRF: Tensorial Radiance Fields. Chen et. al.",
" We thank the reviewer for the constructive suggestions. Below are our responses to the questions.\n\n**Q1: Categorization of NeRF methods.**\n\n**A1**: Thanks for the advice! We have renamed the categories in the revised version.\n\n**Q2: More annotations for Figure 2.**\n\n**A2**: Yes, the black lines mean we first project the 3D point to the decomposed line or plane, and then perform interpolations to calculate the features for the 3D point. The dotted lines are auxiliary lines to make the projection more clear. We have added more annotations in the revised version.\n\n**Q3: Training and Inference time of the proposed method.**\n\n**A3**: We compare the average training time of our method with recent methods on the NeRF-synthetic dataset (measured with a V100 GPU):\n\n| Methods | Ours-CP | Ours-HY-S | Ours-HY | NeRF | TensoRF-CP-384 | TensoRF-VM-192 |\n| ------------- | :-----: | :-------: | :-----: | ---: | :------------: | :------------: |\n| Training time | 29 min | 30 min | 41 min | 35 h | 25 min | 17 min |\n\nThe inference speed of our method is highly dependent on the complexity of the scene. We measure the inference time to render an 800x800 image for a scene with different composed objects (as in the teaser image):\n\n| Settings | Hotdog | Hotdog + Ficus | Hotdog + Ficus + 3 chairs |\n| -------------- | :----: | :------------: | :-----------------------: |\n| Inference time | 2.22 s | 3.27 s | 6.62 s |\n\nAlthough efficient training and inference is not the major topic of our method, we still achieve considerably faster training and inference speed compared to the vanilla NeRF, due to the no-neural-network model we use. Compared to TensoRF, our method trains slightly slower due to the extra computation of the rank-residual loss.",
" We thank the reviewer for the constructive suggestions. Below are our responses to the questions.\n\n**Q1: Why can only the proposed method compress the NeRF model?**\n\n**A1**: By 'compressible' we refer to the dynamic adjustment of model levels of detail without extra optimization, instead of the compactness of the model to represent a 3D scene. The main difference of our method from previous methods is that we can dynamically adjust the compression ratio on scenes by truncating the rank components, without retraining new models or additional fine-tuning. This property can be useful in many practical applications, such as the adaptive adjustment of model size to save memory without expensive retraining, and the progressive loading in network streaming.\n\n**Q2: Combining different NeRF models does not pose a particular technical challenge. Why can't PlenOctree achieve composition?**\n\n**A2**: Our motivation on composability is to achieve natural and easy composition of NeRF models like the widely used triangular meshes. Since implicit and hybrid NeRFs use MLPs to encode the scene, it is inconvenient and counterintuitive to record lots of MLPs to achieve composition. Instead, explicit NeRFs encode the scene in 3D volumes, which is natural for performing composition. Furthermore, our method proposes a new perspective on the composition of different models. The composition can be interpreted as the concatenation of different models' rank components. Besides, we can adjust the rank components for these models to achieve better rendering efficiency, especially for scenarios with multiple objects (eg. best illustrated in Figure 1 of the supplementary materials). PlenOctree contains two different stages, the NeRF-SH stage applies MLP but the octree stage only contains evaluated density and SH coefficients. The octree stage contains no neural networks and can be composed. We have corrected this in the revised version. Thanks for pointing this out!\n\n**Q3: What is the difference between the composition that this method can achieve compared to other methods such as NSVF and Plenoxels?**\n\n**A3**: (1) The main difference between the composition of our method and NSVF is that our method doesn't require a shared MLP renderer for different models. Methods like NSVF usually contain an MLP renderer to produce the density and color. As discussed in NSVF, different models (i.e., the sparse voxel volumes) must be trained with the same MLP renderer in order to be composed together. This limits the potential possibility to compose a wide range of models. (2) Although Plenoxels requires no MLP renderers, its model size is significantly larger (778MB on average). This large storage cost can hinder its practical usage when composing lots of models. Instead, our method can dynamically adjust the model size from 2.7MB to 88MB to control the total storage cost of a composed scene.\n\n**Q4: What’s the advantage of the proposed method over TensoRF?**\n\n**A4**: As clarified above, our motivation is to explore a naturally composable and compressible NeRF representation, instead of achieving SOTA performance in novel view synthesis. Compared to TensoRF and other NeRF models, our model can further (1) dynamically adjust the model levels of detail without extra optimization, (2) compose multiple single NeRF models into one scene without constraints in training. 
We believe these capabilities are important to make NeRF manipulatable as triangular meshes, and facilitate NeRF-based scene representation in practical applications.\n\n**Q5: How does the proposed method solve the problem of worse performance without MLP?**\n\n**A5**: (1) The major motivation for our paper is not to achieve SOTA performance in novel view synthesis. By removing MLP, we enjoy the natural composition of different models. (2) We are not the first to remove MLP and only use Spherical Harmonics (SH) to model the 3D scene. Plenoxels [1] also adopts a no-neural-network formulation. (3) We would like to highlight that the worse performance is only relative. As shown in Table 2, our HY model's performance is better than many works that use MLPs (e.g., the vanilla NeRF, NSVF, and DVGO). (4) In general, our model can use more rank components to improve the performance as a remedy.\n\n\n\n[1] Plenoxels: Radiance fields without neural networks. Sara Fridovich-Keil and Alex Yu et. al.",
" We thank the reviewer for the constructive suggestions. Below are our responses to the questions.\n\n**Q1: Parallel computation's influence on the learned coefficients.**\n\n**A1**: The parallel training cannot lead to exactly the same coefficients as the sequential training. In fact, we find that the parallel training has slightly better performance. We experimented with three settings on the chair dataset: (1) sequentially training per stage until its convergence, freezing the previous stages before training a new stage, (2) parallel training all stages, but for each stage we detach the output from the previous stages so each loss only applies on its corresponding rank group, which can be viewed as training each stage independently in parallel, (3) parallel training all stages without detaching, so each loss applies to all its previous rank groups (the setting in our paper):\n\n| Settings | PSNR | Training Time (min) |\n| ------------------------------ | :---: | :-----------------: |\n| (1) sequential | 34.16 | 83 |\n| (2) parallel w/ detach | 33.69 | 28 |\n| (3) parallel w/o detach (ours) | 34.37 | 26 |\n\nThe first setting is significantly slower to assure convergence of each stage. The second setting's final performance is worse due to optimizing later stages with not fully converged earlier stages. Compared to the first two settings, we think the third setting eases the training of the earlier stages, by letting the later stages model the complex details. Therefore, the earlier stages can focus on the fundamentals.\n\n**Q2: The compressed model struggles to produce photorealistic or smoothly varying specularities.**\n\n**A2**: We would like to highlight that our model with all rank components is capable of producing photorealistic renderings. We are not the first to adopt only Spherical Harmonics (SH) without MLP to model the specularities. Plenoxels [1] only use 9 terms SH without MLP to model complex scenes. As compared in Table 2, our HY model achieves better performance and takes much less storage compared to Plenoxels. For the compressed models, since we perform lossy low-rank approximation by truncating the full model, the specularities do get harmed. However, we still consider it is more favorable compared to harming the diffuse color, as shown in the baseline model.\n\n**Q3: Which of CP and TP decompositions does a better job of modeling specularities?**\n\n**A3**: With enough rank components, both CP and TP decomposition can model the specularities well. In general, the TP method contains more parameters per rank and the model capability is better. Therefore, it requires fewer rank components to model the same level of specularities.\n\n**Q4: Are there special decomposition constraints that can be applied to the decomposition such that view-dependent effects are prioritized and preserved?**\n\n**A4**: This is an interesting idea. We have experimented on some decomposition constraints such as weight normalization and orthogonal regularization, but it is unclear how these general decomposition constraints can be connected to prioritizing view-dependent effects, which is quite task-specific. We consider it as a future direction to explore.\n\n**Q5: Ablation study on the influence of SH degrees to specularities.**\n\n**A5**: Thanks for the advice! 
We performed an ablation study on the SH degrees with the materials dataset, which contains lots of view-dependent effects:\n\n| SH terms | 4 | 9 | 16 (ours) | 25 |\n| ------------------- | :---: | :---: | :-------: | :---: |\n| PSNR | 28.29 | 28.97 | 28.99 | 28.65 |\n| Training Time (min) | 54 | 44 | 49 | 73 |\n\nWe show that 16 terms SH achieves the best PSNR to model view-dependent effects. With too few or too many terms (e.g., 4 and 25), the model is hard to converge and takes more time to train. Plenoxels [1] only uses 9 terms SH. We found 16 terms SH could slightly improve the performance without making the convergence significantly slower.\n\n\n\n[1] Plenoxels: Radiance fields without neural networks. Sara Fridovich-Keil and Alex Yu et. al.",
" \nThis paper proposes an MLP-free NeRF representation that supports both compression and composability. The representation allows for efficient and convenient manipulation of the scene of interest and resembles the Level of Detail (LOD) concept in computer graphics. The ability to compose multiple scenes or objects comes naturally from this representation.\n\nWith no neural network involved, the model represents the radiance field of a scene using tensors that are the tensor rank decomposition of a full tensor explicitly expressing the radiance field. The authors study two types of tensor decomposition allowing the user to control model sizes. A rank-residual learning strategy is used to support an easy trade-off between model size and quality, without any re-training. \n\nThe authors use a hybrid feature volume decomposition that mixes the more expensive CP decomposition and the more compact TP decomposition. The mixing ratio controls model size vs. quality. \n\nCompared with spatial locations XYZs, viewing directions are less “native” to the voxel grid (or tensor) representation, so the authors represent them using the Spherical Harmonics (SH) functions.\n\nBecause the CP decomposition does not come with the property of “rank importance,” the authors propose a rank-residual learning strategy to learn the coefficient residuals either sequentially or in parallel. \n\n====== POST-REBUTTAL UPDATE ======\n\nI read the authors' rebuttal that addresses my concerns reasonably. I appreciate the extra experiments, too. Overall, I'm willing to raise my rating to Weak Accept, to the best of my non-expert knowledge. This of course is conditioned on that the authors will add these new experiments to their final version and add necessary clarifications. \nThe paper offers a fresh perspective of NeRF, similar to Plenoxels’: radiance fields are not necessarily represented by a neural network. A full voxel grid representation is either low-resolution or expensive, but using tensor decomposition ameliorates this problem. \n\nThe major weakness of this paper is that it has not studied view-dependent effects such as specular highlights. As the figure and video of the “drums” scene show, the compressed model struggles to produce photorealistic or smoothly varying specularities. Since the main use of NeRF is view synthesis where view-dependent effects are the first-class citizen, studying how the tensor decomposition methods affect modeling view-dependent effects. For instance, which of CP and TP decompositions does a better job of modeling specularities? Are there special decomposition constraints that can be applied to the decomposition such that view-dependent effects are prioritized and preserved? Given the authors consider just NeRF as the application, I don’t think this can be neglected by this paper.\n\n \n\nI understand how the rank residual learning works for the sequential case but want the authors to clarify how the parallel computation works for this case. In other words, would a parallel computation lead to exactly the same set of coefficients as a sequential computation does? \n\nWhich of CP and TP decompositions does a better job of modeling specularities?\n\nAre there special decomposition constraints that can be applied to the decomposition such that view-dependent effects are prioritized and preserved? 
\n\nAlso related to view-dependent effects, I think the higher degree you use for SH, the more accurate view-dependent effects you get since the view directions are more “concentrated” and hence more accurate. But with an SH degree of 4, I don’t think you will be able to get smoothly moving specularities. Have the authors tried higher degrees (I’m aware of the squared growth of the number of SH coefficients)? This sounds like a meaningful ablation study to do.\n\n\n Yes, the authors mention baked-in lighting as a limitation. ",
" This paper presents a compressible NeRF model which also supports the composition of different NeRFs to form a new scene. The proposed method uses a mixture of CANDECOMP/PARAFAC (CP) decomposition and Triple Plane (TP) decomposition, which are vector-based and matrix-based respectively, contributing to a hybrid decomposition method. At the same time, in order to ensure that the selected rank components can achieve near-optimal compression results, the authors propose a rank-residual learning strategy. The entire model does not use the MLP network, and different LODs can be achieved by selecting different numbers of rank components. The rank components of different scenes are concatenated together to achieve the composition of different NeRFs. Strengths\n--The decomposition method is novel, although in some aspects, such as CP decomposition, are the same as the existing work TensoRF. But this work is based on low-rank approximation, the proposed hybrid decomposition strategy can adjust the ratio of vector- and matrix-based rank components, which is something that TensoRF can't do.\n--The method can adjust the model size, and level of detail.\n--The paper is clear and easy to understand.\n\nWeaknesses\n-- Although in terms of method, the authors discuss the differences with TensoRF in the supplementary material, it is undeniable that the motivation of the two works is very similar (both introduce vector-based or matrix-based decomposition into NeRF). And in the comparison of TensoRF, the proposed method does not show superiority. In Table 2, the model size of method ‘TensoRF-CP-384’ is 3.9MB, which is smaller than ‘Ours-CP’ (4.4MB), while the quality is also better than ‘Ours-CP’. What’s more, the quality of ‘TensoRF-CP-384’ (3.9MB) is even slightly better than ‘Ours-HY-S’ whose model size is 68.9MB. Therefore, it is hard to illustrate the superiority of the proposed method in terms of both compression and rendering quality.\n -- As for composition, combining different NeRF models does not pose a particular technical challenge. One can obtain a rough geometric bounding box after the training is completed, and then combine the bounding boxes to render combined images. So in Table 2, why can’t some methods, such as PlenOctree, achieve composition?\n--What is the difference between the composition that this method can achieve compared to other methods such as NSVF and Plenoxels?\n--In Table 2, why only the proposed method can compress the NeRF model? Due to it can change the compression ratio? Similar method, TensoRF, can also achieve a small model size. What’s the advantage of the proposed method?\n-- TensoRF uses the MLP network to solve the problem of worse performance caused by Sphere Harmonics (SH). The proposed method also uses the SH functions, but does not use MLP. How does the proposed method solve the problem of worse performance?\n The authors have discussed the limitations.",
" This paper proposes a rank-residual learning strategy to obtain a radiance field representation that supports compressibility and compositionality with an acceptable diminishment of rendering quality.\n ### Strengths\n\n*Originality*: \n- I think this work is a nice combination of decomposition and Neural Radiance Fields (NeRFs). \n- The capability column of Table 2 clearly illustrates the difference between this work and existing works, demonstrating the new capabilities enabled by the proposed method.\n*Clarity*\n- The submission is well-written and easy to follow.\n- Related works are adequately cited and compared. For example, Line 103 discusses the similarities and differences between this work and TensoRF clearly.\n*Quality*: \n- The proposed approach is technically sound.\n- The claims are well supported by the experimental results. \n- The authors are careful and honest with the limitations (e.g., the model size and rendering time grow linearly with the total number of ranks, baked lighting, and bounded scene).\n\n### Weaknesses\n \n*Clarity*\n- Line 70-90, I think categorizing methods into a) neural network-based, b) hybrid, and c) no neural network is more accurate than a) implicit, b) hybrid, and c) implicit. Some 3D representations are called **implicit** for years even if they are stored in voxels, e.g., TSDF, where the signed distance function is actually an implicit shape function.\n- I think Figure 2 needs more annotations. For example, what’s the meaning of dotted lines in the middle sub-figure? What’s the meaning of black lines in the middle and right sub-figures? I assume the meanings of those lines are weighted interpolations. It would be nice to confirm that.\n\n - What’s the training and inference time of the proposed method?\n No.",
" This paper aims to compress neural field representations. To this end, the authors first propose a hybrid tensor decomposition and learn the decomposition via differentiable rendering. To encourage the primary information be learned in lowewr ranks, they introduce a novel training strategy that gradually increase the rank components to approximate the scene content. Strengths\n\n- Overall, I feel the proposed approach is novel and sound. Compared with NeRF, this approach can explicitly add rank components to increase the model capacity, enabling them to dynamically adjust the model size during training. This is a nice property.\n- The paper is well-written and clearly presented.\n- The comparison experiments and ablation studies are sufficient.\n\nWeaknesses\n\n1. Experiments\n\n- Current experiments do not show that the proposed model has obvious advantages than NeRF or Mip-NeRF 360. What is the training time of the proposed approach.\n- How to better analyze the property that you can gradually increase the rank components to improve the model capacity. An ablation study on this property will significanly bring readers more insights.\n\n2. Writting\n\n- Table 2 says that NeRF is not composable. I do not think so. As shown in [1, 2], object-centric NeRFs can be composed. A single NeRF model can even be explicitly manipulated, as shown in [3].\n\n[1] Object-Centric Neural Scene Rendering \n[2] Neural Scene Graphs for Dynamic Scenes \n[3] NeRF-Editing: Geometry Editing of Neural Radiance Fields Please fix the problems in the weaknesses, which can improve the paper quality.\n\nThe property of this approach that impresses me is enabling the explicit controll of model capacity. Analysis on this property will make me feel better to this method. Yes."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
4,
4
] | [
"pJ-TqZ6mTJH",
"nips_2022_aPXMGv7aeOn",
"Ihwt1rGrYqd",
"T19cyBMnQNc",
"BkGV6zM11d",
"-VABUxny1dt",
"Xzn4-PdvaEV",
"aXbDFvjTpW",
"9HEzTid0moy",
"OC89pgR7GFs",
"nips_2022_aPXMGv7aeOn",
"DLds5Sz5WMH",
"xf4TatTslIS",
"7640BB9fdA6",
"Ljk2nMqWNM5",
"nips_2022_aPXMGv7aeOn",
"nips_2022_aPXMGv7aeOn",
"nips_2022_aPXMGv7aeOn",
"nips_2022_aPXMGv7aeOn"
] |
nips_2022_uxc8hDSs_xh | Can Hybrid Geometric Scattering Networks Help Solve the Maximum Clique Problem? | We propose a geometric scattering-based graph neural network (GNN) for approximating solutions of the NP-hard maximum clique (MC) problem. We construct a loss function with two terms, one which encourages the network to find highly connected nodes and the other which acts as a surrogate for the constraint that the nodes form a clique. We then use this loss to train an efficient GNN architecture that outputs a vector representing the probability for each node to be part of the MC and apply a rule-based decoder to make our final prediction. The incorporation of the scattering transform alleviates the so-called oversmoothing problem that is often encountered in GNNs and would degrade the performance of our proposed setup. Our empirical results demonstrate that our method outperforms representative GNN baselines in terms of solution accuracy and inference speed as well as conventional solvers like Gurobi with limited time budgets. Furthermore, our scattering model is very parameter efficient with only $\sim$ 0.1\% of the number of parameters compared to previous GNN baseline models. | Accept | All reviewers agree that the proposed approach to use the geometric scattering transform is simple and effective both computationally and in terms of the ability of the method to identify larger cliques for the max-clique problem (except perhaps for one reviewer on the last point).
The work would have more impact if it could be demonstrated that using the geometric scattering transform yields improvement for other combinatorial optimization problems on graphs, or if it could outperform classical heuristics even if they are run for longer time. Currently the experiments presented in the appendix are more compelling than the experiments presented in the main paper.
Given elements they provided in the discussion with the reviewers, the authors should also emphasize more clearly in the paper how their proposed architecture differs from other scattering GCNs that have been proposed, and I would suggest to do an ablation study to show that the enhancements that they introduced in the architecture are actually useful.
A consensus between all reviewers could unfortunately not be found:
- Two reviewers were satisfied with the way the authors had addressed their concerns and with the additional experiments proposed.
- One reviewer considers that the idea of using the scattering transform in this application is not a sufficient contribution to grant publication.
Given that
- two reviewers find the contribution compelling and their concerns are well addressed
- the use the geometric scattering transform is simple and yet effective both computationally and in terms of the ability of the method to identify larger cliques
- the sole motivation of the reviewer who votes for rejection is a claim that the scientific contribution is not sufficient against the opinion of the two reviewers and that of the AC,
the AC is in favor of acceptance.
### Acknowledging that the proposed loss function is the same as in Karalias and Loukas (2021) !
One element which is very important is that the discussion with one of the reviewers has clearly established that **the loss function introduced in this paper is exactly the same** (up to a constant and a multiplicative factor) **as the loss function $\ell_{\text{clique}}$ obtained in** Corollary 1 of **Karalias and Loukas (2021)**.
In the discussion with the reviewer, the authors wrote
"We are happy to add discussion and clarification of the loss terms to our manuscript. This discussion and clarification can also help readers to understand the model better." (which I entirely agree with) but they did not act upon that, yet...
It would now be more than **absolutely necessary to add that discussion** ! This will add value to the paper as it will show that the proposed loss is less ad hoc than it might seem, given that it can be obtained via at least two routes. Moreover establishing connections between approaches in the literature is clearly a valuable contribution.
Currently, the conclusion says: "We further construct a two-term loss function which [...]" which still strongly suggests that the loss function is novel, and it therefore very problematic ethically. The sentence added in blue on line 186 is not sufficient to address the issue.
**The authors should** at the very least **add a sentence** at the beginning of section 3.4 **saying** something like: "We propose a simple derivation of a multi-objective loss function, and retrieve **a loss function which was also obtained by Karalias and Loukas (2021)** as a natural upper bound to the probabilistic penalty loss that they propose".
And at the end of section 3.4, the authors should add a sentence saying: **"The proposed loss matches the loss $\ell_{\text{clique}}$ obtained in Corollary 1, Section 4.1 of Karalias and Loukas (2021)."** | train | [
"2TcnilNRcL8",
"9C6teTzY1z",
"gR4cjDRb8GB",
"dEdJ7-3fqsi",
"utzLmsWXyCp",
"ht2Y2JgYrM1",
"VkDiWor7Fru",
"7JBZDph2Cyp",
"M2pVf4F5PKs",
"e6s2l5qH6tU",
"rnQbSpk5M-6",
"8Uv5539t2GC",
"K8B_5_k28ves"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We are glad that some of your concerns have been addressed. \nAnd we are happy that the reviewer agrees with us on the following points:\n1. Our model is lightweight (~0.1 % parameters count) and performs well compared to previous work.\n2. We get a more noticeable benefit on the hardness dataset.\n3. Our structure addresses the oversmoothing.\n4. The loss is similar\n\n**Regarding the loss:** \n\nthis paper discusses how oversmoothing will affect the performance and how different structures (number of parameters) will affect the inference speed. We do not discuss different types of loss in this paper. __Furthermore, we are glad that the reviewer describes our loss as 'extremely similar to related work.' This further strengthens the claim that our expressive network structure brings substantial improvement to the MC task. (instead of the loss function)__\n\n**Regarding the performance:**\n\n the reviewer says that our model ```'achieves mostly marginal improvements... but also a noticeable benefit on Xu-type instances.'``` As shown in our paper, the approximation ratio of baseline model on the first three datasets is already very high (>90%), which leaves little room for improvement. __To show that our model has a substantial improvement, we introduce Xu-type instances, including experiments with large graphs, see the supplementary material.__ We further explain why our method runs faster than the traditional heuristic with the restart strategy. \n\n**Regarding scattering:**\n\nFirst, our scattering structure is different from scattering GCN. We are introducing new modules, such as read-out and a new type of attention mechanism, etc. \nSecond, one of the primary goals of designing a scattering model is to __'show the expressive power of GNNs can be critical for solving graph CO problems'__, as stated in the introduction and the first line of our conclusion. And we think the reviewer also agrees with that. \n\n**Regarding 'shown to work on one combinatorial problem:'**\n\nDue to the page limit, we are discussing the maximum clique in this paper. Our works show that an efficient structure with expressive power (no oversmoothing) is important. We think this claim holds for any bi-variable graph CO problems because we need to consider that time cost and separate 1 and 0 (True and False). We leave the extension for future work. \n",
" - My point regarding the loss being extremely similar to related work stands. The authors have confirmed that they will update the relevant parts of the submission accordingly.\n- The authors have addressed some of my concerns regarding the model. Indeed it appears that the model is a lightweight solution that performs well compared to previous work and there lies the main contribution of the paper. On the other hand, there exist published versions of scattering models (like the scattering gcn published at neurips a few years ago that also aims to address oversmoothing), and in terms of approximation ratios, the model achieves mostly marginal improvements, with the exception of the Xu-type instances where there is a more noticeable benefit.\n\nGiven the above and that the model is only shown to work on one combinatorial problem, I do not think there's enough of a contribution to the field of Combinatorial Optimization + ML to warrant acceptance so I maintain my score.\n",
" We acknowledge the reviewer to clarify the source of confusion, yes we now agree that we are using a scaled up version. \nIn the updated manuscript, as said in the 1st rebuttal, we already removed the word 'novel'. \n\nSince this paper's main idea is to introduce the scattering model and focus on oversmoothing, parameters-efficiency and approximation performance. Exploring how \"scale up loss V.S. non-scale up loss\" will affect the performance seems a little bit off the track. We suspect that using the KL loss, which is $\\frac{1}{2}$ of our loss, will make the approximation ratio change very slightly but not affect the running speed and the number of parameters. We are happy to add discussion and clarification of the loss terms to our manuscript. This discussion and clarification can also help readers to understand the model better.",
" 1) I used Markov's as well. When I say $P(S \\notin \\Omega) \\leq \\mathbb{E}[\\bar{w}(S)]$, that's a consequence of Markov's inequality....I'm just going a few lines up from the line you quoted in equation 12. The points I wrote to you are not my derivation. I'm copying them from the paper but I stop before the rewriting they do to explain to you that what you have is almost the same. \nNot *exactly* the same because you don't use explicitly the probabilistic method so you have no need to invoke Markov's inequality etc. and you are carrying an extra factor of 2 compared to their derivation. \n\n2) Let me provide the precise derivation. \nYou say that your term is $P(S \\notin \\Omega) = \\mathbf{p}^\\top \\bar{\\mathbf{Wp}}$. We know that $\\bar{\\mathbf{W}} = \\mathbf{1} - (\\mathbf{I} + \\mathbf{W})$, where $\\mathbf{1}$ is an $n \\times n$ all ones matrix.\nSo we have\n\\begin{align*}\n\\mathbf{p}^\\top \\bar{\\mathbf{Wp}} &= \\mathbf{p}^\\top (\\mathbf{1} - \\mathbf{I})\\mathbf{p} - \\mathbf{p}^\\top \\mathbf{Wp} \\newline\n&= \\sum_{v_i \\neq v_j} p_ip_j - 2\\sum_{(v_i,v_j) \\in E} p_i p_j w_{ij} \\newline\n&= \\sum_{v_i \\neq v_j} p_ip_j - 2\\mathbb{E}[w(S)] \\tag{$\\mathbb{E}[w(S)] = \\sum_{(v_i,v_j) \\in E} p_i p_j w_{ij}$, equation 11 in KL} \\newline\n&= 2( \\frac{1}{2}\\sum_{v_i \\neq v_j} p_ip_j - \\mathbb{E}[w(S)]). \\tag{this is the KL expression scaled by 2}\n\\end{align*}\n\nIt is clear that this is the expression in equation 12 of the KL paper scaled by a factor of 2. The one that you wrote in your comment.\nI hope we can agree that one is essentially a scaled up version of the other. \n\n\nP.S.: In my previous comment, to make the similarity more apparent, I suppressed the factor of 2, i.e., I took $\\mathbb{E}[w(S)] = \\mathbf{p}^\\top \\mathbf{Wp}$, when it's technically $\\mathbb{E}[w(S)] = \\mathbf{p}^\\top \\mathbf{Wp}/2$ in the KL paper. Maybe that was the source of confusion? ",
" Dear reviewer, in KL paper, the authors are using additional Markov’s inequality to bound the loss.\n\nsee equation 12 in KL's supplementary material, where they write:\n\n$P(S \\not\\in \\Omega) \\leq \\frac{1}{2} \\sum_{v_i \\neq v_j} p_i p_j$ - $\\mathbb{E} [w(S)]$.\n\nand here we are using:\n\n $P(S \\not\\in \\Omega) = p^T \\overline{W} p$.\n\nThey are not the same term(s).",
" Regarding point 1:\n\nYou claim that the loss is not a simplified rewriting, so I need to understand which of the following you disagree with:\n\n1) Your loss is given by equation 11 and it's $L(\\mathbf{p}) = -\\mathbf{p}^\\top \\mathbf{W}\\mathbf{p} + \\beta \\mathbf{p}^\\top \\bar{\\mathbf{W}}\\mathbf{p} $.\n2) The KL probabilistic penalty loss is described by equation 3 in that paper: \n$ \\mathbb{E} [f(S)] + \\beta P(S \\notin \\Omega) $. \nLet's follow their derivation on appendix D.3.1.\n3) $ \\mathbb{E} [f(S)] = \\gamma - \\mathbb{E} [w(S)] = \\gamma - \\sum_{i,j} w_{ij}p_ip_j =\\gamma - \\mathbf{p}^\\top \\mathbf{W}\\mathbf{p} $\n4) From that same section we have for the constraint $P(S \\notin \\Omega) \\leq \\mathbb{E} [\\bar{w}(S)] $. This is the expected weight on the *complement graph*, i.e., on the complement of the edge matrix as you write.\n5) $ \\mathbb{E} [\\bar{w}(S)] = \\mathbf{p}^\\top \\bar{\\mathbf{W}}\\mathbf{p} $ because this is the expected weight of $S$ on the complement graph. So this quantity can be used to bound $P(S \\notin \\Omega)$. The authors continue a few steps and do some additional manipulations on this expression to arrive at their final loss.\n6) Putting it all together: $ \\mathbb{E} [f(S)] + \\beta P(S \\notin \\Omega) \\leq \\gamma - \\mathbf{p}^\\top \\mathbf{W}\\mathbf{p} + \\beta \\mathbf{p}^\\top \\bar{\\mathbf{W}}\\mathbf{p}. $ \n\nPoint 6 has the Erdos loss while point 1 has the loss proposed in your paper. I believe this illustrates how your loss is essentially a simplification that gets rid of the displacement term $\\gamma$ (it doesn't matter for optimization anyway) and does not do the extra steps the authors of that paper do in the appendix to arrive at the particular final version of the loss in their Corollary 1. Those steps are just rewriting the expression so qualitatively your loss is a simplified rewriting. Maybe you disagree with my characterization but I hope it's clear from points 1 and 6 that the losses are almost the same which is why I challenged the claim of \"novelty\" for the loss function. \n\nDoes this make sense?",
" Please check \"Response to all reviewers\" first:\n\n**Weaknesses and criticism**\n1. […the two loss functions. Using both the original graph and its complement has the inevitable drawback of not exploiting the sparsity of the graph…]: First, this loss term is calculated in the training time, so it will not affect the model’s inference speed. Second, the loss term $L_2$ can be written as $ p^T \\overline{W} p = (\\sum_{v=1}^n p_v)^2 - p^T W p - \\sum_{v=1}^n p_v^2$, so that no dense graph computation is required. \n[…slightly simplified rewriting of the KL loss…]: The initial KL loss contains two parts: the first term is conditioned on the clique, while our loss is not. The second term in KL paper does not depend on the edge matrix, while the second term in our loss term is the complement of the edge matrix, which is based on the edge matrix. So the loss term is NOT just a simplified rewriting. \nOur new loss term and the scattering structure enable our model to successfully outperform KL paper in both time and accuracy, using around 1/1000 of the parameters that KL takes. Considering the number of parameters our model takes, we now change the term ‘novel’ to ‘efficient’.\n2. […I am not entirely convinced that the proposed model offers a substantial benefit…]: Our scattering model is very parameter-efficient and uses around 0.1% parameters of the previous baseline (KL paper). Note that though there are various architectures, like adding skip connections may also works, but introducing skip-connections has no evidence to reduce the model’s complexity, in fact, it may hurt the model’s inference time. (because usually, skip-connection structures are very deep and wide, which increases the time complexity.) Our new draft explains why the scattering model offers a substantial benefit. Because the running time of a GCN-type model is quadratic to the width of the model’s hidden space, that is to say, increasing the width of GNN or using a multi-head mechanism will result in larger time complexity. Previous KL (erdos GNN) paper uses 8 head GIN (Graph Isomorphism Network) with 64 hidden units and the model consists of over 1.8 million parameters. In contrast, our scattering does not use a multi-head mechanism and uses only 8 hidden units and takes around 1,000 parameters. \n3. […I assume that maximum is the term that applies…] We change maximal to maximum. \n4. [...other techniques … greedy heuristic would …]: We add new results and discussion on the difference between our method and the heuristic.\n5. [...significantly differ from works like the KL paper or even RUN-CSP…]: Our model takes only 1/1000 parameters of the previous baseline model and achieves better results and runs faster, we remark our results as highly non-trivial, and our structure is significantly different from the previous one. In this paper, we propose this highly efficient method and run experiments on both natural datasets and tasks with different hardness. In practice, any graph combinatorial problem that can be written as in True versus False fashion, such as mac cut, vertex cover, etc, can also use our baseline. We leave these tasks for future work. \n6. [... on larger graphs….]: we add experiments on large graphs. Note on large graphs, we are not able to get the ground truth solutions, we use Gurobi (0.1s) as baseline.\n\n**Questions:**\n\n7. [...More details about the exact setup …]: we add the code in the github repo.\n8. 
[...particular benefit …]: our scattering overcomes the over-smoothness problem, also our model is very parameter efficient, which takes only ~0.1 % of previous baseline, since we (or most heuristics) aim to quickly obtain a good approximation of the MC. An efficient structure with expressive power is very important. In conclusion, there are following three benefits: 1.Overcome the oversmoothing 2.Faster running time with higher accuracy 3.Reduce the parameter counts of the previous model by over 99.9% \n\n**Limitations:**\n\n9. [... limitations or the societal impact…] The limitation of our approaches: our GNN model focuses on maximum clique. We don’t see any direct negative societal impact here. When generalizing this method to other graph structures such as social networks or biological data, we should protect personal information/privacy in that dataset. ",
" Please check \"Response to all reviewers\" first:\n\n**Weaknesses:**\n1. [...disentangle the different methods…]: We add more experiments as well as experiments on large graphs. Compared with other models, our scattering GNN can achieve a good solution quickly. The gap between running time becomes more pronounced when it comes to large graphs and graphs with medium and hard hardness. As shown in the data statistics in the supplementary material, the IMDB dataset may be too small and too easy (since every model achieves high accuracy). Except for IMDB, the scattering model always provides a good approximation within a shorter time. We also add discussion between our scattering model and traditional heuristic.\n2. [...I find it difficult to not see it as a way to hide the fact that your method is not the best on the hardest dataset (Twitter)....]: We remove all bolding. This paper aims to obtain a fast approximation of MC, which requires us to consider running time. Note that on the Twitter dataset, RUN-CSP gets high accuracy, but the model’s running time (0.39) is larger than GUROBI (0.34) and GUROBI achieves better performance. Since the goal is to get a good solution quickly, RUN-CSP is less competitive in this case. RUN-CSP gives us the highest accuracy among all neural baselines. However, due to the RUN-CSP model’s complexity, it takes longer time than the existing solver GUROBI. This INSPIRED that we HAVE TO consider the complexity when building these neural-based heuristics. In the updated manuscript, we discuss the model’s complexity. Note that the scattering GNN model takes only ~0.1% of parameter counts of the previous baseline. Our efficient structure is the key to guaranteeing fast approximation time.\n3. [...\"normal\" maximal clique heuristics are included - for example see Grosso et al. (2008)...]: We add a ‘normal’ heuristics baseline in [Grosso 2008]. We further discuss the difference between traditional local search method and our model. See the updated manuscript.\n\n**Questions:**\n\n4. [...Tau parameter. Why not set Tau=infinity? …]: First, the clique number of such graphs is usually very close to 2 log2(n), where n is the number of nodes [Karp 1976]. So there is no need to set \\tau = infinity. Second, \\tau controls the running steps of the decoder, setting \\tau to a threshold saves running time. Note \\tau is also used in local search heuristics, for example [Grosso 2008], where they use the parameter ‘max selection’ to control the length of iterations, which can be regarded as the parameter that controls the time and accuracy trade-off. Reference:\nKarp, Richard M. (1976), \"Probabilistic analysis of some combinatorial search problems\", in Traub, J. F. (ed.), Algorithms and Complexity: New Directions and Recent Results, New York: Academic Press, pp. 1–19.\n5. [... Gurobi are not respected…]: Note that optimization may not stop immediately upon hitting the time limit. It will stop after performing the required additional computations of the attributes associated with the terminated optimization. see \nhttps://www.gurobi.com/documentation/9.5/refman/timelimit.html\n6. [ …competing neural network approaches…]: We add these results. \n7. [A more theoretical question regards non-uniqueness ...], \nIt is possible that the GNN may converge towards an average of the solutions, that is actually why we highlight the oversmoothing problem in our paper. 
In practice, we notice that even in the hard cases in our paper, our GNN can give you ~ 85% of the MC size in a short time, however, pushing this accuracy higher is very difficult, even for traditional heuristics (because it’s NP-hard). One possible strategy is to design a new model that has more expressive power with non-smooth output and does not have an ‘average solution’. Another strategy is to design a term that discourages ‘average solution’ and add it to the loss function, however, for the first strategy, we need to consider the model’s complexity and for the second one, designing effective loss requires good intuition. \n8. [...a some cherry-picking here …]: We are not choosing different criterions, the \\bold notation may be confusing so we remove it. The purpose of all tables are the same: that is to show that the scattering model gets a good approximation at a very fast speed. We add other baseline models as well.\n\n**Limitations:**\n\n9. [...often not competitive with Gurobi with…]: The key of our method is to approximate the MC size quickly, and the gap between the approximation ratio becomes more significant on larger graphs. We add comparison with human-designed heuristics. We also discuss the difference in our new manuscript. We also upload the heuristics code in the github repo. Note that all the implantation in this paper is based on python.In this paper, we propose an efficient GNN structure to fast approximate the max clique problem, we agree that the claim may be too strong, we remove this assert in the updated manuscript.\n\n\n",
" Please check \"Response to all reviewers\" first:\n\n**Weaknesses:**\n1. [... general ML audience care about MC? Why is it interesting? …]: we add discussion about the importance of MC problems.\n2. [Notation is a bit sloppy sometimes: ] we revise the manuscript according to your suggestions.\n3. [where do u_0, v_0 come from?]: u_0, v_0 come from the support of p.\n4. [.... structured more: Baselines, Evaluation metric, Results …] We update the manuscript\n5. [... only measure the \"size\" of the MC…]: We use the ‘size’ of MC for two reasons. First, we want to be consistent with the previous papers’ evaluation metrics, (erdos gnn, RUN-CSP). Second, when the size grows larger, getting the ground truth is unrealistic because the problem is NP-hard. Actually, in our updated manuscript, we discuss the large graph cases (see the supplement material), where the ground truth is very hard to obtain. Especially when the graph size is larger than 1000, we notice that some cases do not finish in 24 hours. In this case, we can’t use ‘overlap’ as evaluation as we do not have the ground truth (we are comparing two sub-optimal results). Also, the difficulty of getting the right solutions is one of the reasons why we use unsupervised learning instead of supervised learning, as discussed in the introduction section.\n\n**Further details:**\n6. [l.124: we also want..] fixed\n\n**Questions:**\n7. [...only measure the \"size\" of the MC, but not if it's correct or not? …]: 1. We follow previous papers’ evaluation metric 2. It is very expensive to get the ground truth, (note that to make sure we get the ‘maximum’ clique, we need to verify all possible cliques), refer to the discussion before. ",
" We thank the reviewers for their constructive comments. We notice that reviewers are concerned about the following two questions:\n1. What does our scattering model differ from previous baseline and what can we benefit from the scattering model:\nAnswer: Our scattering model helps overcome the over-smoothing problem. It also outperforms the previous baseline model (erdos gnn) in both time and accuracy. Furthermore, the scattering model only takes ~ 0.07% parameters of the previous baseline. We remark such a reduction of parameters is a significant breakthrough. In other words, we can reduce the parameter counts of the previous model by over 99.93% and get better performance. We think our result is highly non-trivial. Efficient GNN structure is critical for solving graph combinatorial problems because we must consider the model’s complexity. In the updated manuscript, we add a comparison between our model's complexity and the previous baseline (Erdos gnn) and explain why our model is faster.\n2. Lack of enough evidence, including does not contain a heuristic:\nAnswer: We add more experimental results, including heuristic and results on large graphs. Our evidence indicates that scattering GNN can find a good approximation of MC at a very fast speed. Note that when the graph size grows larger, getting the ground truth is unrealistic. ( because time complexity grows exponentially) For large graphs, we use the GUROBI solution as the baseline. We also discuss the difference between our methods and the traditional heuristic in the updated manuscript.",
" The paper proposes to tackle the maximal clique problem in graphs with a hybrid method that relies on a scattering step as well as a rule-based decoder step to extract the predicted maximal clique. Overall, the paper is novel, clearly written and self-contained. Strengths:\n\n- Relevant contribution to address oversmoothing of GNNs via scattering approach for MC retrieval\n- Clear paper structure, the reader is taken by the hand\n- The method outperforms the baselines in several datasets\n\nWeaknesses:\n\n- Section 1 misses some motivation: why should the general ML audience care about MC? Why is it interesting?\n- Notation is a bit sloppy sometimes: Concat operation in Eq 7 not introduced, attention scores a become alpha l.109-112. element-wise operation in Eq not defined, shape dims of H_cat not defined, Objective L*(C) is a bit poorly formalized (make explicit: max_{terms to maximize over}, p \\geq 0 in l.136 seems hand wavy, \n- Proof of Lemma 1 is a bit quick/short: where do u_0, v_0 come from? Maybe I just missed something, but this point needs more clarification as it's not obvious to me.\n\n- Result section could be structured more: Baselines, Evaluation metric, Results \n- The approximation score seems to only measure the \"size\" of the MC, but not if it's correct or not? (like an overlap to the ground truth?)\n\nFurther details:\n- l.124: we also want.. \n - The approximation score seems to only measure the \"size\" of the MC, but not if it's correct or not? (like an overlap to the ground truth?)\nplease clarify why you don't measure overlap (like Jaccard) to the ground truth. -",
" The maximum clique problem is a classic NP-hard combinatorial optimization problem with numerous applications. Because of its difficult, heuristics that find good solutions fast are desirable, and there has been a few works that have explored machine learning for designing such algorithms. In this paper, the authors propose a novel approach based on a GNN that is trained by supervised learning to predict the probability of a node to belong to a maximal clique, followed by a greedy algorithm (a decoder) which constructs as large a clique as possible from the probabilities. In departure from previous work, they propose that the model use a geometric scattering transform, which reduces neighbor smoothing. They show improvements on empirical datasets against alternative neural network approaches, and Gurobi with a time limit. Strengths\n\n- The method looks faster than competing machine learning methods, at comparable performance (although this needs to be nuanced by some issues I have about the experimental results, which I detail in the weaknesses section.)\n- The geometric scattering transform idea makes sense, although I am not sure whether there is something specific here about the MC problem: I would feel like the explanation given in this work would apply to many other GNN-based heuristics for graph-based combinatorial problems (e.g. vertex cover, independent set, etc.) But this is a good thing I suppose.\n- The fact that the proposed training loss is unsupervised, yet differentiable, is an advantage, although I wonder what would happen if the GNN was trained in a supervised fashion to approximate pre-computed maximal cliques.\n\nWeaknesses\n\n- I have several issues with the experimental results. First, I find the benchmarks a little too easy: since the approximation ratio is your criterion, it is difficult to disentangle the different methods if the ratios are so high. This is particularly problematic for the IMDB dataset.\n- Second, I don't really find the bolding very fair. Why highlight the two best methods in Table 1? I find it difficult to not see it as a way to hide the fact that your method is not the best on the hardest dataset (Twitter).\n- I don't understand why no \"normal\" maximal clique heuristics are included - for example see Grosso et al. (2008). Instead you only have results against Gurobi with a time-limit, which is not really designed for your objective at hand (finding as good solutions as possible, fast). Although I can understand that it might be difficult to improve over state-of-the-art human-designed heuristics, not including them makes it difficult to assess problem difficulty.\n\n\nGrosso, A., Locatelli, M. and Pullan, W., 2008. Simple ingredients leading to very efficient heuristics for the maximum clique problem. Journal of Heuristics, 14(6), pp.587-612. - I don't really understand why the decoder has a Tau parameter. Why not set Tau=infinity? Isn't the goal to have a large a clique as possible?\n- I don't understand why the time limits given to Gurobi are not respected. Ex. in table 1, how come Gurobi with a 0.1s time limit takes 0.21s?\n- Why are the competing neural network approaches are missing from Table 3?\n- A more theoretical question regards non-uniqueness of solutions: maximum clique problems, especially unweighted, can often have many optimal solutions. 
When this is the case, won't the unsupervised loss try to steer the GNN towards an average of the solutions, leading to something which is not a maximal clique?\n- I don't understand why approximation ratio is chosen as a ranking criterion in Table 1, but time is chosen as a ranking criterion in Table 3. I feel there is some cherry-picking here to make the method look best.\n - I see that the method is often not competitive with Gurobi with a time limit, which makes me believe that the results would be even less competitive against human-designed heuristics for the problem. Right now, I don't think the authors really address this, which I think they should. At minimum, I think the results are too mixed to assert that the method is \"competitive with commercial solvers in time and accuracy\". ",
" The paper proposes a novel model that is trained without supervision to solve the maximum clique problem. Furthermore, a greedy decoding scheme is proposed to enable the discretization of the continuous output from the neural network. The model architecture relies on the scattering transform which has been proposed to avoid oversmoothing in GNNs. The proposed model achieves competitive results on common experimental benchmarks and on synthetically generated instances of varying difficulty. ## Strengths\n1) The approach is simple and well explained in the paper.\n2) The authors evaluate on an experimental setup from the literature, making comparisons with related work easier.\n3) Results appear to be competitive across multiple datasets.\n\n## Weaknesses and criticism\n1) Parts of this paper's pipeline are not properly attributed to previous work. For example, the loss function includes two terms: \nthe expected weight of the edges in the selected set, and the expected weight of the edges on the complement graph. \nThis is essentially the loss function used in the paper by Karalias and Loukas (henceforth KL for brevity) that the authors cite throughout the paper, but not specifically for the loss. In fact, line 217 in the conclusion claims \"We further construct a novel two-term loss function...\". I think it is fair to say that this construction is not novel.\nThe loss function may appear to be different from the one in KL but is actually extremely similar, minus a few additional steps that are required in KL to secure the theoretical guarantee. To understand the similarity, it suffices to check the proof of corollary 1 in the appendix of that paper. The probability of constraint violation amounts to the expected weight on the complement graph (denoted by the expectation of $\\bar{w}(S) $). \nHowever, that paper rewrites the term as a function of the graph itself and avoids the computation in the complement. This is the main reason behind the apparent differences between the two loss functions. Using both the original graph and its complement has the inevitable drawback of not exploiting the sparsity of the graph since at least one of the two will inevitably be dense. Ultimately, the proposed loss is a slightly simplified rewriting of the KL loss.\n\n2) The authors emphasize the over-smoothing problem, but this can be overcome with various architectures and/or skip connections. Indeed, both the Erdos paper and RUN-CSP do not just use plain GCN architectures and manage to do fairly well. I am not entirely convinced that the proposed model offers a substantial benefit.\nPerhaps some ablations that demonstrate the over-smoothing problem on existing implementations of combinatorial ML papers from the literature would have helped.\n\n3) Minor issue: the authors mention the \"maximal clique\" problem although I think they mean to say \"maximum\". Given that the authors mention the hardness of the problem and approximation ratios, I assume that maximum is the term that applies (maximal cliques can be found easily).\n\n4) Experimentally, it would be good to see how the other techniques (e.g., Erdos or RUN-CSP) perform on Xu instances (table 3). A greedy heuristic would also be nice as well.\n\n5) Apart from the different model, this approach does not significantly differ from works like the KL paper or even RUN-CSP, with the added limitation that it only addresses the maximum clique problem.\n\n6) Scalability is not addressed in the paper. Does the proposed model scale well on larger graphs? 
The instances that this was tested on have at most a few hundred nodes. Given that the paper places heavy emphasis on the model, more thorough experimental demonstrations are required in that regard as well. 1) Did the authors use the RB model to generate the instances of table 3? Did the authors code it up from scratch? I could not find the implementation in the code provided. More details about the exact setup would be appreciated.\n2) Apart from over-smoothing, does the scattering architecture offer a particular benefit for the maximum clique problem? It is not clear to me whether the choice of this architecture relates to this particular problem or not.\n The authors have not discussed the limitations or the societal impact (arguably not applicable here) of their work."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"9C6teTzY1z",
"VkDiWor7Fru",
"dEdJ7-3fqsi",
"utzLmsWXyCp",
"ht2Y2JgYrM1",
"VkDiWor7Fru",
"K8B_5_k28ves",
"8Uv5539t2GC",
"rnQbSpk5M-6",
"nips_2022_uxc8hDSs_xh",
"nips_2022_uxc8hDSs_xh",
"nips_2022_uxc8hDSs_xh",
"nips_2022_uxc8hDSs_xh"
] |
nips_2022_R5KjUket6w | CEIP: Combining Explicit and Implicit Priors for Reinforcement Learning with Demonstrations | Although reinforcement learning has found widespread use in dense reward settings, training autonomous agents with sparse rewards remains challenging. To address this difficulty, prior work has shown promising results when using not only task-specific demonstrations but also task-agnostic albeit somewhat related demonstrations. In most cases, the available demonstrations are distilled into an implicit prior, commonly represented via a single deep net. Explicit priors in the form of a database that can be queried have also been shown to lead to encouraging results. To better benefit from available demonstrations, we develop a method to Combine Explicit and Implicit Priors (CEIP). CEIP exploits multiple implicit priors in the form of normalizing flows in parallel to form a single complex prior. Moreover, CEIP uses an effective explicit retrieval and push-forward mechanism to condition the implicit priors. In three challenging environments, we find the proposed CEIP method to improve upon sophisticated state-of-the-art techniques. | Accept | All three reviewers have elected to accept the paper, with accept ratings of 5,6,7.
The reviews were thorough and demonstrated an understanding of the paper, and the authors have addressed many of the suggested edits. I like that the paper tackles the combination of parametric vs. non-parametric learning. One weakness of the paper, from a reproducibility POV (and also mentioned by the authors in limitations), is that there are a lot of moving pieces in the system (RL, non-parametric dataset lookup, one flow per task + 1 additional one for distilling them). It would seem quite annoying to implement correctly if starting from scratch (but this is just aesthetic feedback).
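For concreteness, here is a minimal sketch of what the "one flow per task plus a learned mixture" structure could look like. This is purely illustrative, not the authors' implementation: all names are hypothetical, and the only details taken from the rebuttal discussion are that the flows are single-layer, conditioned on the current state and a retrieved next state, and combined with softplus-transformed weights plus a positive offset.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FlowMixture(nn.Module):
    # Hypothetical sketch: n task-agnostic flows + 1 task-specific flow,
    # combined with learned weights. The exact combination rule in the
    # paper may differ (the rebuttal also mentions per-flow offsets lambda,
    # omitted here for brevity).
    def __init__(self, flows):
        super().__init__()
        self.flows = nn.ModuleList(flows)
        self.mu = nn.Parameter(torch.zeros(len(flows)))  # pre-softplus mixture weights

    def forward(self, z, s, s_next):
        # Each single-layer flow maps a latent z to an action, conditioned
        # on the current state s and a retrieved next state s_next.
        outs = torch.stack([f(z, s, s_next) for f in self.flows])  # (n+1, B, A)
        w = F.softplus(self.mu) + 0.1  # positive offset discourages collapse onto one flow
        w = w / w.sum()
        return torch.einsum('k,kba->ba', w, outs)
```

Per the rebuttal discussion, the mixture weights would then be fit by minimizing the negative log-likelihood of the task-specific trajectories.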
Despite the authors saying that the paper is "not too good to be true", I still find the stark contrast between baselines and the proposed method a bit hard to believe. I believe the code (if released) by the authors would reproduce the stated results in the paper, but what I am more skeptical of is that the baselines couldn't be tuned to perform much better. This is important for this specific paper, given the complexity of the method: a practitioner would want to know whether there is a simpler way to implement the improvements proposed here. For example, authors mention "The key of our strong results are due to our combination of 1-layer flows with explicit prior, which are missing in the baselines. SKiLD and FIST have an LSTM-VAE architecture, which is too heavy with few task-specific trajectories compared to 1-layer flows; PARROT includes neither explicit prior nor flow combination."
This makes me wonder whether there isn't some simpler way to implement this, e.g., k-NN retrieval paired with contrastive embeddings + small networks for behavior cloning.
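To make that alternative concrete, a minimal sketch of such a pipeline might look as follows; everything here (the names, the plain k-NN index, the (s, s_next)-conditioned cloning net) is hypothetical and only illustrates the idea, not a tested baseline:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

class RetrievalBC:
    # Hypothetical baseline: nearest-neighbor lookup over (embedded)
    # demonstration states, plus a small behavior-cloning policy that
    # consumes the current state and the retrieved next state.
    def __init__(self, states, next_states, embed, policy):
        self.embed = embed                  # e.g., a contrastive state encoder
        self.next_states = next_states      # successor of each stored state
        self.knn = NearestNeighbors(n_neighbors=1).fit(embed(states))
        self.policy = policy                # small net: (s, s_next) -> action

    def act(self, s):
        # Retrieve the demonstration state closest to s and look up its
        # successor; the cloned policy then imitates that transition.
        _, idx = self.knn.kneighbors(self.embed(s[None]))
        s_next = self.next_states[idx[0, 0]]
        return self.policy(np.concatenate([s, s_next]))
```

Notably, the rebuttal's Table B5 reports a behavior-cloning variant with this kind of retrieval (BC+EX) performing well below CEIP, which partially answers the question.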
A minor nit: The explicit / implicit priors terminology was also confusing to me, as I typically think of this as "amortized inference + retrieval" or "parametric learning + non-parametric learning".
Recommendation: accept.
| train | [
"VTzSNrtfgq",
"HdPgfwAvW4v",
"Y4E7Tc-9go",
"c5VY3K6UApI",
"CDGlNBwuBaD",
"0BN5xEK3BRz",
"FwB1MYBdHhf",
"7xuy38Di2V",
"duz0mc4-AN1",
"ezoNxEp7D3_",
"dGAZ7EfH08g",
"-ymyYOznsIW",
"8M9qlkCbnNw",
"B0L9oRjZ_FG",
"0zlYybwuPBm",
"VJ-TiRi8O6Y"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the clarification. I will keep my rating as a result of the author-reviewer discussion.",
" Thanks for the response. I have updated my rating accordingly. ",
" We thank all reviewers for their valuable and insightful comments. We have updated the pdf which integrates all advice and all new experiments conducted in response to all reviewers. We highlight all modified parts using blue font.",
" ### Q1: Push Forward\n\nPush forward in PARROT+TA does not make a difference in Kitchen-SKiLD-A, Kitchen-SKiLD-B and office because, in these environments, PARROT+TA never reversed to earlier states in the expert trajectory. Thus push-forward does not take effect and does not affect the RL process. Note, this does not mean that PARROT+TA is closest to the expert trajectory, as the distance between a PARROT+TA state and the expert state can be, and is indeed, very large (for FIST [1] in their own experiment setting and CEIP+TS+EX+forward, the distance is usually < 0.01; for PARROT+TA, the distance often diverges and can be greater than 1). It only means that PARROT+TA moves in about the same direction as the expert trajectory (such that it will not refer to the same state twice), albeit with offsets.\n\nPush forward in Kitchen-FIST-B does not change results, because neither PARROT+TA+EX nor PARROT+TA+EX+forward learns to finish the first task – note, this first task is missing in the task-agnostic dataset. This prevents PARROT from picking up this skill.\n\n### Q2: Office Result in rebuttal Table C3\n\nWe apologize for the confusion. We made a mistake in rebuttal Tab. C3 and used CEIP+TS+EX’s initial reward for CEIP+TS+EX+forward, which makes CEIP+TS+EX+forward look worse than PARROT+TA+EX and PARROT+TA+EX+forward. The correct initial reward for CEIP+TS+EX+forward should be 5.36. (See Fig. 5b for approximate initial reward for CEIP+TS+EX and CEIP+TS+EX+forward.) The results are consistent with those in other experiments.\n\n### Q3: GT24\n\nThe details are as follows:\n\n**Merging:** For Kitchen-SKiLD where the number of ground-truth labels is 33, there are exactly 9 labels that have no more than 3 demonstrations. We merge each of them into the label that is next to them in the dictionary order of concatenated task names.\n\n**Splitting:** For Kitchen-FIST where the number of ground-truth labels is x, x<24, we select the 24-x labels with the most demonstrations, and divide them evenly into two halves; each half is a new label. Note, no information is taken into account.\n\nWe have included the details and corresponding experiments in Appendix D.3.\n\n### Q4: Add figure in the revised Manuscript\n\nThanks a lot for the advice. We have modified the figures in the paper and added the curves of the new experiments. All revisions by the reviewers’ advice are marked blue in the updated pdf. The figures that have been modified include Fig. 3, Fig. 4, Fig. 5, Fig. 9, Fig. 10, Fig. 11, Fig. 12 a) c) e), Fig. 13 b) d) g), Fig. 15 and Fig. 16, for which all captions are marked blue. \n\n**References**\n\n[1] K. Hakhamaneshi et al. Hierarchical few-shot imitation with skill transition models. ICLR, 2022.\n",
" ### Q1: Why are the weights assigned to task-specific flows so small?\n\nNote, in the kitchen and office environment there are 24 task-agnostic flows and 1 task-specific flow. A uniform distribution assigns a weight of 1/25 = 0.04 to each flow. As we have mentioned in the last response, the architecture encourages the coefficients to be closer to a uniform distribution so as to avoid over-reliance on one flow.\n\n### Q2: Adding the definition of delta function to the main paper\n\nThanks a lot for the suggestion. We have modified the main paper to add this definition as suggested (marked blue in the updated pdf), as well as all suggestions mentioned in the original review. We are sticking to the 9-page limit for now but will move important experiments to the main paper for the camera-ready version upon acceptance where we have a 10-page limit.\n\n### Q3: Can the authors update the relevant plots with mean return instead, so that we can better see the impact of RL?\n\nIn the original submission, we plotted every figure of the paper to show the undiscounted mean reward. The “discounted return” is only the reason why we observe a “shortened episode length”. In the original Fig. 14, we show the episode length, which is a surrogate for discounted return because undiscounted return, as shown in the paper, is already at its maximum and doesn’t reveal the improvements made by RL.\n\n### Q4: Is CEIP before any RL sufficient for solving the tasks?\n\nThanks a lot for the clarification. As suggested by Fig. 4 and 5, CEIP before any RL is sufficient for solving some of the kitchen environments (but not all), and is not sufficient for solving the office environment (where the maximum reward is 8). In the cases where CEIP before RL can solve the task, RL shortens the episode length and refines the policy.\n\n",
" Thank you for your response. \n\nHere are some further questions/suggestions/clarifications: \n\n1. Do authors have any intuitions on why the weights assigned to task-specific flows are small? Given that the procedure for fitting the task-specific flow and the procedure for determining weights optimizes the same objective (negative log-likelihood on task-specific trajectories), I wonder why the weights assigned to task specific flows are this small. \n2. The definition of the delta function is helpful. Please add it to the paper. Neurips allows you to update the pdf. \n3. Thanks for the clarification regarding mean reward / episode length. I think mean (undiscounted) return is a better metric to plot on RL learning curves (instead of the mean reward that is currently plotted), and is more widely used when comparing RL algorithms (for example, see [1]). Can the authors update the relevant plots with mean return instead, so that we can better see the impact of RL? \n4. When I said imitation learning alone, I was referring to CEIP before any RL, not vanilla behavior cloning. \n\nI will update my rating after the next response. \n\n[1] Haarnoja et al., Soft Actor-Critic: Algorithms and Applications, 2018.",
" Thank the authors for dealing with the comments. I have several concerns and a comment regarding the response.\n\n**[Push Forward]**\n\nAccording to Figure 5b in the main paper, push forward heuristics which force the agent to move forward is a key factor to boost performance in CEIP. Moreover, as shown in Table 5c, it is also effective in PARROT. However, in Table 2, if PARROT also uses task-agnostic data, push-forward does not make any changes. Could you explain the reason?\n\n**[Office Result in Table 3]**\n\nWhy CEIP+TA+EX+Push is significantly worse than others in Office in Table C3? The author should explain and analyze the reason that does not match with results in other environments.\n\n\n**[GT24]**\n\nThe authors mention that GT24 is to make the same number of ground truth that is used in CEIP by merging and splitting the ground truth depending on the environment. Could you describe the details of how to merge or split it?\n\n\n**[Add Figure in the Revised Manuscript]**\n\nPresenting the tables in the response is appreciated. However, since this NeurIPS, paper change is allowed, it would be much better to add the Figure in the paper and refer it to here makes reviewers better understand.\n\n",
" ### Q4: Comparing CEIP using k-means and ground-truth labeling\n\nWe list the results in Tab. C5 and C6. Note we use 24 labels in k-means, but not all task-agnostic datasets have 24 ground truth labels. For a fair comparison, we also show the result using ground truth but pruned to 24 labels; we merge labels with few trajectories (for kitchen-SKiLD) and split labels with more trajectories (for kitchen-FIST) for this.\n \nHere are the meanings of each method in Tab. C5 and C6:\n\n**NFW:** No pushForWard (CEIP+TS+EX)\n\n**FW:** pushForWard (CEIP+TS+EX+forward)\n\n**GT24:** Ground Truth labels, but with merge and split to form 24 labels\n\n**GT:** Ground Truth labels; the number of subtasks differs\n\n**KM:** K-Means labels\n\n**Table C5:** Comparison between ground-truth label and k-means label for CEIP+TS+EX and CEIP+TS+EX+forward before RL.\n\nEnvironment | NFW+GT | FW+GT | NFW+GT24 | FW+GT24 | NFW+KM | FW+KM\n---|---|---|---|---|---|---\nKitchen-SKiLD-A | 4 | 4 | 4 | 4 | 4 | 4\nKitchen-SKiLD-B | 3.96 | 3.95 | 4 | 4 | 3.81 | 3.32\nKitchen-FIST-A | 3.59 | 3 | 3.68 | 3.24 | 3.44 | 3.41\nKitchen-FIST-B | 4 | 4 | 3.76 | 3.75 | 3.8 | 4\nKitchen-FIST-C | 3.94 | 3.81 | 3.80 | 3.85 | 4 | 3.94\nKitchen-FIST-D | 3.6 | 3.4 | 3.96 | 3.9 | 3.75 | 3.76\n\n**Table C6:** Comparison between ground-truth label and k-means label for CEIP+TS+EX and CEIP+TS+EX+forward after RL.\n\nEnvironment | NFW+GT | FW+GT | NFW+GT24 | FW+GT24 | NFW+KM | FW+KM\n---|---|---|---|---|---|---\nKitchen-SKiLD-A | 4 | 4 | 4 | 4 | 4 | 4\nKitchen-SKiLD-B | 3.87 | 3.87 | 4 | 4 | 4 | 4\nKitchen-FIST-A | 3.93 | 3.9 | 3.92 | 3.99 | 3.94 | 3.95\nKitchen-FIST-B | 3.8 | 3.74 | 3.97 | 3.88 | 3.92 | 3.89\nKitchen-FIST-C | 3.94 | 3.96 | 3.99 | 3.95 | 3.93 | 3.92\nKitchen-FIST-D | 3.71 | 3.93 | 3.87 | 3.96 | 3.95 | 3.94\n\nFor kitchen-SKiLD, ground truth (both 24 flows and 33 flows) label works better than k-means label (Tab. C5 shows higher reward). For kitchen-FIST, the reward is similar before and after RL training, suggesting that the precise label doesn’t matter.\n\nThe office environment contains 210 different tasks in the task-agnostic data, making it hard to train using the ground truth label.\n\n### Q5: Typos, language and plotting\n\nThanks for the suggestions, we’ll fix as suggested.\n\n### Q6: Parallelization of training process\n\nWe’ll parallelize training in our code release as suggested.\n",
" Thanks for appreciating our work.\n\n### Q1: The ratio and size of the dataset; and the coefficient of f_{n+1}. \n\nAs the reviewer correctly stated, there is no difference between the task-agnostic flow (f_1 to f_n) and the task-specific flow (f_{n+1}). We listed the ratio and size of the task-agnostic (TA) and task-specific (TS) data in the paper: \n\nFor fetchreach (L695-696): TA data contains 1600 * 8 = 12800 state-action pairs; TS data contains 40 * 4 = 160 state-action pairs\n\nFor kitchen-SKiLD (L709-712): TA data contains 136950 state-action pairs; TS data contains 214 (kitchen-SKiLD-A) or 262 (kitchen-SKiLD-B) state-action pairs\n\nFor kitchen-FIST (Tab. 2 in the appendix): TA data contains around 45K to 65K state-action pairs; TS data contains around 200-250 state action pairs\n \nFor office (L727-732): TA data contains 456033 state-action pairs; TS data contains 1155 state-action pairs\n\nRegarding f_{n+1}: we study the coefficient in Fig. 17 and L783-793.\n\n### Q2a: Difference between CEIP and PARROT+TS+EX+forward\n\nThanks for the suggestion. CEIP and PARROT+TS+EX+forward differ in CEIP’s use of a flow mixture trained on both task-specific and task-agnostic data. In contrast, PARROT+TS+EX+forward uses a single flow. We’ll clarify.\n\n\n### Q2b: Is the dataset size, i.e. whether using both task-specific and task-agnostic data, key for the result difference?\n \nNo, it is not key by itself. A combination of factors leads to improvements. In Fig. 11, we show that PARROT+(TS+TA) is not the best choice among all variants of PARROT. We also conduct experiments for PARROT+(TS+TA) with explicit prior and push-forward in Tab. C1 and C2 to better understand the effect of dataset size. The result shows that 1) dataset size alone makes no difference, and 2) the combination of dataset size and explicit prior improves PARROT, but with both components PARROT is still worse than CEIP and converges slower. \n\nFor both tables, we list CEIP+TS+EX+forward (CEIP with task-specific flow, explicit prior and push-forward) for convenience. \n\n**Table C1:** Results of PARROT+(TS+TA) at beginning of RL\n\nEnv | PARROT+(TS+TA) | PARROT+(TS+TA)+EX | PARROT+(TS+TA)+EX+forward | CEIP+TS+EX+forward \n---|---|---|---|---\nKitchen-SKiLD-A | 1.29 | 4 | 4 | 4\nKitchen-SKiLD-B | 0 | 2.75 | 2.75 | 3.32\nKitchen-FIST-A | 0 | 2.97 | 2.29 | 3.24\nKitchen-FIST-B | 0 | 2.4 | 2.41 | 3.75\nKitchen-FIST-C | 0.5 | 2.14 | 3.5 | 3.85\nKitchen-FIST-D | 1 | 3.8 | 3.94 | 3.9\nOffice | 0 | 4.32 | 4.32 | 3.55\n\n**Table C2:** Results of PARROT+(TS+TA) at end of RL\n\nEnv | PARROT+(TS+TA) | PARROT+(TS+TA)+EX | PARROT+(TS+TA)+EX+forward | CEIP+TS+EX+forward \n---|---|---|---|---\nKitchen-SKiLD-A | 1.56 | 4 | 4 | 4\nKitchen-SKiLD-B | 1.07 | 3.82 | 3.82 | 3.93 \nKitchen-FIST-A | 1.77 | 3 | 3.77 | 3.95\nKitchen-FIST-B | 0 | 3.98 | 3.94 | 3.89\nKitchen-FIST-C | 0.93 | 3.85 | 3.9 | 3.92\nKitchen-FIST-D | 2.55 | 3.99 | 3.99 | 3.94\nOffice | 0 | 5.89 | 5.89 | 6.33\n\n### Q3: How well do TA+EX and TA+EX+forward do?\n\nNot as good as CEIP and PARROT+(TA+TS) with explicit prior and push-forward. The results are listed in Tab. C3 and C4. For both tables, we list CEIP+TS+EX+forward for convenience. 
\n\n**Table C3:** Results of PARROT+TA+EX and PARROT+TA+EX+forward at beginning of RL\n\nEnv | PARROT+TA+EX | PARROT+TA+EX+forward | CEIP+TS+EX+forward \n---|---|---|---\nKitchen-SKiLD-A | 1.14 | 1.14 | 4\nKitchen-SKiLD-B | 0.71 | 0.71 | 3.32\nKitchen-FIST-A | 2.94 | 2.36 | 3.24\nKitchen-FIST-B | 0 | 0 | 3.75\nKitchen-FIST-C | 2.43 | 2 | 3.85\nKitchen-FIST-D | 3 | 3 | 3.9\nOffice | 4.32 | 4.32 | 3.55\n\n**Table C4:** Results of PARROT+TA+EX and PARROT+TA+EX+forward at end of RL\n\nEnv | PARROT+TA+EX | PARROT+TA+EX+forward | CEIP+TS+EX+forward\n---|---|---|---\nKitchen-SKiLD-A | 3.86 | 3.86 | 4\nKitchen-SKiLD-B | 2.43 | 2.43 | 3.93\nKitchen-FIST-A | 3 | 2.98 | 3.95\nKitchen-FIST-B | 0 | 0 | 3.89\nKitchen-FIST-C | 2.85 | 2.68 | 3.92 \nKitchen-FIST-D | 3 | 3 | 3.94\nOffice | 5.89 | 5.89 | 6.33\n\nThe effectiveness of explicit prior and pushforward is shown in Tab. C3 and C4 as PARROT+TA+EX and PARROT+TA+EX+forward work much better than PARROT+TA. The effectiveness of using task-specific data is also shown by comparing Tab. C3, C4 to Tab. C1, C2: PARROT+TA+EX is generally worse than PARROT+(TA+TS)+EX.\n\nNote 1: PARROT+TA+EX and PARROT+TA+EX+forward have the same reward for some entries, which means PARROT+TA gets stuck less (as discussed in L196-200).\n\nNote 2: In Kitchen-FIST the third / first / second / third task is missing from the task-agnostic data for A / B / C / D, respectively, thus the reward for FIST-B is 0 and the others are limited correspondingly.\n",
" ### Q3: How much does the method rely on precise task-specific demonstrations?\n\nOur experiments follow prior work (SKiLD [1] and FIST [2]), which don’t discuss precision of task-specific data. FIST has a very short discussion on noise in task-agnostic data and concludes with non-robustness to noise.\n\nTo fill this gap, as suggested by the reviewer, we move the items in the office environment for CEIP+TS+EX and CEIP+TS+EX+forward. We choose the office environment instead of the suggested kitchen environment because the former can be changed with one line of code, while the latter would require coordinating the positions of multiple components in xml files (e.g., microwave hitbox, microwave walls, microwave door, microwave door handle and the corresponding goal). \n\nThe original office environment uses a [-0.01, 0.01] uniformly random noise for the starting position of each dimension for each item in the environment. We increase this noise at test time (which the agent never sees in imitation learning) and show the result in Tab. B3. Albeit an improvement on FIST, CEIP is still not robust to imprecise demonstrations, which is a limitation we will add into the discussion.\n\n\n**Table B3:** Results of CEIP+TS+EX, CEIP+TS+EX+forward and FIST with random positioning of items\n\nNoise level | CEIP+TS+EX | CEIP+TS+EX+forward | FIST\n---|---|---|---\n0.01 (original) | 4.17 | 6.33 | 5.6\n0.02 | 4.20 | 4.17 | 3.8\n0.05 | 0.57 | 0.83 | 0.6\n0.1 | 0.05 | 0.1 | 0.1\n0.2 | 0.01 | 0.02 | 0\n\n### Q4: How well will replaying existing demonstrations work?\n\nBelow is the result for replaying the task-specific demonstration, averaged over 3 runs. We observe that the replay can’t solve the task, but works decently in some cases.\n\n**Table B4:** Results of replaying existing demonstrations (mean reward and std.dev); we list CEIP+TS+EX+forward (after RL) for convenience\n\nEnvironment | Replay | CEIP+TS+EX+forward \n --- | --- | --- \nKitchen-SKiLD-A | 1.0(+-0.82) | 4.0(+-0.00) \nKitchen-SKiLD-B | 0.67(+-0.94) | 3.93(+-0.08) \nKitchen-FIST-A | 2.33(+-0.47) | 3.95(+-0.05) \nKitchen-FIST-B | 0.67(+-0.47) | 3.89(+-0.07) \nKitchen-FIST-C | 2.33(+-0.94) | 3.92(+-0.06) \nKitchen-FIST-D | 2.33(+-0.94) | 3.94(+-0.07) \nOffice | 4.67(+-0.83) | 6.33(+-0.30) \n\n### Q5: Is the current and next state sufficient for computing the optimal action?\n\nThey aren’t. The result listed in Tab. B5 suggests that knowing the current and the next states improves policies. However, results are not as good as CEIP and exhibit a large variance.\n\n**Table B5:** Results of behavior cloning (mean reward and std.dev); we list CEIP+TS+EX+forward (after RL) for convenience\n\n Environment | BC | BC+EX | BC+EX+forward | CEIP+TS+EX+forward \n --- | --- | --- | --- | --- \n Kitchen-SKiLD-A | 0.02(+-0.04) | 1.52(+-1.15) | 2.2(+-0.62) | 4.0(+-0.00) \n Kitchen-SKiLD-B | 0.03(+-0.08) | 1.03(+-0.90) | 0.8(+-0.75) | 3.93(+-0.08) \n Kitchen-FIST-A | 0.67(+-0.76) | 2.17(+-0.06) | 3.03(+-0.15) | 3.95(+-0.05) \n Kitchen-FIST-B | 0.4(+-0.59) | 2.13(+-0.47) | 1.87(+-0.29) | 3.89(+-0.07) \n Kitchen-FIST-C | 0.5(+-0.75) | 2.2(+-1.61) | 1.9(+-0.96) | 3.92(+-0.06) \n Kitchen-FIST-D | 0.67(+-0.39) | 1.63 (+-1.42) | 2.17 (+-1.67) | 3.94(+-0.07) \n Office | 0.62(+-0.59) | 0.53(+-0.42) | 1.83(+-0.49) | 6.33(+-0.30) \n\n**References**\n\n[1] K. Pertsch et al. Demonstration-guided reinforcement learning with learned skills. CoRL, 2021\n\n[2] K. Hakhamaneshi et al. Hierarchical few-shot imitation with skill transition models. ICLR, 2022\n\n[3] J. 
Postels et al. Go with the Flows: Mixtures of Normalizing Flows for Point Cloud Generation and Reconstruction. 3DV, 2021\n\n[4] G. Pires et al. Variational Mixture of Normalizing Flows. arXiv, 2020\n\n[5] R. Giaquinto et al. Gradient Boosted Normalizing Flows. NeurIPS, 2020\n\n[6] R. Cornish et al. Relaxing Bijectivity Constraints with Continuously Indexed Normalising Flows. ICML, 2020\n\n[7] L. Dinh et al. A RAD approach to deep mixture models. ICLR Workshop, 2019\n\n[8] P. Izmailov et al. Semi-Supervised Learning with Normalizing Flows. ICML, 2020\n",
" Thanks for valuable feedback.\n\n### Q1a: Clarity of abbreviations\n\nThanks a lot for this suggestion. We’ll add:\n\n**Table B1:** Revised abbreviations for the ablation study of CEIP. We changed hyphen (“-”) to plus (“+”) for consistency with notations like “TS+TA”\n\n| Method | Task-specific flow | Explicit prior | Push-forward |\n|---|---|---|---|\n| CEIP | | | \n| CEIP+EX | |✓ |\n| CEIP+EX+forward | |✓ | ✓\n| CEIP+TS | ✓ | |\n| CEIP+TS+EX | ✓ | ✓ |\n| CEIP+TS+EX+forward | ✓ | ✓ | ✓ \n\n**Table B2:** Revised abbreviations for the ablation study of PARROT. “2way” and “4way” only appear in fetchreach ablation. See Fig. 11 for more\n\n| Method | Use task-agnostic data | Use task-specific data | Explicit prior | Push-forward |\n|---|---|---|---|---|\n| PARROT+TA | ✓ | | | | \n| PARROT+TS | | ✓ | | | \n| PARROT+(TS+TA) | ✓ | ✓ | | | \n| PARROT+TA+EX | ✓ | | ✓ | | \n| PARROT+TS+EX | | ✓ | ✓ | |\n| PARROT+(TS+TA)+EX | ✓ | ✓ | ✓ | |\n| PARROT+TA+EX+forward | ✓ | | ✓ | ✓ |\n| PARROT+TS+EX+forward | | ✓ | ✓ | ✓ |\n| PARROT+(TS+TA)+EX+forward | ✓ | ✓ | ✓ | ✓ |\n| PARROT+2way+TS | part of (see Fig. 11) | ✓ | | |\n| PARROT+4way+TS | part of | ✓ | | |\n| PARROT+2way | part of | | |\n\n\n### Q1b: Clarity of ablation\n\nThe ablation in the appendix is divided into two subsections, one for fetchreach and one for kitchen (ablation for office is in the main paper). \n\nFetchreach ablation (Sec. D.1) studies five questions:\n\n**1. What’s the effect of each component (task-specific flow, explicit prior and push-forward) in CEIP for fetchreach?** The answer: “unnecessary for a simple environment,” as discussed in L740-744 and Fig. 9\n\n**2. Does the number of flows in CEIP affect results?** The answer: “more flows improve results,” as discussed in L745-747 and Fig. 10\n\n**3. What’s the effect of explicit prior and push-forward technique for PARROT?** The answer: “unnecessary for a simple environment,” as discussed in L753-755 and Fig. 11\n\n**4. Will hand-picked more relevant task-agnostic data help PARROT?** The answer: yes, as discussed in L755-756 and Fig. 11\n\n**5. What do the generated trajectories look like for each method, before and after RL training?** See Fig. 12 for the former and Fig. 13 for the latter. The trajectories show: CEIP works the best\n\nKitchen ablation (Sec. D.2) studies four questions:\n\n**1. Is RL useful when the initial reward is already maximal?** The answer: yes in order to reduce episode length, as discussed in L771-773 and Fig. 14\n\n**2. What is the effect of each component in CEIP?** The answer: “TS and EX are both crucial”, as discussed in L775-778 and Fig. 15\n\n**3. What is the effect of each component in PARROT?** The answer: “explicit prior helps”, as discussed in L780-782 and Fig. 16\n\n**4. What is the coefficient for the task-specific flow?** The answer: “the mixture doesn’t degenerate to one flow”, as discussed in L783-793 and Fig. 17\n\n### Q2: “method mainly builds upon PARROT and includes tricks for fine-tuning”\nWe kindly disagree. While we use the PARROT backbone, there are two major differences between our method and PARROT. Both lead to significant improvements and new directions of research:\n\n**1. Normalizing flow mixture.** Our linear combination improves results and never appears in prior work. As shown in Fig. 5c and 16, compared with PARROT equipped with explicit prior, our mixture significantly improves the initial reward, thus CEIP converges faster. 
Prior works on normalizing flow mixtures use Gaussian mixture latent distributions [3], variational mixtures [4], boosting [5], relaxation of invertibility [6], etc.; however, almost all prior works [4, 5, 6, 7, 8] only use simple datasets (e.g., MNIST / synthetic). [3] addresses point-cloud generation and reconstruction, but aims for stronger expressivity instead of distilling knowledge from different tasks. To the best of our knowledge, our CEIP flow mixture is the first mixture-of-normalizing-flows work that distills knowledge from multiple tasks in an RL setting, using a never-before-studied combination method.\n\n**2. The explicit prior.** Our explicit prior significantly improves behavior cloning (Tab. B5), CEIP (Fig. 15) and PARROT (Fig. 5c). Though the explicit prior was studied recently in robotics (L348), this component hasn’t found its way into the RL literature (L346-348); we adopt it and propose a novel push-forward technique. \n\nWe hence think CEIP goes beyond “PARROT + tricks for fine-tuning.”\n",
" ### Q6b: Comparison to PARROT with task-specific and task-agnostic data\n\nGreat suggestion. The result on fetchreach was listed in Fig. 11, which is no better than other variants of PARROT. For kitchen and office we conduct new experiments, and contrast this variant of PARROT (denoted as PARROT+(TS+TA)) to CEIP (numbers reproduced from Fig. 4 and 5). For both tables, we also list CEIP with task-specific flow, explicit prior and pushforward technique (CEIP+TS+EX+forward) for convenience. \n\n**Table A1:** Results of PARROT+(TS+TA) (reward) before RL training\n \nEnv | PARROT+(TS+TA) | CEIP+TS+EX+forward \n---|---|---\nKitchen-SKiLD-A | 1.29 | 4\nKitchen-SKiLD-B | 0 | 3.32\nKitchen-FIST-A | 0 | 3.24\nKitchen-FIST-B | 0 | 3.75\nKitchen-FIST-C | 0.5 | 3.85\nKitchen-FIST-D | 1 | 3.9\nOffice | 0 | 3.55\n\n**Table A2:** Results of PARROT+(TS+TA) (reward) after RL training\n\nEnv | PARROT+(TS+TA) | CEIP+TS+EX+forward \n---|---|---\nKitchen-SKiLD-A | 1.56 | 4\nKitchen-SKiLD-B | 1.07 | 3.93 \nKitchen-FIST-A | 1.77 | 3.95\nKitchen-FIST-B | 0 | 3.89\nKitchen-FIST-C | 0.93 | 3.92\nKitchen-FIST-D | 2.55 | 3.94\nOffice | 0 | 6.33\n\nStudying both tables, we find that PARROT+(TS+TA) does not work. \n\n### Q7a: Why are curves in Fig. 4 flat?\n\nAs the reviewer correctly stated, CEIP reaches the maximum reward before RL, and continues to solve all subtasks throughout it. This is because the flow-mixture with the help of an explicit prior and a task-specific flow can transform a policy close to a normal distribution to solve all subtasks. This is achieved as the flow learns to transform a normal distribution into an expert policy. For this, both the explicit prior conditioning on s_next and the task-specific single flow are important. The curve won’t be flat without either component.\n\nAlso note, the agent improves during RL training despite the flat reward curve: as shown in Fig. 14, the average episode length decreases during RL training, increasing the discounted reward.\n\n### Q7b: Is imitation learning sufficient for solving the task?\n\nNo. Imitation learning alone, e.g., behavior cloning, is brittle as shown below:\n\n**Table A3:** Results of behavior cloning (mean reward and std. dev); we also list CEIP+TS+EX+forward (after RL) for convenience\n\n| Env | BC | BC+EX | BC+EX+forward | CEIP+TS+EX+forward |\n| --- | --- | --- | --- | --- |\n| Kitchen-SKiLD-A | 0.02(+-0.04) | 1.52(+-1.15) | 2.2(+-0.62) | 4.0(+-0.00) |\n| Kitchen-SKiLD-B | 0.03(+-0.08) | 1.03(+-0.90) | 0.8(+-0.75) | 3.93(+-0.08) |\n| Kitchen-FIST-A | 0.67(+-0.76) | 2.17(+-0.06) | 3.03(+-0.15) | 3.95(+-0.05) |\n| Kitchen-FIST-B | 0.4(+-0.59) | 2.13(+-0.47) | 1.87(+-0.29) | 3.89(+-0.07) |\n| Kitchen-FIST-C | 0.5(+-0.75) | 2.2(+-1.61) | 1.9(+-0.96) | 3.92(+-0.06) |\n| Kitchen-FIST-D | 0.67(+-0.39) | 1.63 (+-1.42) | 2.17 (+-1.67) | 3.94(+-0.07) |\n| Office | 0.62(+-0.59) | 0.53(+-0.42) | 1.83(+-0.49) | 6.33(+-0.30) |\n\n### Q8: Discussing limitations\n\nWe thank the reviewer for pointing out the need to better discuss limitations beyond mentioning computation in L375-376. We’ll add the following:\n\n**Reliance on optimality of expert demonstrations.** Similar to prior work like SKiLD and FIST, our method assumes availability of optimal state-action trajectories for the target task. Accuracy of those demonstrations impacts results. Future work should improve robustness and generality.\n\n**Balance between the degree of freedom and generalization in fitting the flow mixture.** Fig. 
9 reveals that more degrees of freedom in the flow mixture improve the results of CEIP. Our current design uses a linear combination, which offers O(n) degrees of freedom (\\mu and \\lambda), where n is the number of flows. In contrast, too many degrees of freedom cause overfitting. Studying this tradeoff is interesting future work.\n\nWe invite the reviewer to point out further limitations, which we are happy to include.\n\n### Q9: Are the results too good to be true?\n\nThe results aren't too good to be true. \n\n1. Our improvements have been validated to be consistent across three benchmarks with systematic ablations. \n\n2. The key to our strong results is our **combination** of **1-layer flows** with an **explicit prior**, both of which are missing in the baselines. SKiLD and FIST have an LSTM-VAE architecture, which is too heavy with few task-specific trajectories compared to 1-layer flows; PARROT includes neither an explicit prior nor flow combination.\n \n3. We observe large performance improvements upon introducing our components into the baselines. PARROT (Fig. 5c,16) and behavior cloning (Tab. A3) with an explicit prior work much better; compared to PARROT with an explicit prior, CEIP converges faster with the combination of flows (in Fig. 5c and 16, PARROT+EX works well but converges slower). This again demonstrates the effectiveness of our design.\n\n**References**\n\n[1] K. Pertsch et al. Demonstration-guided reinforcement learning with learned skills. CoRL, 2021.\n\n[2] K. Hakhamaneshi et al. Hierarchical few-shot imitation with skill transition models. ICLR, 2022.\n",
" Thanks for valuable feedback.\n\n### Q1a: Complexity of method and number of moving parts\n\nOur method is simpler to train and has no more moving parts than prior work that combines task-specific and task-agnostic data.\n\nE.g., SKiLD [1] trains 1) a VAE with LSTM; 2) two separate prior models that mimic the VAE sequence encoder, one for task-agnostic and one for task-specific data; 3) a binary classifier determining which prior to use; and 4) an RL agent with reward shaping whose coefficient requires to be learned. FIST [2] trains 1) an explicit prior (the retrieval system) using contrastive learning, 2) an LSTM-VAE similar to SKiLD; and 3) the implicit prior structure similar to SKiLD.\n\nIn contrast, our method consists of 1) a non-parametric preprocessing (k-means); 2) training of simple single-layer flows (the mixture is also a single-layer flow); and 3) an RL agent independent of the flow architecture. \n\n### Q1b: Ad-hoc design choices for retrieval\n\nWe used a simple yet effective retrieval. We are excited to see that it yields good results. Surely, a more elaborate system can further improve results. We leave this to future work. \n\n### Q2: Too much space for explaining methods and too little for analyzing results\n\nThanks for the suggestion. We’ll defer details to the appendix and compress method explanation in the paper. We’ll use the space to move ablations from appendix to paper.\n\n### Q3: Definition of “Implicit Priors” and “Explicit Priors”\n\nBoth were explained early in the paper. We defined “implicit prior” in L24-25: “... distills the information within the demonstrations into an implicit prior by encoding available demonstrations into a deep net.” We defined “explicit prior” in L30-32: “enable the agent to maintain a database of demonstrations, which can be used to retrieve state-action sequences given an agent’s current state.” This is consistent with the reviewer’s understanding. We’ll clarify.\n\n### Q4: Do results overly rely on task-specific flow? What coefficients were assigned to the task-specific flow?\n\nResults don't overly rely on the task-specific flow. We empirically studied this in Fig. 17 in the appendix. As stated in L784-786, if the result overly relies on the task-specific flow, the coefficient \\mu_{n+1} for the task-specific flow would be 1 and \\mu_i for all other flows would be 0. However, in Fig. 17, the coefficient for the task-specific flow (orange curve) is far from 1 (below 0.05), and the coefficient for a particular task-agnostic flow (blue curve) is far from 0 (generally above 0.02).\n\nIntuitively, over-reliance in our design (Fig. 6 bottom) is discouraged because of the softplus function and the positive offset applied on \\mu. For over-reliance, all task-agnostic flows f_i should have a coefficient of \\mu_i=0, which is unreachable due to the positive offset of \\mu, and hard to approach due to the softplus.\n\n### Q5: Definition of \\delta=1\n\n\\delta is the indicator function. For a particular s_next in a trajectory \\tau in the task-specific data, \\delta=1 if and only if there exists a state s’_next in \\tau, such that s’_next satisfies the following two properties: 1) s’_next is no earlier than s_next in \\tau; 2) s’_next has been retrieved once in the same RL episode. s’_next is “earlier” than s_next if it has a smaller index in a trajectory \\tau, sorted in ascending order of execution time. 
This imposes a monotonicity on the retrieved s_next, i.e., it is hard for a state to be retrieved twice, and hard to first refer to later steps in a trajectory and then go back to earlier ones. We’ll clarify and add the following formula:\n\n$$\\delta(s_{next})=\\begin{cases}1 (\\text{if }\\exists \\tau\\in D_{n+1}, s’_{next}\\in\\tau, \\text{s.t. } s_{next}\\in\\tau, s’_{next} \\text{ is no earlier than } s_{next} \\text{ and has been retrieved}) \\\\\\\\0 (\\text{otherwise})\\end{cases}$$\n\nwhere $\\tau$ is a trajectory and $D_{n+1}$ is the task-specific data.\n\n### Q6a: Higher reward at start in Fig. 3\n\nThe higher initial reward of CEIP in Fig. 3 highlights the robustness of CEIP to starting point randomization in fetchreach. Fig. 8 shows 3 ways of starting point randomization. All results in the paper use the most challenging randomization illustrated in Fig. 8c. We find PARROT to work well with simpler randomization methods (Fig. 8a,b), but to struggle with the challenging randomization shown in Fig. 8c. In contrast, CEIP works well regardless of starting point randomization.\n",
" The paper presents a technique for combining \"explicit\" and \"implicit\" priors for reinforcement learning with demonstrations. Implicit priors refer to priors that store the knowledge from demonstration in a neural network of some kind (such as a flow-based generative model), while explicit priors store demonstrations as a database that can be queried when learning a new task. The overall method has several steps: \n\n1. Given a dataset of task-agnostic (TA) demonstrations, cluster them using k-means (where the last state of each trajectory is used as the clustering feature). If the given datasets are already divided into separate tasks, this step can be skipped. \n2. Learn a flow-based prior for each of the cluster from the previous step, and learn a flow-based prior for task-specific trajectories. The number of task-specific trajectories is much less than the total number of trajectories. Often, there's only one task-specific trajectory (one-shot learning). \n3. Learn weights to combine the numerous flow-based models into a single model. These weights are learned by minimizing a loss on the task-specific dataset, and I have some questions about this step below. \n4. Given a new task, train a policy \\pi(z|s) that controls the prior model, which in turn controls the environment. This is similar to prior work (PARROT). \n5. You are also allowed to access a dataset of expert trajectories on the current task. This dataset is accessed via a retrieval procedure wherein you search for a state that is most similar to the current state, and retrieve the next state. The next state is then passed as an additional input to the flow-based models, allowing the flow-based model to act as a sort of inverse dynamics model -- given state and next state, the flow based model predicts an action. This step is crucial for good performance from what I can tell. \n\nThe authors evaluate their method on simulated robot manipulation tasks. \n\nCurrently, my rating for the paper is somewhat low as I am not able to make sense of the results, and I am hoping the authors are able to answer the questions I asked here so that I can revise my rating accordingly. \n\nPost-rebuttal: I have updated my rating to a borderline accept following the author response, as the authors have addressed a most of my concerns. I would encourage the authors to release their code if they can, as it would make the paper reproducible, and would spur future research in this area. One of my concern was that the results seemed almost too good to be true, and this concern can also be put to rest with a code release. Strengths\n- The paper tackles an important problem: how to best make use of task-agnostic and task-specific demonstrations to speed up reinforcement learning of new tasks. \n- The paper obtains some impressive results on an interesting suite of tasks. \n- The paper compares against a number of baselines, competing methods, and ablations. \n\nWeaknesses\n- The overall method is somewhat complex, with a number of moving pieces, and some of the design choices (such as the retrieval system) seem a bit ad-hoc. \n- The experimental results are a bit puzzling, and I will add more details on this in the Questions section below. \n- Most of the analysis / ablations are actually in the Appendix. 
Overall, the authors spend too much space explaining the method (which could have been shortened, since we don't need all the details on how flow-based models are trained, we can refer to prior work for that), and too little space analyzing the experimental results. \n - Authors use the terms “explicit priors” and “implicit priors” frequently, but these terms are never clearly defined (and they are not standard terms, as far as I know). From my reading, it seems like implicit priors refers to a method that trains a neural network using provided demonstrations, while explicit priors store all demonstrations in memory and can be later accessed using a nearest neighbor style technique. Can you clarify what exactly these two terms mean, and add these definitions towards the beginning of the paper? \n- The authors train several different flows on the task-agnostic and task-specific datasets, and then combine these flows using some coefficients, and these coefficients are learned by minimizing the log-likelihood of the “combined flow” on task-specific trajectories. However, given that one of the flow models was trained on task-specific trajectories alone, this subsequent optimization might result in the coefficient for the task-specific flow model to receive most of the weight. Am I missing something here? Can authors share what coefficients were assigned to the task-specific flow model in their experiments? \n- I understand the motivation behind the push-forward term in the retrieval objective function (Equation 6), but I did not completely follow the definition of the indicator function \\delta. Can the authors add another equation that precisely defines when \\delta = 1? Is it when s_next = s? \n- In Figure 3, why does CEIP have a much higher reward right at the start of training (when steps = 0)? Why do other flow-based methods (such as PARROT) start with a much lower return? In particular, I would expect PARROT-TS to have a good initial performance, given that it’s trained on data for the task at hand (and task is relatively simple). Also, since CEIP trains on both TA and TS data, it would make sense to compare against a version of PARROT that uses both TA and TS. \n- In Figure 4, the learning curve for CEIP for about half the tasks are essentially flat, with maximum performance achieved without any finetuning. Can the authors explain what is happening here? Is imitation learning alone sufficient for solving this task? The results seem very puzzling to me.\n There is almost no discussion on limitations in the paper (the authors only briefly mention the computational cost of training many different flow models), which is disappointing. The lack of discussion around limitations, combined with results that seem almost too good to be true is a bit concerning. ",
" This submission proposes a skill learning algorithm for offline RL that can leverage task-specific demonstrations as implicit and explicit priors. The skill policy consists of a mixture of normalizing flows (trained on task-agnostic demonstrations) where the mixture weights are fine-tuned on task-specific demonstrations. The flows resemble an inverse model, in that they are conditioned on both the current and next state. The next state forms the explicit prior, as it is queried from the task-specific dataset via nearest-neighbor lookup for each state encountered during training. The explicit prior enables very effective few-shot generalization (i.e., utilizing the task-specific demonstrations without with random latent vectors, as this is what I suppose corresponds to the first data point in the learning curves), and learning a high-level policy to predict latent state flow inputs can further improve performance. The paper is very well-written and easy to follow. Clarity in the experimental section could be improved by stating what the abbreviations in ablation studies mean, and the ablations in the appendix are a bit hard to follow. Overall, the idea presented is well-motivated, novel and concerns a topic of high interest. The technique mainly builds upon PARROT and includes tricks for fine-tuning on task-specific demonstrations.\n\nThe experiments clearly show that the proposed method is effective in the settings considered; in fact, there is no setting in which few-shot generalization does not already work much better than all baselines considered. Overall, this suggests a potential limitation that doesn't seem to be acknowledged explicitly in the paper: how much does the method rely on precise task-specific demonstrations being available? What would happen if the environment changes slightly, e.g., the microwave in the kitchen environment would be moved 10cm to the left? I'd be happy to raise my score if this limitation is properly addressed, as I think it would help greatly in judging the method's applicability and impact. - How would simply replaying a task-specific demonstration perform on the benchmarks? Does it already provide optimal reward and trajectories? I assume that fine-tuning the normalizing flow mixture requires ground truth actions of the demonstrations, so the return of the demonstrations themselves could be added as a baseline/topline.\n- How much leverage does the high-level policy actually have over the actions? The flow policy is invertible, but conditioning on current and next state might, depending on the environment, provide all required information to compute an action already. Some analysis on this aspect would be appreciated. The paper is not clear on the quality of demonstrations that are expected for the method to work effectively.",
" The paper proposes a method CEIP that leverages explicit and implicit priors from the demonstration for reinforcement learning. On top of PARROT [45] which uses normalizing flow prior to task agnostic dataset to generate action, CEIP learns coefficients to combine flows to generate one task-specific flow parallel and use (state, predicted next state) pairs as the input of the flow instead of the state. As a result, CEIP achieves better performance than its baselines, PARROT, FIST, and SKiLD. **[Strength]**\n\n1. The paper is well-written and easy to understand. \n\n2. Especially, the detailed description of baselines, environments and other experimental detail make the reader have a better understanding of with the attached codebase.\n\n3. The proposed method clearly outperforms other baselines, PARROT, SKiLD, and FIST in all experiments and the authors also present the contribution of explicit prior by presenting Figure 5(c).\n\n\n**[Weakness]**\n\n1. It seems that there is no difference between task-agnostic flow (f_1 to f_n) and task-specific flow f_{i+1}. The ratio and size of the task-agnostic dataset and task-specific dataset are needed. Moreover, I recommend presenting the coefficient of f_{i+1} in those cases.\n\n2. Another concern is that a detailed description difference between CEIP and PARROT-TS-EX-forward is missing. According to Line 220, PARROT-TS only uses task-specific data while CEIP-TS uses both task-specific and task-agnostic data. Is the dataset size a key factor of the difference or the Few-Shot Adaptation?\n\n\n3. To present the effectiveness of the explicit prior, showing the results of TA-EX, TA-EA-forward in PARROT are better than TS-EX and TS-EX-forward (Figure 5(c)).\n\n\n4. It is good to compare using CEIP with the ground-truth task label (oracle) for not using k-means clustering.\n 1. In Line 202, us -> use\n\n2. In Figure 5 (b) and Figure 9, it is hard to differentiate between solid and dotted lines. It is better to use different markers for the same category of EX instead of two lines.\n\n3. In Lines 164-167, “most useful” is a vague term, it would be better to use other expressions such as making the best performance, etc.\n The authors addressed the limitations and potential negative societal impact of their work. As mentioned in the paper, I am concerned about the training time. I recommend parallelizing the training procedure.\n"
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"c5VY3K6UApI",
"CDGlNBwuBaD",
"nips_2022_R5KjUket6w",
"FwB1MYBdHhf",
"0BN5xEK3BRz",
"B0L9oRjZ_FG",
"duz0mc4-AN1",
"VJ-TiRi8O6Y",
"VJ-TiRi8O6Y",
"0zlYybwuPBm",
"0zlYybwuPBm",
"B0L9oRjZ_FG",
"B0L9oRjZ_FG",
"nips_2022_R5KjUket6w",
"nips_2022_R5KjUket6w",
"nips_2022_R5KjUket6w"
] |
nips_2022_lMMaNf6oxKM | Recipe for a General, Powerful, Scalable Graph Transformer | We propose a recipe on how to build a general, powerful, scalable (GPS) graph Transformer with linear complexity and state-of-the-art results on a diverse set of benchmarks. Graph Transformers (GTs) have gained popularity in the field of graph representation learning with a variety of recent publications but they lack a common foundation about what constitutes a good positional or structural encoding, and what differentiates them. In this paper, we summarize the different types of encodings with a clearer definition and categorize them as being $\textit{local}$, $\textit{global}$ or $\textit{relative}$. The prior GTs are constrained to small graphs with a few hundred nodes, here we propose the first architecture with a complexity linear in the number of nodes and edges $O(N+E)$ by decoupling the local real-edge aggregation from the fully-connected Transformer. We argue that this decoupling does not negatively affect the expressivity, with our architecture being a universal function approximator on graphs. Our GPS recipe consists of choosing 3 main ingredients: (i) positional/structural encoding, (ii) local message-passing mechanism, and (iii) global attention mechanism. We provide a modular framework $\textit{GraphGPS}$ that supports multiple types of encodings and that provides efficiency and scalability both in small and large graphs. We test our architecture on 16 benchmarks and show highly competitive results in all of them, show-casing the empirical benefits gained by the modularity and the combination of different strategies. | Accept | This paper presents a powerful, general, scalable, and linearly complex graph Transformer. Positional encodings and structural encodings are redefined with local, global, and relative categories, and an attempt has been made to include local and global focus attentions in a graph Transformer. All of the reviewers acknowledged the novelty of this work, particularly within the context of the domain, and therefore voted for its acceptance. Please take feedback from reviewers into account when preparing the camera-ready version. | train | [
"Z7ikPmJiNuB",
"WORCGPyI-8Z",
"G9ppKtmRqw3",
"jd0mwxwDK9i",
"fDxtu-BLC8U",
"F1mTGUo1aiw",
"pL0DPsUC8Vr",
"WnutjwaE7yO",
"gxzpCWCPgjI",
"Cvvx24DrLMw",
"Y7t6COO-OLv"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer 719v,\n\nThank you again for your review and comments! We have tried our best to address your questions and accordingly we revised the paper.\n\nAs we are near the end of the discussion period, we sincerely hope that you could provide us with a feedback on our revision and whether it has addressed your concerns. If so, we would appreciate it if you could consider raising the score. And if not, please let us know your outstanding concerns!\n\nBest regards,\nAuthors",
" Thank you for your clarification on the subquadratic transformers. ",
" after reading this reply, I think this work is helpful for the community of graph learning. Although its theoretical contribution is limited, I appreciate these take-aways for researchers and practitioners and really like the availability of this work through GraphGym. Thus, I just changed the overall score from reject to borderline accept.",
" We thank the reviewer for their review and are happy to hear that they found our paper well structured and well written. We carefully considered their criticism and hope we can convince the reviewer that the paper is worth a higher score by clearing up their concerns.\n\n### Re Q1:\nThe local and global PE/SE together are in a sense “absolute”, they differ in the scope of the frame in which they provide the position (local or global). Thinking of them as opposites may not be the best way. Perhaps we could rephrase our categorization as firstly differentiating absolute and relative encodings, and secondly further dividing the absolute encodings to local and global.\nWith that, we note that our categorization is not definitive as some existing PE/SE do not necessarily fall exclusively under one category (as mentioned in the caption of Table 1, page 4) . For example SignNet, which starts with global PE features (Laplacian eigenvectors) but then employs a learnable 8-layer GIN model that can contribute aspects of relative PE (by computing relative distance) and local SE (as GIN could learn to be e.g. the WL kernel). With more emerging graph positional and structural encodings, a different categorization or taxonomization may become needed. Yet we believe the categorization as we put it forward provides a useful frame of reference and axes along which to analyze graph PE/SE properties.\n\n### Re Q2:\nRelative SE is indeed an uncommon category, it nevertheless exists. In Table 1 we list one such example based on a recently published method, the CW Networks by Bodnar et al. [7] that perform sub-structure aware message passing, equivalent to a 1/0 weighting of the edges belonging to the same/different sub-structures. Further, it is easy to imagine using relative node distances based on graph kernels (e.g., RW or k-WL kernels) as relative SEs. Thus it certainly has a place in our PE/SE categorization.\n\n### Re Q3:\nWe stand behind our 5 main contribution points that are clearly listed in the Introduction and summarized in the Conclusion. Here we expand on the contribution points (i), (iii), and (v):\n\n**(i) point:** Our main contribution is the general “GPS” blueprint which incorporates 3 principal blocks: positional and structural encodings, local message passing and global attention. To the best of our knowledge, such a blueprint has not been investigated in the existing literature. Although some related works have been done, which we discuss in Section 2. A key distinguishing aspect of our work is the clear separation of these 3 blocks and recognition of their principal inductive biases. The current literature typically focuses on one kind of PE/SE, there is no consensus around a universal PE/SE. Next, the differentiation of the local and global attention components, emphasizes the importance of the “locality bias” of MPNNs and the necessity for more efficient information propagation across a graph. This yields a design space that is considerably more flexible and applicable to a variety of graph learning datasets, as demonstrated by our extensive experimental results: outperforming every graph Transformer on 10 out of 11 benchmarks.\n\n**(iii) point:** All published graph Transformers utilize a quadratic attention mechanism. We are the first to accomplish linear (in the number of nodes) scalability of global attention-enabled GNN and demonstrate it on the MalNet-Tiny dataset with graphs of several thousands of nodes. 
While we achieve this in part thanks to the modularity of our design (the (i) contribution), we understand it as a separate contribution worthy of dedicated attention. In the response to Reviewer Md1W we further discuss the pros and cons of linear attention via Performer kernel approach.\n\n**(v) point:** We provide the GraphGPS package which is built on top of GraphGym – a design space for GNNs. In a similar vein, GraphGPS is a codebase for a design space of graph Transformers, with support for a variety of MPNNs, graph Transformers, and PE/SEs. We take this opportunity to emphasize that the package is not simply a code release of the paper for reproducibility purposes, which is implicit. But, beyond that, we reimplemented SAN [2] in GraphGPS as well as the original Graph Transformer [6], both of which are noticeably faster than their original implementations. As such, our fifth contribution is the open-source package that implements the very modular GPS blueprint, provides a testbed for new positional and structural encodings (irrespective of the actual GNN used, e.g. a standard MPNN without any global attention is perfectly well supported too), MPNNs and global attention mechanisms. We believe GraphGPS to be a convenient resource for researchers given its modular implementation and support for over a dozen of existing benchmarking datasets, which is easily extensible as well.",
" ### Re Q4:\nWe believe our work provides a set of contributions to the graph learning community worthy of NeurIPS; summarized in the 5 points at the end of the Introduction section. Our approach is well motivated and the fact that the 3-part recipe turns out to be relatively straightforward is only to its benefit. We evaluated GPS on the largest set of benchmarking datasets of any single graph Transformer paper (outperforming every graph Transformer on 10 out of 11 benchmarks), and provided crucial insights in ablation studies. \n\n### Re Q5:\nThank you for catching this formatting issue, it is fixed in the revised version!\n\n\n[2] Kreuzer, D. et al. \"Rethinking graph transformers with spectral attention.\" NeurIPS 2021\n\n[6] Dwivedi, V.P., and Bresson X. \"A generalization of transformer networks to graphs.\" arXiv:2012.09699 (2020).\n\n[7] Bodnar et al. “Weisfeiler and Lehman Go Cellular: CW Networks”. NeurIPS 2021",
" Our GPS empirically benefits from global attention, however we observed the magnitude of this performance gain to be dataset-dependent, with most pronounced benefit in datasets with long-range dependencies. Please see our answer to Question 1 of Reviewer **719v**.\n\nThe pros and cons of linear attention mechanism vs. $O(N^2)$ vanilla Transformer attention is an interesting discussion point that we would like to expand on. \n- In the majority of current datasets we do not observe the performance benefits of linear attention. The average size of graphs in the majority of current benchmarking datasets does not surpass a few hundreds, which is manageable for an $O(N^2)$ Transformer. Here it is important to mention that in practice, even though the asymptotic complexity remains quadratic, our GPS design still empirically benefits from not having to explicitly condition the Transformer attention. This is in contrast to SAN or Graphormer, that need to explicitly construct the dense attention matrix conditioned on other graph properties, such as shortest-path-distances or edge types and attributes. As we mention in the main text, GPS with local MPNN and Transformer is in practice much faster than SAN despite the same asymptotic complexity. Therefore our design choice of decoupling local processing from the global attention not just allows for a plug-and-play linear attention (or many other x-former models), it also significantly improves wall-clock run time when $O(N^2)$ Transformer is used.\n- Performer (linear attention) starts to provide meaningful speedup once the graphs are several thousands of nodes large. We demonstrated this point in our ablation studies on MalNet-Tiny, where GPS with a Performer global attention module is approximately twice as fast as with a vanilla Transformer (Table B.3).\n- In practice, we observed that as long as the $O(N^2)$ Transformer is not prohibitively expensive (i.e., it would be graphs with above ~10k nodes) it remains the best choice for a global attention module. The Transformer tends to yield better prediction performance than Performer in all our experiments. This is in line with the recent findings in NLP applications of Transformer-like models by Tay et al. [4]. Linear transformers (or x-formers in general) have a different inductive bias and scale differently at varying dataset size, model size, and compute budget scenarios, often leaving “vanilla” Transformer as the best choice [4,5].\n\nWe did not include this discussion in the current revision of our paper. Upon acceptance, we would include it in a potential camera ready version that allows an additional space to accommodate additional content.\n\n\n[4] Tay, Yi, et al. \"Scaling Laws vs Model Architectures: How does Inductive Bias Influence Scaling?.\" arXiv:2207.10551 (2022).\n\n[5] Dehghani, M. et al. \"The efficiency misnomer.\" ICLR 2022, arXiv:2110.12894 (2022).\n",
" We would like to thank the Reviewer for the valuable feedback and comments on the limitations and questions.\n\n### W1: The overall algorithm can be well presented, like a presudo algorithm can be helpful to understand the implementation.\n\nPlease see Appendix D for exact formulation of the GPS layer and Figure 1 for the flowchart of the whole GPS pipeline. Additionally, in the current revision, we extended the Appendix D with a GPS pseudocode algorithm, please see the new Appendix D.2 (page 24).\n\n### W2 / Q1: In graph Transformer, Do we really need to allow nodes to attend to all other nodes in a graph (global attention).\n\nWe asked ourselves the same question, i.e. whether and when is global attention beneficial. We investigated it in two ways. 1) We conducted ablation studies, where we disabled the global attention module in GPS. These results are part of the original submission, Table 2A, with all the details in Appendix B. Indeed, the global attention is not always necessary, and its usefulness is dataset-dependant. 2) We suspect that global attention is particularly important in datasets that contain long-range dependencies. Therefore we utilized a recently proposed Long-Range Graph Benchmark (LRGB) [1] set of 5 such datasets. These results are part of our revised paper, see Appendix E. In all these benchmarks, GPS with global attention outperformed MPNN baselines by a large margin. Further, additional ablation studies (Tables E.2 and E.3) confirm that disabling of the global-attention module in GPS leads to notable performance degradation on these datasets with long-range dependencies.\n\n### W3 / Q2: The universal function on graph is not clear. A more detailed analysis why the proposed framework can achieve universal funtion approximator is required.\n\nAlthough we mostly refer to the theoretical result of Kreuzer et al. for SAN [2] to justify the universality of GPS in the main text, we also provide more details about theoretical expressiveness in Appendix C.\n\nIn particular, the seminal work of Xu et al. [3] showed that under the assumption of exponential increase of hidden dimension, the sum over a countable multiset is universal. It is also known [3] that if all node representations $h_u$ are unique and countable for every node $u$, then there exists an injective permutation-invariant function as long as this hashing includes *all* information about the edges. The intuition behind universality of the GraphGPS architecture lies in that: (1) the uniqueness of $h_u, \\forall u \\in G$ is achieved via structural and/or positional encodings, e.g., using the Laplacian eigenvectors as PEs that can be aggregated by any set function like DeepSets and added to node features. And (2), the unique hashing is achieved by the self-attention mechanism over an expressive enough function that can be, for instance, a tensor product of one-hot encoding unique for each edge with its edge feature (Appendix C.2). Such a function requires an exponential increase of the node representation size with each added layer, but so does the original proof by Xu et al [3], so we are not further relaxing their assumptions.\n\nWe agree that the argumentation in the original paper revision might feel superficial, but we commit to improve that in the final version, adding an extended section based on the explanation provided herein.\n\n\n[1] Anonymous et al. Long range graph benchmark. Under review in NeurIPS Dataset and Benchmarking track, 2022. 
[Note: the PDF is included in the revised Supplementary Material ZIP file.]\n\n[2] Kreuzer, D. et al. \"Rethinking graph transformers with spectral attention.\" NeurIPS 2021\n\n[3] Xu et al. How Powerful are Graph Neural Networks? ICLR 2019\n",
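For readers who want the gist of the new pseudocode in Appendix D.2 at a glance, below is a minimal PyTorch sketch of how a GPS-style layer might combine a local MPNN with global attention; the GINConv stand-in, the vanilla multi-head attention, and the fuse-by-sum are our illustrative simplifications, not the exact GraphGPS implementation.

```python
import torch.nn as nn
from torch_geometric.nn import GINConv  # any MPNN could stand in here

class GPSStyleLayer(nn.Module):
    """One layer: local message passing and global attention run in
    parallel on the same input; their outputs are summed and refined."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.local = GINConv(nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                           nn.Linear(dim, dim)))
        self.global_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 2 * dim), nn.ReLU(),
                                 nn.Linear(2 * dim, dim))

    def forward(self, x, edge_index):
        h_local = self.local(x, edge_index)                  # locality bias
        h_global, _ = self.global_attn(x[None], x[None], x[None])
        h = h_local + h_global.squeeze(0)                    # fuse channels
        return h + self.mlp(h)                               # residual MLP
```

The global attention module is the swappable slot here: replacing `nn.MultiheadAttention` with a linear-attention module changes the asymptotic cost without touching the rest of the layer.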
" We thank all reviewers for their time and reviews! We carefully considered the points they raised and we answer them in direct replies. Please note that we have uploaded a revised version of the paper. This revision has minimal changes to the main text, a few minor fixes/typos. However we extended the Appendix, which is now included in the revised main paper PDF (instead of a separate file in the supplementary ZIP):\n- Added a new section, Appendix E, with additional results on Long range graph benchmark [1].\n- Extended Appendix D by a GPS pseudocode algorithm, now in Appendix D.2.\n\n[1] Anonymous et al. Long range graph benchmark. Under review in NeurIPS Dataset and Benchmarking track, 2022. [Note: the PDF is included in the revised Supplementary Material ZIP file.]\n",
" In this work, the authors propose a recipe on how to build a general, powerful, scalable graph Transformer with linear complexity and state-of-the-art results on a diverse set of benchmarks. Strengths:\n\n1. The proposed model is scalable due to its linear comlexity.\n\n2. The work considers lots of message: positional and structural encodings with local message passing and global attention.\n\n3. The code is available and the performance of this work is good. \n\nWeaknesses:\n1. The overall algorithm can be well presented, like a presudo algorithm can be helpful to understand the implementation.\n\n2. Many recent studies show that the dense attention map is not necessary in the Transformer. In graph Transformer, Do we really need to allow nodes to attend to all other nodes in a graph (global attenion).\n\n3. The universal function on graph is not clear. A more detailed analysis why the proposed framework can achieve universal funtion approximator is required. \n* Many recent studies show that the dense attention map is not necessary in the Transformer. In graph Transformer, Do we really need to allow nodes to attend to all other nodes in a graph (global attenion).\n* The universal function on graph is not clear. A more detailed analysis why the proposed framework can achieve universal funtion approximator is required. Yes",
" The authors present a way how to efficiently use transformers on graph data. They report modular architecture, containing of graph positional encodings, structural encodings, and graph features, that further passed to an ensemble of arbitrary transformer block and arbitrary message passing neural network. The paper deals with an important and trending task. It's a nice and interesting read overall, the paper is technically sound, and well structured and describes the problem well. The authors provided performance testing on sufficient amount of datasets, and show the performance deviation with different random seeds. The authors provided a package built on top of graphGym which is a big plus.\n\nOn the other hand, the solid proof to use transformer block is missing. The authors claim that their solution has linear complexity, however from the table 2 is clear that linear attention block gives only marginal performance improvement, when for standard tasks (like LRA benchmark) linear transformers are on par, or even outperform vanilla transformer. How the model performs if we use another architecture instead of transformer block? Say simple MLP? The authors addressed the potential societal impact and limitations in the collusion section.",
" This paper provides a general, powerful, scalable (GPS) graph Transformer with linear complexity. The authors redefine positional encodings (PEs) and structural encodings (SEs) with local, global, and relative categories and try to incorporate PEs and SEs with local and global attention in a graph Transformer. With the proposed GPS layers, the authors show the competitive result on several datasets. The proposed definition of PE and SE is general, and the paper has a good presentation of writing logic. However, I have several problems about the methods and experiments. 1. The authors summarize the previous works about positional encodings and propose novel categories of structural encodings.\n2. The authors explain the rationality of definition and categories by the 1-Weisfeiler-Leman test and Circular Skip Link (CSL) graph.\n3. Based on PEs and SEs, the authors propose the GPS layer and present its characteristics with theoretical analysis. \n4. The paper has good writing and logical structure. refer to \"limitations\" 1. The PE and SE categories of local, global and relative are confusing. The local and global are opposite, but relative is another dimension, and it should be more suitable compared with the absolute PE.\n2. The description of relative SE is inconsistent with its example. The authors claimed that description allows two nodes to understand how much their structures differ, but examples are not enough to illustrate. Relative SE lacks support from related work, and this paper has no novelty on this point. Relative SE is more like a definition forcibly created to correspond to relative PE.\n3. The contribution of this paper is unclear. The first point should be merged with the third point, and the fifth point should not be regarded as the contribution of this paper. \n4. All methods in the paper are summary and induction, which lack novelty. There is no key innovation part of the paper. \n5. The result of SAN on ogbg-molhiv dataset misses the decimal point and zero in Table 4."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"pL0DPsUC8Vr",
"F1mTGUo1aiw",
"jd0mwxwDK9i",
"Y7t6COO-OLv",
"Y7t6COO-OLv",
"Cvvx24DrLMw",
"gxzpCWCPgjI",
"nips_2022_lMMaNf6oxKM",
"nips_2022_lMMaNf6oxKM",
"nips_2022_lMMaNf6oxKM",
"nips_2022_lMMaNf6oxKM"
] |
nips_2022_NjeEfP7e3KZ | Revisiting Heterophily For Graph Neural Networks | Graph Neural Networks (GNNs) extend basic Neural Networks (NNs) by using graph structures based on the relational inductive bias (homophily assumption). While GNNs have been commonly believed to outperform NNs in real-world tasks, recent work has identified a non-trivial set of datasets where their performance compared to NNs is not satisfactory. Heterophily has been considered the main cause of this empirical observation and numerous works have been put forward to address it. In this paper, we first revisit the widely used homophily metrics and point out that their consideration of only graph-label consistency is a shortcoming. Then, we study heterophily from the perspective of post-aggregation node similarity and define new homophily metrics, which are potentially advantageous compared to existing ones. Based on this investigation, we prove that some harmful cases of heterophily can be effectively addressed by local diversification operation. Then, we propose the Adaptive Channel Mixing (ACM), a framework to adaptively exploit aggregation, diversification and identity channels to extract richer localized information in each baseline GNN layer. ACM is more powerful than the commonly used uni-channel framework for node classification tasks on heterophilic graphs. When evaluated on 10 benchmark node classification tasks, ACM-augmented baselines consistently achieve significant performance gain, exceeding state-of-the-art GNNs on most tasks without incurring significant computational burden. | Accept | In this submission, the authors revisit the existing homophily metrics and point out the limitations of existing metrics in analyzing the performance of GNNs. Then the authors propose a novel homophily metric that specifies harmful heterophily, and further propose the Adaptive Channel Mixing (ACM) framework to handle the harmful heterophily.
Although there exist some concerns about the novelty of the idea (as pointed out by 9hm2 and Y2Du), overall, the proposed metric and framework are well-motivated, interesting, and effective (as pointed out by icHY, Y2Du, and S2KT), and the experiments are comprehensive and convincing (as pointed out by icHY, Y2Du, and vQE7). For these reasons, I recommend accepting this submission.
This submission can also be improved based on the reviewers' suggestions (such as writing and typesetting), and I hope the authors find the discussion useful and make this submission a better one.
| val | [
"MZkSJe3OQ0S",
"TR2PXzxoVAy",
"ZYDvj584PDZ",
"NENn38_ftnP",
"0UTujAXuPo",
"tPxNZ1zrlF",
"OD-BvmeR1Yg",
"eFqy0OjZfUmS",
"eoAPjWeJ3PK",
"0T3p2SJtJ6X",
"ohrJRr5JTs",
"EwRKKMIkumM",
"YrSSrsZH2vk",
"GggAs0KKnv2",
"vWFdN9AHFED",
"4XVcI25MMH",
"DbZnBG5YmPX",
"6zOWqKyMDIE"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer 9hm2,\n\nThanks for spending your time evaluating our paper. Since you have negative rating on our paper, we would like to know if you still have any question left to discuss. If your concerns are addressed, we respectfully request a raise of your rating. We will appreciate that.\n\nAuthors",
" \n\nThanks so much for your recognition of our proposed node-wise channel mixing mechanism. And we find that those general questions are quite interesting and valuable for the whole GNN community and we are glad to share our opinions here.\n\n\n### Q1.\nDoes the high-pass + low-pass + identity (full-pass?) channels together improve the expressiveness of GPRGNN/BernNet?\n\n### R1.\nWe do not have a rigorous theoretically proved answer at this moment, but we believe the answer is yes and to prove it, we need to find a different perspective beyond the expressiveness of graph filters. GPRGNN/BernNet and many other GNN models share the common belief that \"expressiveness of GNN == expressiveness of graph filters\". This might be true in many cases but not always because we have other parts besides the filters in GNN that can be improved. So at the first glance, the 3-channel architecture might look ordinary from the perspective of graph filter. But when we feed distinct information to each \"channel\" and use node-wise channel mixing mechanism to combine the filtered information, this simple architecture becomes powerful.\n\nFor the proof, one angle might be that ACM can make baseline GNNs fit more complex node label patterns (distribution), but we might need to take the parameter matrix and non-linearity into consideration.\n\n\n### Q2. \nWhat is the ideal information (or say local patterns) should be the sufficient information for determining the coefficients?\n\n### R2.\n\nIn our opinion, the coefficients $\\alpha\\_L,\\alpha\\_H,\\alpha\\_I$ do not only depend on the local patterns of each nodes, but also depend on the relation between its local patterns and its neighbors' local patterns and label distribution. It will be interesting to formulate it into an optimization problem and solve it directly in the future, but the way we have on hand is to optimize it end-to-end by gradient descent with backpropagation.\n\n### Q3.\nHave you compare the corresponding spectral filters learned against that learned by GPRGNN?\n\n### R3\n\nWe did not compare the spectral filters against GPRGNN, because our analysis and derivation of ACM are from node level instead of spectral domain. Also,\nACM-GCN is not learning a spectral filter because each channel has different input information rather than the same information. But we will consider visualizing the output layer on spectral domain and compare it with GPRGNN as you suggest. What we have for now is the t-SNE visualization of the output layer (Figure 4).\n\nIn addition, we would like to share an opinion that are against the mainstream opinion about the spectral analysis of the filters in GNNs: The spectral analysis is based on the eigensystem of graph Laplacian and those eigenvectors are functions with variant smoothness defined on the given graph structure, e.g. connected nodes share similar values. So when the given graph structure is \"bad\" or \"trivial\", the smoothness of those eigenvectors becomes trivial as well. The traditional spectral analysis is valid for \"good\" graph structures, e.g. Cora, Citeseer, PubMed, but for \"bad\" graphs, we might need to consider more, e.g. label distribution, post-aggregation node similarity. This is just one of our opinions and not conclusive. We share it here for your interest.\n\n\n\n\n#### We are very glad to discuss these general questions with you and we find them pretty interesting. If your concerns on our paper are addressed, we politely request a raise of your rating. We will appreciate it. 
If you still have any questions left, please let us know. Thanks.\n\nAuthors\n",
" Thanks for these detailed explanations!\n\nTo my knowledge, it has been a consensus that diverse local assortativity or say homophilic levels are the pain for node classification, no matter whether the GNN is a low/high-pass filter. Meanwhile, there are some recent works, e.g., GPRGNN and BernNet, that can fit arbitrary graph filters. Thus, I believe the proposed mixing mechanism does work in some circumstances, particularly a graph with diverse local assortativity levels. However, I still want to discuss about the following questions:\n\n- Does the high-pass + low-pass + identity (full-pass?) channels together improve the expressiveness of GPRGNN/BernNet?\n- What is the ideal information (or say local patterns) should be the sufficient information for determining the coefficients?\n- Have you compare the corresponding spectral filters learned against that learned by GPRGNN?",
" Thanks for you valuable comments. We will add the suggested references and keep improving the writing.\n\nAuthors",
" ### Q6.\nThe intuition behind the mixing matrix is unclear. Could authors elaborate more on the necessity of the mixing matrix W_Mix? Why the attention is insufficient to mix the channel outputs?\n\n### R6.\n\nWe want to grant our model the flexibility to learn more diverse weight values for each channel. We compare ACM with and without $W_\\text{mix}$ and here are the results. \n\nModels | With $W_\\text{mix}$ || Without $W_\\text{mix}$ ||\n ------------ | :-----------: | :-----------: | :-----------: | :-----------: |\nDatasets | ACM | ACMII | ACM | ACMII |\n Cornell | 94.75 $\\pm$ 3.8 | **95.9 $\\pm$ 1.83** | 93.61 $\\pm$ 2.37 | 90.49 $\\pm$ 2.72\nWisconsin | 95.75 $\\pm$ 2.03 | 96.62 $\\pm$ 2.44 | 95 $\\pm$ 2.5\t | **97.50 $\\pm$ 1.25**\nTexas | 94.92 $\\pm$ 2.88 | **95.08 $\\pm$ 2.07** | 94.92 $\\pm$ 2.79 | 94.92 $\\pm$ 2.79 \nFilm | 41.62 $\\pm$ 1.15 | **41.84 $\\pm$ 1.15** | 40.79 $\\pm$ 1.01\t| 40.86 $\\pm$ 1.48 \nChameleon | **69.04 $\\pm$ 1.74** | 68.38 $\\pm$ 1.36 | 68.16 $\\pm$ 1.79\t | 66.78 $\\pm$ 2.79\nSquirrel | **58.02 $\\pm$ 1.86** | 54.53 $\\pm$ 2.09 | 55.35 $\\pm$ 1.72 | 52.98 $\\pm$ 1.66\nCora | 88.62 $\\pm$ 1.22 | **89.00 $\\pm$ 0.72** | 88.41 $\\pm$ 1.63 | 88.72 $\\pm$ 1.5\nCiteseer | 81.68 $\\pm$ 0.97 | **81.79 $\\pm$ 0.95** | 81.65 $\\pm$ 1.48 | 81.72 $\\pm$ 1.58\nPubMed | 90.66 $\\pm$ 0.47 | **90.74 $\\pm$ 0.5** | 90.46 $\\pm$ 0.69\t | 90.39 $\\pm$ 1.33\n\nWe can see that ACM with $W_\\text{mix}$ shows superiority in most datasets, although it is not statistically significant on some of them.\n\nOne possible explanation of the advantage is that $W_\\text{mix}$ could help alleviate the dominance and bias to majority: Suppose in a dataset, most of the nodes need more information from LP channel than HP and identity channels, then $W_L, W_H, W_I$ tend to learn larger $\\alpha_L$ than $\\alpha_H$ and $\\alpha_I$. For the minority nodes that need more information from HP or identity channels, they are hard to get large $\\alpha_H$ or $\\alpha_I$ values because $W_L, W_H, W_I$ are biased to the majority. And $W_\\text{mix}$ can help us to learn more diverse alpha values when $W_L, W_H, W_I$ are biased.\n\nAttention with more complicated design can be found for the node-wise adaptive channel mixing mechanism, but we do not explore this direction deeper in this paper because investigating attention function is not the main contribution of our paper.\n\n[1] Evtushenko, Anna, and Jon Kleinberg. \"The paradox of second-order homophily in networks.\" Scientific Reports 11.1 (2021): 1-10.",
" ### Q3.\nThe proposed metric seems to be correlated with model performances. However, the correlation between the performances of real-world datasets and the proposed metric hasn’t been shown.\n\n### R3.\n\nThe results of proposed metrics on real-world datasets are reported in Table 8 in Appendix H and we also provide an explanation to the results. We elaborate the explanation in the following paragraphs for you.\n\nThere are three key factors that influence the performance of GNNs in real-world tasks: labels, features and graph structure. The (modified) aggregation homophily tries to investigate the consistency of graph structure and labels from post-aggregation node similarity perspective with given features.The advantage is verified through the synthetic experiments. \n\nIn real-world datasets, besides graph-label consistency, we need to consider feature-label consistency and aggregated-feature-label consistency as well to fully investigate the performance of NNs and GNNs. With aggregation similarity score of the features $S_\\text{agg}\\left(S(I,X)\\right)$ and aggregated features $S_\\text{agg}\\left(S(\\hat{A},X)\\right)$ listed in Table 8, our methods open up a new way on analyzing and comparing the performance of graph-agnostic models and graph-aware models in real-world tasks. Here are two concrete explanations to the results from Table 8.\n\nExplanation 1: It is observed that GCN (graph-aware model) underperforms MLP-2 (graph-agnostic model) on $\\textit{Cornell, Wisconsin, Texas, Film}$ and people commonly thinks that the bad graph structure is the reason for performance degradation. But based on the proposed aggregation homophily, the graph-label inconsistency is not the main cause of it. Furthermore, from Table 8 we can see that the $S_\\text{agg}\\left(S(\\hat{A},X)\\right)$ for the above 4 datasets are lower than their corresponding $S_\\text{agg}\\left(S(I,X)\\right)$, which implies that it is the aggregated-feature-label inconsistency that causes the performance degradation, i.e. the aggregation step actually decrease the quality of node features rather than making them more distinguishable.\n\nExplanation 2: For the rest 5 datasets $\\textit{Chameleon, Squirrel, Cora, Citeseer, PubMed}$, we all have $S_\\text{agg}\\left(S(\\hat{A},X)\\right)$ larger than $S_\\text{agg}\\left(S(I,X)\\right)$ except $\\textit{PubMed}$. We can see that the proposed metrics are much more instructive than the existing ones.\n\nWe also need to point out that (modified) aggregation similarity score, $S_\\text{agg}\\left(S(\\hat{A},X)\\right)$ and $S_\\text{agg}\\left(S(I,X)\\right)$ are not deciding values because they only capture linear relations and a low score does not mean the GNN models will definitely perform worse than NNs. In practice, we also need to consider the non-linear relation among labels, features and graph structure, which is lacking in the existing metrics and our metrics (this can explain the failure of our metrics on $\\textit{PubMed}$). But our proposed metrics can be a good starting point for future research.\n\n\n### Q4.\nTwo variants of the adaptive channel mixing mechanism are proposed. From a practical point of view, which model should be used?\n\n### R4.\nWe design those two different options to allow our model to have the flexibility to extract linear or non-linear information from features before feeding them into each channel. It depends on the nonlinearity structure in the features and the relation between feature and graph. 
Although in most applications we did not find big differences between these two options, we encourage users to try both on their own tasks. We do not have concrete guidance for now.\n\n\n### Q5.\nThe new metric can be extended to the multi-hop aggregation setting. Was there any further finding when the authors consider the multi-hop aggregation matrix?\n\n### R5.\n\nWe have not found interesting results in the multi-hop aggregation setting so far. \n\nAlthough it is easy to extend our metric to higher-order neighborhoods, it has been found that higher-order homophily is not just a simple extension of first-order homophily [1]. So we keep the discussion within the 1-hop neighborhood in this paper. In the future, we might need to find a new principle beyond post-aggregation node similarity for multi-hop aggregation.\n\n",
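Since several responses above refer to $S_\text{agg}\left(S(\hat{A},X)\right)$ and $S_\text{agg}\left(S(I,X)\right)$, here is a NumPy sketch of how such scores can be computed from the post-aggregation node similarity matrix $S(\hat{A},X)=\hat{A}X(\hat{A}X)^T$; excluding each node's self-similarity is our illustrative choice, and the exact definition is in the paper.

```python
import numpy as np

def aggregation_similarity_score(A_hat, X, labels):
    """Fraction of nodes whose mean post-aggregation similarity to
    same-class nodes is at least that to other-class nodes.
    Pass A_hat = np.eye(len(labels)) to get the feature-only score S_agg(S(I, X))."""
    H = A_hat @ X
    S = H @ H.T                       # post-aggregation node similarity matrix
    n = len(labels)
    hits = 0
    for v in range(n):
        same = labels == labels[v]
        same[v] = False               # exclude the node itself
        diff = labels != labels[v]
        if same.any() and diff.any() and S[v, same].mean() >= S[v, diff].mean():
            hits += 1
    return hits / n
```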
" ### Q1.\nAlthough the metric shows a strong correlation with many existing GNN models, the analysis of post-aggregation is based on SGC and cannot be generalized to the other models directly.\n\n### R1.\n\nJust like what we mentioned in the limitation part (section 7), all the existing homophily metrics and our metrics only consider the linear feature-independent relation between graph structure and labels. Although the proposed post-aggregation similarity principle shows advantages over homophily principle, people can consider designing a non-linear feature-dependent metrics in the future which can be generalized to other models with non-linear activation functions. We believe our paper can be a good starting point for this future research.\n\n\n### Q2.\nRepresentation of some parts can further be improved. For example, the connection between the diversification distinguishability and the adaptive channel mixing framework seems vague. The definition of diversification distinguishability and the following theorem seem not necessary to introduce the necessity of the proposed framework. Also, the connection between the metric part and the model part seems not very clear although both parts are inspired by the post-aggregation similarity.\n\n### R2.\n\n#### (1) \"the connection between the diversification distinguishability and the adaptive channel mixing framework seems vague\"\n\nDiversification distinguishability is proposed to show the effectiveness of high-pass filter, which leads us to the 3-channel GNN architecture. The node-wise adaptive channel mixing mechanism is based on the observation from the example in Figure 3, not diversification distinguishability. Diversification distinguishability and adaptive channel mixing are two different contributions in the ACM framework.\n\n#### (2) \"The definition of diversification distinguishability and the following theorem seem not necessary to introduce the necessity of the proposed framework\"\n\nAs mentioned in our paper, Theorem 1, which is based on the definition of diversification distinguishability, theoretically shows the effectiveness of high-pass filter on addressing the heterophily problem, which leads us to the 3-channel GNN framework.\n\n#### (3) \"the connection between the metric part and the model part seems not very clear\"\n\nAs you mentioned in your question, the key part to connect the investigation section (heterophily; new metrics) and the methodology section (new model) is the post-aggregation node similarity matrix. The metrics are the by-products of the similarity matrix.\n\nLike the existing homophily metrics, the purpose of designing aggregation homophily is just to measure whether the aggregation (message passing) step will help $\\textbf{uni-channel}$ graph-aware model outperform graph-agnostic model. Our proposed 3-channel architecture is beyond the uni-channel framework, and thus its performance cannot be directly measured by the proposed metric. But based on the post-aggregation node similarity matrix, we can show and prove the effectiveness of the high-pass filter on addressing heterophily, which is one of the main reasons that we design the 3-channel architecture.\n\n\n",
" ### Q2.\nThe homophily metric on real-world datasets are indistinguishable. In Table 8, homophily metrics on most datasets are above 0.8, and seem not correlated with GNNs’ performances. (e.g. chameleon, film, squirrel) I wonder if the proposed metric is still instructive in real-world datasets. \n\n### R2.\nThanks for going through Table 8 carefully. The proposed metrics are still instructive in real-world datasets and the explanations are given in Appendix H. For your interest, we will collect them here:\n\nFirstly, we need to clarify that, for each curve in the synthetic experiments, the node features are fixed and we only generate graphs with different homophily levels, i.e. only change graph structures. But in real-world tasks, different datasets have different features and aggregated features. Thus, to get more instructive information for different datasets and compare them, we need to consider more metrics, e.g. feature-label consistency and aggregated-feature-label consistency. With the similarity score of the features $S_\\text{agg}\\left(S(I,X)\\right)$ and aggregated features $S_\\text{agg}\\left(S(\\hat{A},X)\\right)$ listed in Table 8, our methods open up a new way of analyzing and comparing the performance of graph-agnostic models and graph-aware models in real-world tasks. Here are two concrete explanations to the results displayed in Table 8.\n\n- Explanation 1: It is observed that GCN (graph-aware model) underperforms MLP-2 (graph-agnostic model) on $\\textit{Cornell, Wisconsin, Texas, Film}$ and people commonly think that the bad graph structure (low $H_\\text{edge},H_\\text{node},H_\\text{class}$ values in Table 4) is the reason for performance degradation. But based on the high aggregation homophily values, the graph-label inconsistency is not the main cause of it. Furthermore, from Table 8 we can see that the $S_\\text{agg}\\left(S(\\hat{A},X)\\right)$ for the above 4 datasets are lower than their corresponding $S_\\text{agg}\\left(S(I,X)\\right)$, which implies that it is the aggregated-feature-label inconsistency that causes the performance degradation, i.e. the aggregation step actually decrease the quality of node features rather than making them more distinguishable.\n\n- Explanation 2: For the rest 5 datasets $\\textit{Chameleon, Squirrel, Cora, Citeseer, PubMed}$, we all have $S_\\text{agg}\\left(S(\\hat{A},X)\\right)$ larger than $S_\\text{agg}\\left(S(I,X)\\right)$ except $\\textit{PubMed}$, which means the aggregated features have higher quality than raw features. We can see that the proposed metrics are much more instructive than the existing ones.\n\nWe also need to point out that (modified) aggregation similarity score, $S_\\text{agg}\\left(S(\\hat{A},X)\\right)$ and $S_\\text{agg}\\left(S(I,X)\\right)$ are not deciding values because they only capture linear relations and a low score does not mean that the GNN models will definitely perform worse than NNs. As we mentioned in limitation part, in practice, we also need to consider the non-linear relation among labels, features and graph structure, which is lacking in the existing metrics and our metrics (this might explain the failure of our metrics on $\\textit{PubMed}$). But our proposed metrics can be a good starting point for future research.\n\n### Q3.\nWhy are there some bumps in figure 2(c), 2(d) when h(G)=1.0 and h(g)=0.0 ? 
\n\n### R3.\nThe bumps in figure 2(c) are because of numerical perturbation and lack of ability to capture correct relation between graph structure and GNN performance.\n\nThe bumps in figure 2(d) are because of small numerical perturbation.\n\n### Q4.\nThere seems to be too many overfull lines. \n\n### R4.\nWe will make changes in the revised version.\n\n### Q5.\nIn Fig.2, please explain the performance drop in the interval [0.0, 0.2] and why (d) does not include this interval.\n\n### R5.\n1). For Figure 2(a,b), just as what we show in the example in Figure 1, when the homophily value is extremely low (near 0), the node features are actually distinguishable after aggregation step. When the homophily value starts to get higher (0 --->0.2), node features from different classes will actually be mixed and become indistinguishable. Figure 2(a,b) verify this phenomenon and in Appendix B, we theoretically prove this and calculate the homophily value to reach the lowest point for regular graphs.\n\n2). This is because, for all the generated graphs, their modified aggregation homophily values are larger than 0.2. Here is a simplified example to help you understand how we plot Figure 2.\n\nAs the graph generation process mentioned in our paper, suppose we generate 3 graphs with $H_\\text{edge}=0.1,0.5,0.9$, the test accuracy of GCN on these 3 synthetic graphs are $0.8,0.5,0.9$. For those graphs, we calculate their $H_\\text{agg}^M$ and suppose we get $H_\\text{agg}^M=0.7,0.4,0.8$. Then we will draw the performance of GCN under $H_\\text{agg}^M$ with ascend x-axis order $[0.4,0.7,0.8]$ and the corresponding reordered y-axis $[0.5,0.8,0.9]$. Other figures are drawn in the same way.",
" ### Q1. \nThe proposed diversification operation is not novel since many previous works have utilized high-frequency components directly or indirectly. The authors have emphasized the difference between ACM and GPRGNN/FAGCN in node-wise channel mixing, but GPRGNN also contains node-wise feature transformation before the propagation step, which could cause γ_k to be different for each node. And the attention mechanism in FAGCN could be seen as a node-wise mixing as well. While the superiority of ACM over other mixing mechanisms is revealed in the experiment, I would be glad to see a more intuitive explanation or example of why ACM is better. \n\n### R1.\nWe have discussed the differences between our proposed model and GPRGNN and FAGCN in Appendix J. We will elaborate them for you here.\n1) Difference with GPRGNN: \n\n- GPRGNN does not feed distinct $\\textbf{node-wise feature transformation}$ to different \"multi-scale channels\"\n\nWe first rewrite GPRGNN as \n$$\\mathbf{Z} = \\sum\\limits\\_{k=0}^{K} \\gamma\\_{k} \\mathbf{H}^{(k)} = \\sum\\limits\\_{k=0}^{K} \\gamma\\_{k} I \\mathbf{H}^{(k)} = \\sum\\limits\\_{k=0}^{K} diag(\\gamma\\_{k}, \\gamma\\_{k},\\dots,\\gamma\\_{k}) \\mathbf{H}^{(k)}, \\text{ where } \\mathbf{H}^{(k)} = \\hat{A}\\_{\\text{sym}}\\mathbf{H}^{(k-1)}, \\mathbf{H}^{(0)}\\_{i:} = f\\_\\theta(X\\_{i:}).$$ \nFrom the above equation we can see that $\\mathbf{Z} = \\sum\\limits\\_{k=0}^{K} \\gamma\\_{k} \\hat{A}\\_{\\text{sym}}^k f\\_\\theta(X\\_{i:})$, i.e. the $\\textbf{node-wise feature transformation}$ in GPRGNN is only learned by the same $\\theta$ for all the \"multi-scale channels\". But in the ACM framework, different channels extract distinct information with different parameters separately.\n\n- GPRGNN does not have node-wise mixing mechanism.\n\nThere is no node-wise mixing in GPRGNN. The mixing mechanism in GPRGNN is $\\mathbf{Z} = \\sum\\limits\\_{k=0}^{K} diag(\\gamma\\_{k}, \\gamma\\_{k},\\dots,\\gamma\\_{k}) \\mathbf{H}^{(k)}$, i.e. for each \"multi-scale channel $k$\", all nodes share the same mixing parameter $\\gamma_{k}$. But in the ACM framework, the node-wise channel mixing can be written as $\\mathbf{Z} = \\sum\\limits_{k=0}^{K} diag(\\gamma_{k}^1,\\gamma_{k}^2,\\dots,\\gamma_{k}^N) \\mathbf{H}^{(k)}$, where $K$ is the number of channels, $N$ is the number of nodes and $\\gamma_{k}^i, i=1,\\dots,N$ are the mixing weights that are learned by node $i$ to mix channel $k$. ACM and ACMII allow GNNs to learn more diverse mixing parameters in diagonal than GPRGNN and thus, have stronger expressive power than GPRGNN.\n\n2) FAGCN: This question is similar to the Q1 from Reviewer 9hm2, we will elaborate the answer here for you.\n\n- The targets of node-wise operations in ACM (channel mixing) and FAGCN (negative message passing) are different.\n\nInstead of using a fixed low-pass filter $\\hat{A}$, FAGCN tries to learn a more powerful aggregator $\\hat{A}'$ based on $\\hat{A}$ by allowing negative message passing. The node-wise operation in FAGCN is similar to GAT [3] which is trying to modify the $\\textbf{node-wise filtering (message passing) process}$, i.e. for each node $i$, it assigns different weights $\\alpha_{ij} \\in [-1,1]$ to different neighborhood nodes (equation 7 in FAGCN paper). The goal of this node-wise operation in FAGCN is $\\textbf{to learn a new filter during the filtering process node-wisely}$. 
But in ACM, the node-wise operation is to $\textbf{mix the filtered information}$ from each channel, which is processed by a different fixed filter. The two node-wise operations actually target different things.\n\n- FAGCN does not learn distinct information from different \"channels\". FAGCN uses simple addition to mix information instead of a node-wise channel mixing mechanism\n\nIn addition, the learned filter $\hat{A}'$ can be decomposed as follows: $\hat{A}'=\hat{A}_1' + (-\hat{A}_2')$, where $\hat{A}_1'$ and $-\hat{A}_2'$ represent positive and negative edge (propagation) information, respectively. But FAGCN does not feed distinct information to $\hat{A}_1'$ and $-\hat{A}_2'$. Moreover, the aggregated information $\hat{A}_1' X$ and the \"diversified\" information $(-\hat{A}_2') X$ are simply added together instead of being combined with any node-wise channel mixing. In ACM, we learn distinct information separately in each channel with different parameters and combine the channels adaptively and node-wise instead of just adding them together. In Section 6.1, the ablation study empirically shows that node-wise adaptive channel mixing is better than simple addition.\n\nAlso, as we mentioned in the contribution part, we do not try to facilitate learning filters with high expressive power, e.g. FAGCN, GPRGNN, BernNet. The goal of ACM is that, given a filter with certain expressive power, we can extract richer information from additional channels in a certain way to address heterophily. This makes ACM more flexible and easier to implement.\n\nFrom the above arguments we can see that ACM is different from GPRGNN and FAGCN.\n",
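The difference between the two mixing mechanisms is easiest to see in code; below is a shape-level sketch in which random weights stand in for the learned ones.

```python
import torch

N, d, K = 5, 4, 3                  # nodes, feature dim, channels
H = torch.randn(K, N, d)           # filtered output of each channel

# GPRGNN-style mixing: one scalar gamma_k shared by every node.
gamma = torch.randn(K)
Z_shared = sum(gamma[k] * H[k] for k in range(K))

# ACM-style mixing: each node has its own weight per channel,
# i.e. diag(gamma_k^1, ..., gamma_k^N) instead of gamma_k * I.
gamma_nodewise = torch.softmax(torch.randn(N, K), dim=-1)
Z_nodewise = sum(gamma_nodewise[:, k:k + 1] * H[k] for k in range(K))
```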
" ### Q3.\nThe novelty of the idea is not enough. \n\n### R3.\n\nWe have summarized the main contribution of this paper in the contribution part and we will elaborate it here for you.\n\n1. To our knowledge, we are the first to analyze heterophily from the post-aggregation node similarity perspective. Based on the proposed similarity matrix, we derive novel homophily metric which is verified to be superior to the existing metrics. The effectiveness of high-pass filter is also proved based on the similarity matrix, which is novel as well.\n\n2. The proposed ACM framework is highly different from adaptive filterbank and existing GNNs for heterophily: 1) the traditional adaptive filterbank uses a scalar weight for each filter and this weight is shared by all nodes. In contrast, in our method different nodes can learn different weights to utilizes the $\\textbf{filtered information from different channels}$ adaptively to account for heterophily; 2) Unlike existing methods that leverage the high-order filters and global property of high-frequency signals, ACM successfully addresses heterophily by $\\textbf{considering only the nodewise local information adaptively}$.\n\n3. Unlike existing methods that try to facilitate learning filters with high expressive power, e.g. FAGCN, GPRGNN and BernNet etc., the goal of ACM is that, when given a filter with certain expressive power, we can extract richer information from additional channels in a certain way to address heterophily. This makes ACM more flexible and easier to implement.\n\n### Q4.\n\nThe improvement in Table 4 does not seem statistically significant because of high variance. \n\n### R4.\n\n\nCompared to the SOTA models, the variance of our model is not large. This is consistent with some recently published papers, e.g. [1,2]. To reduce the possibility that the high variance on certain dataset would affect the model evaluation and comparison, we calculate the average rank of the performance over all datasets. From the average rank we can see that the proposed model outperforms the SOTA model.\n\n\n\n### Q5.\nThere is a problem with the typesetting of the paper.\n\n### R5.\n\nWe will modify it in the revised version.\n\n\n\n\n[1] He, Mingguo, Zhewei Wei, and Hongteng Xu. \"Bernnet: Learning arbitrary graph spectral filters via bernstein approximation.\" Advances in Neural Information Processing Systems 34 (2021): 14239-14251.\n\n[2] Li, Xiang, et al. \"Finding Global Homophily in Graph Neural Networks When Meeting Heterophily.\" arXiv preprint arXiv:2205.07308 (2022).\n\n[3] Veličković, Petar, et al. \"Graph attention networks.\" arXiv preprint arXiv:1710.10903 (2017).",
" ### Q1. \nContribution is not convincing. They argue that the traditional adaptive filterbank uses a scalar weight shared by all nodes, and their proposed method learns different weights for different nodes. However, in my opinion, FAGCN can do the same thing. \n\n### R1.\n\nWe have discussed the differences between ACM and FAGCN in Appendix J. We would like to elaborate them here for you:\n\n- The targets of node-wise operations in ACM (channel mixing) and FAGCN (negative message passing) are different.\n\nInstead of using a fixed low-pass filter $\\hat{A}$, FAGCN tries to learn an aggregator $\\hat{A}'$ based on $\\hat{A}$ by allowing negative message passing. The node-wise operation in FAGCN is similar to GAT [3] which tries to modify the $\\textbf{node-wise filtering (message passing) process}$, i.e. for each node $i$, it assigns different weights $\\alpha_{ij} \\in [-1,1]$ to different neighborhood nodes (equation 7 in FAGCN paper). The goal of this node-wise operation in FAGCN is $\\textbf{to learn a new filter during the filtering process node-wisely}$. But in ACM, the node-wise operation is to mix the $\\textbf{filtered information}$ from each channel which is processed by different fixed filters. The targets of two the node-wise operations are actually different things.\n\n- FAGCN does not learn distinct information for different \"channels\". FAGCN uses simple addition to mix information instead of node-wise channel mixing mechanism\n\nThe learned filter $\\hat{A}'$ can be decomposed as follows: $\\hat{A}'=\\hat{A}_1' + (-\\hat{A}_2')$, where $\\hat{A}_1'$ and $-\\hat{A}_2'$ represent positive and negative edge (propagation) information respectively. But FAGCN does not feed distinct information to $\\hat{A}_1'$ and $-\\hat{A}_2'$. Moreover, the aggregated $\\hat{A}_1' X$ and \"diversified\" information $(-\\hat{A}_2') X$ are simply added together instead of using any node-wise mixing mechanism. In ACM, we learn distinct information separately in each channel with different parameters and add them adaptively and node-wisely instead of just adding them together, because different nodes need information from different channels. In section 6.1, the ablation study empirically shows that node-wise adaptive channel mixing is better than simple addition.\n\nAlso, as mentioned in the contribution highlights, we are NOT facilitating learning filters with high expressive power, e.g. FAGCN, GPRGNN, and BernNet. Given a filter with certain expressive power, ACM can extract richer information from additional channels in a certain way to address the heterophily issue.\n\nFrom the above argument we can see that ACM and FAGCN are different.\n\n### Q2.\nThere is a gap between the proposed metric and method. Based on post-aggregation node similarity, they propose an aggregation similarity metric. However, the final 3-channel filterbank has nothing to do with the above metric.\n\n### R2.\n\nLike the existing homophily metrics, the purpose of designing aggregation homophily is just to measure whether the aggregation (message passing) step would help $\\textbf{uni-channel}$ graph-aware model outperform graph-agnostic model. Our proposed 3-channel architecture is beyond the uni-channel framework, and thus its performance cannot be directly measured by the proposed metric.\n\nThe key part to connect the investigation part (heterophily; new metrics) and the methodology part (new model) is the post-aggregation node similarity matrix, not the metrics. 
The metrics are by-products of the similarity matrix.\n\nIn the methodology part, based on the post-aggregation node similarity matrix, we show the effectiveness of the high-pass filter in addressing heterophily, which is one of the main reasons we designed the 3-channel architecture.\n\n",
" ### Q3.\nThe theoretical analysis is based on the random walk Laplacian. It would be better if the authors can extend it to the widely used symmetric Laplacian.\n\n### R3.\n\nThe definitions of the similarity matrix, (modified) aggregation similarity score and diversification distinguishability value can be extended to symmetric normalized Laplacian or other aggregation operations. But we cannot extend Theorem 1, because we need a condition that the row sum of $\\hat{A}$ is not greater than 1 in the proof (see Appendix E). This condition is guaranteed for random walk normalized Laplacian but not for symmetric normalized Laplacian.\n\nIn practice, the answer is yes. We evaluate our models with symmetric filters and compare them with random walk filters. From the following table we can see that, there is no big differences between these two filters.\n\nModels | *Random Walk* || *Symmetric* ||\n ------------ | :-----------: | :-----------: | :-----------: | :-----------: |\nDatasets | ACM | ACMII | ACM | ACMII |\n Cornell | 94.75 $\\pm$ 3.8 | **95.9 $\\pm$ 1.83** | 94.92 $\\pm$ 2.48 | 94.1 $\\pm$ 2.56 \nWisconsin | 95.75 $\\pm$ 2.03 | **96.62 $\\pm$ 2.44** | 95.63 $\\pm$ 2.81 | 96.25 $\\pm$ 2.5\nTexas | 94.92 $\\pm$ 2.88 | **95.08 $\\pm$ 2.07** | 94.75 $\\pm$ 2.01 | 94.59 $\\pm$ 2.65\nFilm | 41.62 $\\pm$ 1.15 | **41.84 $\\pm$ 1.15** | 41.58 $\\pm$ 1.3 | 41.65 $\\pm$ 0.6 \nChameleon | **69.04 $\\pm$ 1.74** | 68.38 $\\pm$ 1.36 | 67.9 $\\pm$ 2.76 | 68.03 $\\pm$ 1.68\nSquirrel | **58.02 $\\pm$ 1.86** | 54.53 $\\pm$ 2.09 | 54.18 $\\pm$ 1.35 | 53.68 $\\pm$ 1.74\nCora | 88.62 $\\pm$ 1.22 | **89.00 $\\pm$ 0.72** | 88.65 $\\pm$ 1.26 | 88.19 $\\pm$ 1.38 \nCiteseer | 81.68 $\\pm$ 0.97 | 81.79 $\\pm$ 0.95 | **81.84 $\\pm$ 1.15** | 81.81 $\\pm$ 0.86 \nPubMed | 90.66 $\\pm$ 0.47 | **90.74 $\\pm$ 0.5** | 90.59 $\\pm$ 0.81 | 90.54 $\\pm$ 0.59 ",
" Thanks for your constructive comments and suggestions. Here are our answers to your questions.\n\n### Q1. \nWhat's the correlation between the high-pass filter and diversification? It would be better to define diversification and describe how a high-pass filter can extract such information. A synthetic example is not clear enough.\n\n### R1.\n\nHigh-pass filter and diversification are essentially the same operation described in matrix and node form. To be more specific\n$$\\text{HP filter: } (I-\\hat{A})X, \\ \\ \\ \\text{Diversification on node $i$: } [(I-\\hat{A})X]\\_{i,:} = X\\_{i:} - \\sum\\limits\\_{k\\in \\mathcal{N}(i)} \\frac{1}{d\\_i} X\\_{k:}$$\n\nAs we mentioned in our paper, diversification operation extract neighborhood dissimilarity by a subtraction of node features $X\\_{i:}$ and aggregated features from neighbors $\\sum\\limits\\_{k\\in \\mathcal{N}(i)} \\frac{1}{d\\_i} X\\_{k:}$.\n\n\n### Q2.\nSome experiments are missing: (1) The comparison between the existing homophily metrics and the proposed one on real-world datasets. (2) The case study of $\\alpha$ score on different nodes. These analysis could better help readers understand the effect of architecture and hyper-parameters of the architecture.\n\n### R2.\n\n(1)\nThe results of existing metrics are reported in Table 4 in Appendix A. The results of proposed metrics on real-world datasets are reported in Table 8 in Appendix H and we also provide an explanation to those results and some comparisons with the existing metrics. We elaborate the explanations about the results in Table 4 and 8 in the following paragraphs for you.\n\nThere are three key factors that influence the performance of GNNs in real-world tasks: labels, features and graph structure. The existing metrics and (modified) aggregation homophily tries to investigate the consistency of graph structure and labels from different principles with $\\textbf{given features}$. And the post-aggregation node similarity principle is verified to be advantageous over homophily principle through the synthetic experiments. \n\nIn real-world tasks, different datasets have features with variant quality. Thus, besides graph-label consistency, we need to consider feature-label consistency and aggregated-feature-label consistency as well to fully investigate the performance of NNs and GNNs. With aggregation similarity score of the features $S_\\text{agg}\\left(S(I,X)\\right)$ and aggregated features $S_\\text{agg}\\left(S(\\hat{A},X)\\right)$ listed in Table 8, our methods open up a new way of analyzing and comparing the performance of graph-agnostic models and graph-aware models in real-world tasks. Here are two cases of result comparison.\n\nCase 1: It is observed that GCN (graph-aware model) underperforms MLP-2 (graph-agnostic model) on $\\textit{Cornell, Wisconsin, Texas, Film}$ and people commonly thinks that the bad graph structure (low $H_\\text{edge},H_\\text{node},H_\\text{class}$ values in Table 4) is the reason for performance degradation. But based on the proposed aggregation homophily, the graph-label inconsistency is not the main cause of it. Furthermore, from Table 8 we can see that the $S_\\text{agg}\\left(S(\\hat{A},X)\\right)$ for the above 4 datasets are lower than their corresponding $S_\\text{agg}\\left(S(I,X)\\right)$, which implies that it is the aggregated-feature-label inconsistency that causes the performance degradation, i.e. 
the aggregation step actually decreases the quality of node features rather than making them more distinguishable.\n\nCase 2: For the remaining 5 datasets $\textit{Chameleon, Squirrel, Cora, Citeseer, PubMed}$, $S_\text{agg}\left(S(\hat{A},X)\right)$ is larger than $S_\text{agg}\left(S(I,X)\right)$ for all except $\textit{PubMed}$, which has two close values. But the existing metrics give extremely low values to $\textit{Chameleon, Squirrel}$, which would imply that graph-aware models should underperform graph-agnostic models. This is far from the observations in the experiments.\n\n\n(2)\nIn Figure 4, we have plotted the learned $\alpha$ values in the output layer of ACM-GCN trained on Squirrel. The $\alpha$ values show that the additional channels play a nontrivial role for most of the nodes. We have put the figures for the other datasets in Appendix I2 in the revised version.\n\n\n",
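To make the diversification argument above concrete, here is a toy harmful-heterophily graph of our own construction (in the spirit of Figure 1, not taken from the paper): aggregation collapses both classes to the same value, while the high-pass channel retains class-dependent information.

```python
import numpy as np

# Complete bipartite graph between the two classes: labels [0, 0, 1, 1].
A = np.array([[0, 0, 1, 1],
              [0, 0, 1, 1],
              [1, 1, 0, 0],
              [1, 1, 0, 0]], dtype=float)
X = np.array([[0.], [4.], [1.], [3.]])

A_hat = A / A.sum(axis=1, keepdims=True)  # random-walk normalization
print((A_hat @ X).ravel())      # LP channel: [2. 2. 2. 2.] -> classes collapse
print((X - A_hat @ X).ravel())  # HP channel: [-2. 2. -1. 1.] -> magnitudes differ by class
```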
" This paper first points out that existing homophily metrics cannot precisely reflect the performance of GNN in some cases, and develops a new one based on the similarity comparison between the local neighbors in the same and different class. Next, based on the proposed metric, the paper shows the case that a high-pass filter can address the harmful heterophily, and further propose a node-wise mixing filter that combines low-pass and high-pass filters. Pros\n1. The motivation for the proposed homophily metric is clear and theoretically guaranteed, although the analysis of one-layer SGC is relatively simplistic. The observations from Table 8 show that the metric can accurately determine whether the additional graph information is harmful.\n2. Node-wise aggregation is reasonable since the local structure differs between different nodes.\n3. The experimental results on many different GNNs shown in the Appendix are complete and convincing.\n\nCons\n What's the correlation between the high-pass filter and diversification? It would be better to define diversification and describe how a high-pass filter can extract such information. A synthetic example is not clear enough. 1. Some experiments are missing: (1) The comparison between the existing homophily metrics and the proposed one on real-world datasets. (2) The case study of \\alpha score on different nodes. These analysis could better help readers understand the effect of architecture and hyper-parameters of the architecture.\n\n2. The theoretical analysis is based on the random walk Laplacian. It would be better if the authors can extend it to the widely used symmetric Laplacian.",
" This paper presents an analysis of existing homophily metrics, and proposes a new metric more informative for the performance. Based on the analysis, they design a 3-way filterbank, enabling adaptive filtering (high-pass, low-pass or identity) at different nodes. Experiments validate the effectiveness of the proposed method. Strength:\n1.\tThis paper is the first to analyze heterophily from post-aggregation node similarity.\n2.\tThe proposed filterbank is plug-and-play for backbone GNNs, which extracts richer information from additional channels.\nWeakness:\n1.\tContribution is not convincing. They argue that the traditional adaptive filterbank uses a scalar weight shared by all nodes, and their proposed method learns different weights for different nodes. However, in my opinion, FAGCN can do the same thing.\n2.\tThere is a gap between the proposed metric and method. Based on post-aggregation node similarity, they propose an aggregation similarity metric. However, the final 3-channel filterbank has nothing to do with the above metric.\n3.\tThe novelty of the idea is not enough. In addition to the limitations pointed out above, both new metric and method are relatively straightforward.\n4.\tThe improvement in Table 4 does not seem statistically significant because of high variance.\n5.\tThere is a problem with the typesetting of the paper.\n 1. More comparisons with FAGCN.\n2. There is a gap between the proposed metric and method.\n3. The improvement in Table 4 does not seem statistically significant because of high variance.\n In addition to the limitations mentioned in the paper, the intrinsic relationship between the proposed metric and method should be taken into consideration. No potential negative societal impact.",
" This paper investigates the relationship between heterophily and the performance of current GNNs. First, the paper proposes a novel homophily metric that specifics harmful heterophily. The metric is shown to be more correlated with the GNNs’ performances than traditional metrics. To handle with harmful heterophily, based on the metric, the paper proposes Adaptive Channel Mixing (ACM) Framework. Extensive experiments are conducted on real-world datasets that verify the superiority of ACM framework. 1.\tThe proposed homophily metric is interesting and makes sense. The given example in Fig. 1 and comparison with other metrics in Fig.2 nicely illustrate the advantages of this new metric. \n2.\tThe experiments including detailed ablation study are clear and comprehensive. \n refer to \"limitations\" 1.\tThe proposed diversification operation is not novel since many previous works have utilized high- frequency components directly or indirectly. The authors have emphasized the difference between ACM and GPRGNN/FAGCN in node-wise channel mixing, but GPRGNN also contains node- wise feature transformation before the propagation step, which could cause γ_k to be different for each node. And the attention mechanism in FAGCN could be seen as a node-wise mixing as well. While the superiority of ACM over other mixing mechanisms is revealed in the experiment, I would be glad to see a more intuitive explanation or example of why ACM is better. \n2.\tThe homophily metric on real-world datasets are indistinguishable. In table 8, homophily metrics on most datasets are above 0.8, and seem not correlated with GNNs’ performances. (e.g. chameleon, film, squirrel) I wonder if the proposed metric is still instructive in real-world datasets. \n3.\tWhy are there some bumps in figure 2(c), 2(d) when h(G)=1.0 and h(g)=0.0 ? \n4.\tThere seems to be too many overfull lines. \n5.\tIn Fig.2, please explain the performance drop in the interval [0.0, 0.2] and why (d) does not include this interval.\n",
" This paper first addresses the limitation of the previously proposed metrics in analyzing the performance of GNNs on heterophilic graph datasets. To solve the limitation, a new metric based on the post-aggregation node similarity is proposed. The newly proposed metric better reflects the performances of GNNs on node classification tasks. To further improve the GNNs, the authors propose an adaptive channel mixing mechanism that uses both high and low-frequency graph signals. The adaptive channel mixing mechanism is employed in existing GNN models and shows improved performances on various node classification datasets. Strengths\n\n- The paper addresses the limitation of previously suggested metrics well and proposes a new metric that can better reflect the behaviors of GNN models.\n- The adaptive channel mixing mechanism is intuitive and can be applied to many existing GNN models without adding too much computational complexity.\n- Extensive experimental results are shown on various datasets to show the performance of the proposed framework.\n\nWeaknesses\n\n- Although the metric shows a strong correlation with many existing GNN models, the analysis of post-aggregation is based on SGC and cannot be generalized to the other models directly.\n- Representation of some parts can further be improved. For example, the connection between the diversification distinguishability and the adaptive channel mixing framework seems vague. The definition of diversification distinguishability and the following theorem seem not necessary to introduce the necessity of the proposed framework. Also, the connection between the metric part and the model part seems not very clear although both parts are inspired by the post-aggregation similarity.\n- The proposed metric seems to be correlated with model performances. However, the correlation between the performances of real-world datasets and the proposed metric hasn’t been shown. - Two variants of the adaptive channel mixing mechanism are proposed. From a practical point of view, which model should be used?\n- The new metric can be extended to the multi-hop aggregation setting. Was there any further finding when the authors consider the multi-hop aggregation matrix?\n- The intuition behind the mixing matrix is unclear. Could authors elaborate more on the necessity of the mixing matrix W_Mix? Why the attention is insufficient to mix the channel outputs? Although the limitation of the proposed approach is addressed in the appendix, the limitations are shown based on a curated example and fail to show general conditions where the model fails.\n",
" This paper considers heterophily in GNN for node classification. It has two major contributions to the community. First, it pointed out that mainstream homophily measures (i.e. edge, node, class) does not align with GCN/SGC classification accuracy, since those measures does not distinguish harmless/harmful heterophilty. It proposes a new mesaure that inspired by SGC gradient updates, which shows good alignment between homophily and classification accuracy on synthetic datasets. Second, this paper propose ACM which uses low pass high pass and identity channels, combine together with softmax operation, ACM is based on the intuition that high pass filter would help distinguish nodes with harmful heterophily. ACM is empirically verified to be good. Strengths\n - the new homophily is novel and practical connection with SGC gradient is very intuitive\n - this is a very notable contribution to the community, although homophily and SGC performance has been discussed in the past by many, it is a known issue that the edge homophily is not fully correlated with SGC performance. the analysis and new hopophily measure solved this problem\n - ACM is novel effective and intuitive\n\nWeakness\n - writing could improve N/A Suggestions\n - in abstract, make clear this paper is about node classification, (real-world tasks -> nodeclassification)\n - line 137, i assume you meant to write [-1, 1]\n - line 142, I am not sure what you mean here\n - line 554, i assume you meant to write 1_{N}^{T}\n - my experience is that graphsage typically works well for heterophily graphs, adding that as a baseline to acm would be useful\n - add citations\n - (gnn heterophily) Residual correlation in graph neural network regression\n - (gnn heterophily) Beyond Homophily in Graph Neural Networks\n - (gnn heterophily) New benchmarks for learning on non-homophilous graphs\n "
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
3,
4,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
3,
5
] | [
"0T3p2SJtJ6X",
"ZYDvj584PDZ",
"eFqy0OjZfUmS",
"6zOWqKyMDIE",
"tPxNZ1zrlF",
"OD-BvmeR1Yg",
"DbZnBG5YmPX",
"eoAPjWeJ3PK",
"4XVcI25MMH",
"ohrJRr5JTs",
"vWFdN9AHFED",
"YrSSrsZH2vk",
"GggAs0KKnv2",
"nips_2022_NjeEfP7e3KZ",
"nips_2022_NjeEfP7e3KZ",
"nips_2022_NjeEfP7e3KZ",
"nips_2022_NjeEfP7e3KZ",
"nips_2022_NjeEfP7e3KZ"
] |
nips_2022_2vYmjZVT29T | Hamiltonian Latent Operators for content and motion disentanglement in image sequences | We introduce \textit{HALO} -- a deep generative model utilising HAmiltonian Latent Operators to reliably disentangle content and motion information in image sequences. The \textit{content} represents summary statistics of a sequence, and \textit{motion} is a dynamic process that determines how information is expressed in any part of the sequence. By modelling the dynamics as a Hamiltonian motion, important desiderata are ensured: (1) the motion is reversible, (2) the symplectic, volume-preserving structure in phase space means paths are continuous and are not divergent in the latent space. Consequently, the nearness of sequence frames is realised by the nearness of their coordinates in the phase space, which proves valuable for disentanglement and long-term sequence generation. The sequence space is generally comprised of different types of dynamical motions. To ensure long-term separability and allow controlled generation, we associate every motion with a unique Hamiltonian that acts in its respective subspace. We demonstrate the utility of \textit{HALO} by swapping the motion of a pair of sequences, controlled generation, and image rotations. | Accept | This paper proposes a novel type of variational autoencoder, referred to as HALO. The latent space is decomposed into a content space and a motion space, and the main contribution is the proposal to model the motion space using Hamiltonian dynamics. All reviewers agree that the idea of using Hamiltonian dynamics is interesting and novel. One main critique, which the authors agreed with, was that the operator does not contain any stochasticity and that this might be a limitation when applying the idea to model more complex data. Another remark was that the experiments are limited and experiments on less constrained data are missing. A quick look at the baseline methods revealed that they also use the same kind of datasets to evaluate their methods, so this latter concern might be of minor importance.
All in all, the potential positive outcomes of this paper outweigh its current limitations, so we recommend acceptance at this point, while urging the authors to address the remaining concerns in the final version.
| train | [
"VvDIC94tiV4",
"zEsy_ewT0F",
"TEThRlTKZqA",
"PmzjpO3D_Dy",
"lFN4oq4ycv",
"jg-48zG-hvU",
"vYmmj1SQHsM",
"4Hkyzaumbu",
"522HjYJyrpx",
"AZ6bD7BOLBa",
"SB_BBFqS0Jn",
"Xy5b3_kPEjz"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We are very thankful for your detailed feedback on the paper and for responding to the rebuttal. We appreciate it a lot. We replied to your reviews to the best of our effort and promise to incorporate feedback in the final version. Due to limited time and computational issues pointed out in our rebuttal, we consider more complex data scenarios as the future scope of the work. Please let us know if you have any other remaining concerns or need clarification; we will be happy to provide further details. Thanks again!",
" Thank you for your valuable feedback on our paper. We really appreciate your positive comments. To the best of our effort, we responded to your reviews. As the discussion period is ending soon, we are wondering whether our responses helped address your concerns. Please let us know if you need any further clarification; we will be happy to provide further details. \n\n",
" Thank you for your valuable feedback on our paper. To the best of our effort, we responded to your reviews. The discussion period is ending soon we are wondering whether our responses helped address your concerns. Please let us know if you need any further clarification; we will be happy to provide further details.",
" Thank you for your valuable feedback. We included your helpful suggestions for improving the presentation of the results. As the rebuttal phase has a 9-page limit, we have noted the suggestions and will incorporate all the rebuttal phase's clarifications in the final version of the paper. The dataset we selected for the experiments was employed in the baseline methods DSVAE, S3VAE, etc. We indeed agree it would be more interesting to look at challenging real-world problems. In this work, we wanted to demonstrate the Hamiltonian formulation provides a more principled way to think about content and motion disentanglement. The Hamiltonian formulation has many attractive properties such as reversibility of dynamics, symplectic structure in latent space and a bilinear form of energy. We demonstrated the benefits of these properties on commonly used datasets. In our appendix, we conducted extensive ablation studies to study the effect of constant energy in the motion space.\nFurthermore, we want to note our dynamical model is linear, which makes it easy to interpret (unlike existing approaches using non-linear models). The block-diagonal structure of Hamiltonian makes it easy to scale. We hope you consider our contribution in your final rating. We will be happy to address any remaining concerns in our existing validation. Extending to more complex data comes with an additional computational cost and challenges, such as handling injection/removal of energy in stochastic environments. We note these more exciting applications and will definitely extend our framework to such scenarios in future work.",
" I appreciate the updated manuscript and detailed response. I can see an improvement in some of the areas pointed out in my original review (e.g., structure/clarity of presentation) and the provided clarifications about motion/content separation, sequence length/deterioration, and the underlying datasets are helpful. I would suggest to include parts of this response in the paper as well. The rebuttal does not fundamentally change my view of the paper and I still believe that HALO is a technically interesting approach with weaknesses in its experimental validation; however, I do appreciate the additional context provided by the authors. I feel compelling experiments on less constrained data (Sprites is a very controlled setting and even the evaluation on real-world face data is still in the realm of toy data given near-perfect alignment and orientation) could make this a much stronger paper. I’m not opposed to acceptance but also do not see changes to the manuscript that are significant enough to raise my initial rating.",
" vii) **Dynamics in rotating balls dataset**\n\nFor the rotating balls experiment, we restrict the centre of a ball in sequences to move along an orbit of a fixed radius $c$ from the centre of a frame. We first sample an initial frame containing a ball whose $(x,y)$ centre lies in orbit. Next, the future steps are drawn by rotating the center of a ball in an anticlockwise direction that is sample $T$ angles in $[0, 2\\pi]$ and place $(x,y)$ center of ball at $(c\\cos (\\theta), c \\sin (\\theta))$. We generate multiple sequences by adding a small random noise to the initial location and varying the radius $c$. This dataset can be viewed as a pendulum with a known conserved quantity. The results on the toy dataset demonstrate we can use our approach to swap the content of sequences rotating with a different conserved energy.\n\nviii) **Length of sequences used. Scalability to higher resolution images**\n\nThe length of sequences used for training and evaluation is chosen to be consistent with the baseline comparison methods. As you observed in Appendix Figure 3, we demonstrate longer length sequences $32$. The computational resources are the main bottleneck in the training and evaluation high-resolution sequences. In each training batch, the model's input is of size $B\\times T\\times C\\times W\\times H$. A large value of $T$, $W$ and $H$ increase the cost of storing intermediate network outputs and the model. We think our approach can be scaled up for high-resolution images by incorporating more sophisticated encoder-decoder architectures and using large GPUs. We want to remark our main claim is to demonstrate the benefits of symplectic geometry for disentangling motion from content in image sequences. In this work, we present the results on three commonly used datasets. Our model is more general and can be combined with other developments on scaling VAEs to high-fidelity images.\n\n\nix) ***Deterioration in decoder output***\n\nOur experiments did not observe deterioration even when used for long trajectory prediction. As also demonstrated in Figure 3, the overall energy stays constant over time. We use an explicit solution of a dynamical system in the form of matrix exponential to unroll the trajectories, unlike other models where dynamics tend to deviate from the data manifold due to the accumulation of error over timesteps. This accumulation error results in visual distortions or dynamics' static nature. However, this is not the case in our formulation. We also remark the deterioration in images does appear when sampling a new content variable from the prior in a latent space and using that with the motion variables to unroll the trajectory. This effect is due to the mismatch between aggregated posterior and prior. Consequently, there are regions in latent space with low density. Such issues can be addressed using a $\\beta$ formulation of VAE and carefully tuning the $\\beta$ parameter. \n\n\nx) ***Details of pretrained classifier***\n\nWe have provided architecture details of a pretrained classifier in the Appendix with references in the main paper. We have also fixed the typos. Due to limited space, we couldn't include further details on ELBO in the main paper at this point. We promise to include intuition of ELBO in the final version and address any further readability issues.",
" We thank the reviewer for their valuable assessment of our work. We are glad our work interests you. Our response to your concerns are below:\n\ni) ***Clarification on learning $H_k$ matrices***\n\nThe Hamiltonian operator $H=JM$, where $J$ is a fixed matrix as stated in definition 1. The matrix $M$ is parameterised as a learnable symmetric-matrix as $M = 0.5(A+A^T)$, where entries in $A$ are learnable real-valued parameters. Many thanks for your comments. We have updated it in the paper. We will rectify any further clarification needed in the final version.\n\nii) ***Sample efficiency of Hamiltonian inductive bias***\n\nIndeed, sample efficiency is an interesting question. In our current work, we did not investigate it. We consider it will be interesting to examine the applicability of Hamiltonian operators in model-based reinforcement learning where an environment is composed of various complex physical dynamics. Hamiltonian inductive bias can help learn a model of the world in such scenarios. We consider it as the future scope of the work. \n\n\niii) ***Handling complex stochastic videos***\n\nGiven the initial coordinate $(p_0, q_0)$ and its Hamiltonian energy, Hamilton's equation uniquely determines the path in phase space. For highly stochastic dynamics can introduce a mechanism to inject/remove energy from the system. This can be possible, for instance, by making $H$ dependent on time. But this would imply a need for a more sophisticated integration scheme. Thank you for pointing out an interesting question. We indeed consider this as a potential limitation in the current formulation and would like to address it in future work.\n\n\niv) ***How does the method guarantee that no motion information leaks in the global content vector?***\n\nThe point of our model is to capture the content and motion across multiple different sequences with different motions. The neural network is shared between these sequences, and the only information about any individual image in the sequence or the dynamic process is in the representation. Indeed the neural decoder will capture information that is common across all frames in all images. However, because it is common for each frame it cannot capture any motion in itself. Furthermore, because the content part is the same for all parts of the motion, it too cannot contain information that distinguishes the current motion position information. The optimisation does nothing in these terms. This decomposition is enforced by the representational structure itself.\n\n\nThe global vector $z$ is a summary statistics of a sequence. A naive way would be to take an average of the frame level encodings. We used LSTM for a fair comparison as it was used in the baseline DSVAE paper. For any two timesteps, say 0 and 1, let the respective motion components be $q_0$ and $q_1$. Then the combined latent coordinates as an input to the decoder are $[z,q_0]$ and $[z, q_1]$. For the decoder the right temporal information in $x_0$ and $x_1$ can only come from $q_0$ and $q_1$ as $z$ is common. Thus preventing any motion from slipping to the $z$ space. Furthermore, the symplectic form (due to matrix $J$) ensures the volume element is preserved in the phase space. As a result, the vector field in phase space has a zero divergence, implying there are no points where dynamics can converge/diverge, ensuring no static information in phase space. \n\nv) ***References to the related work***\n\n Thank you for pointing out the related paper. 
In our comparison, we included methods with similar architectures and parameter counts. We will include the additional references in the final version. Our main objective was to demonstrate a formulation that can leverage the Hamiltonian structure for disentangling content and motion. We indeed agree it would be interesting to investigate the application in challenging scenarios with multiple moving objects. We appreciate the suggestions and would like to generalise our approach in future work.",
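A sketch of the parameterisation described in item (i) above, written in PyTorch (our choice of framework); the block layout of the fixed symplectic matrix $J$ is assumed to be the standard $[[0, I], [-I, 0]]$ form:

```python
import torch
import torch.nn as nn

class HamiltonianOperator(nn.Module):
    def __init__(self, dim):  # dim = phase-space dimension, must be even
        super().__init__()
        assert dim % 2 == 0
        n = dim // 2
        J = torch.zeros(dim, dim)
        J[:n, n:] = torch.eye(n)
        J[n:, :n] = -torch.eye(n)
        self.register_buffer("J", J)                          # fixed, not learned
        self.A = nn.Parameter(torch.randn(dim, dim) * 0.01)   # free real parameters

    def forward(self):
        M = 0.5 * (self.A + self.A.T)   # symmetric by construction
        return self.J @ M               # Hamiltonian operator H = J M
```

Because $M$ is rebuilt from $A$ at every forward pass, the Hamiltonian structure is preserved throughout optimisation without any projection step.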
" We thank the reviewer for their valuable assessment of our work. Our responses to your concerns are below:\n\ni) **Text flow in results** \n\nThank you for your helpful comments on the presentation. We have polished the results section per your feedback and fixed the typos. \n\nii) **How does the optimization process ensure that not all relevant information is pushed to the decoder**\n\nThe point of our model is to capture the content and motion across multiple different sequences with different motions. The neural network is shared between these sequences, and the only information about any individual image in the sequence or the dynamic process is in the representation. Indeed the neural decoder will capture information that is common across all frames in all images. However, because it is common for each frame it cannot capture any motion in itself. Furthermore, because the content part is the same for all parts of the motion, it too cannot contain information that distinguishes the current motion position information. The optimisation does nothing in these terms. This decomposition is enforced by the representational structure itself.\n\nSpecifically, consider global $z$ shared across all timesteps. For any two timesteps, say 0 and 1, let the respective motion components be $q_0$ and $q_1$. Then the combined latent coordinates as an input to the decoder are $[z,q_0]$ and $[z, q_1]$. For the decoder the right temporal information in $x_0$ and $x_1$ can only come from $q_0$ and $q_1$ as $z$ is common. Thus preventing any motion from slipping to the $z$ space. Furthermore, the symplectic form (due to matrix $J$) ensures the volume element is preserved in the phase space. As a result, the vector field in phase space has a zero divergence, implying there are no points where dynamics can converge/diverge, ensuring no static information in phase space. \n\n\niii) **How is the Hamiltonian structure preserved**\n\nIndeed, the matrix $M$ and $J$ ensure the Hamiltonian nature of the latent space. Specifically, the Hamiltonian operator $H=JM$, where $J$ is a fixed matrix as stated in definition 1. The matrix $M$ is parameterised as a learnable symmetric-matrix as $M = 0.5(A+A^T)$, where entries in $A$ are learnable real-valued parameters. Many thanks for your comments. We have made updates to the paper. We will rectify any further clarification in the final version.\n\n\niv) **Design of results in Table 1**\n\nWe want to clarify the central theme of our paper is using Hamiltonian operators for disentanglement of motion from content variables. We do a comparison with baseline methods on the disentanglement task in Table 2. Table 1 investigates the effect of different geometry by evaluating the reconstructed and predicted sequences. The Hamiltonian operators give rise to a symplectic geometry in the latent space. Introducing further restrictions on the matrix $H$ results in a different geometry in phase space; for instance, restricting to skew-$H$ will confine the operator to rotations which are easy to interpret. For this purpose, we sampled a starting step from a ground-truth sequence and predicted the future trajectory, compared against the available ground truth. We observe that imposing extra constraints on $H$ did not improve perceptual scores. For the rest of the experiments, we only consider $H$ and evaluate commonly used disentanglement metrics as done in baseline methods. 
\n\nIndeed the evaluation can be done directly on the disentanglement task using evaluation metrics similar to Table 2. We chose to look at perceptual scores and select the best operator so our central disentanglement evaluations could be more concise, especially given the number of qualitative figures added to the paper.\n\nv) **Missing reference to positional encoding**\n \nDue to limited space, we discuss the positional encodings in Appendix Section 3.3. We have added a reference in the main paper. \n\nvi) **Diversity of sequence and clarification on Figure 4**\n\nIn Figure 4, we compare the original reconstruction and the generated sequences. We use the first two timesteps to obtain the initial position and momentum coordinates and unroll the trajectory in latent space. Since the momentum is determined from previous frames, we can compare the unrolled trajectory with the known target trajectory. For generating diverse sequences, we can sample different initial momentum variables in latent space and combine them with the position to unroll the trajectory. We do that in the image-to-sequence results. ",
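A sketch of the trajectory unrolling referred to in items (vi) above and (ix) earlier — the explicit matrix-exponential solution of a linear Hamiltonian system; the function and variable names are ours:

```python
import torch

def unroll(H, s0, T):
    # H: (d, d) Hamiltonian operator; s0: (d,) initial (p_0, q_0) coordinates
    # inferred from the first two frames. exp(t * H) is the exact solution of
    # the linear Hamiltonian ODE, so energy is conserved and no integration
    # error accumulates over long horizons.
    ts = torch.arange(T, dtype=s0.dtype)
    return torch.stack([torch.matrix_exp(t * H) @ s0 for t in ts])
```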
" We thank the reviewer for their valuable assessment of our work. Our responses to your concerns are below:\n\ni) **SSIM and PSNR in Table 1**\n\nWe want to clarify the central theme of our paper is using Hamiltonian operators for disentanglement of motion from content variables. We do a comparison with baseline methods on the disentanglement task in Table 2. Table 1 investigates the effect of different geometry by evaluating the reconstructed and predicted sequences. The Hamiltonian operators give rise to a symplectic geometry in the latent space. Introducing further restrictions on the matrix $H$ results in a different geometry in phase space; for instance, restricting to skew-$H$ will confine the operator to rotations which are easy to interpret. For this purpose, we sampled a starting step from a ground-truth sequence and predicted the future trajectory, compared against the available ground truth. We observe that imposing extra constraints on $H$ did not improve perceptual scores. For the rest of the experiments, we only consider $H$ and evaluate commonly used disentanglement metrics as done in baseline methods. \n\nii) **Choice of evaluation metrics**\n\nThe choice of disentanglement metrics is based on their use in various baseline methods: DSVAE, S3VAE, and MoCoGAN. We want to remark that intra-entropy and inter-entropy are fairly more informative and, when combined, are equivalent to the Inception score. The log of inception score is equal to the mutual information between variable $y$ and $x$ [1]. Specifically, $\\log (IS) = MI(y;x)$, the $ MI(y;x) = H(y) - H(y|x)$ Using this we can write $IS=e^{(H(y) - H(y|x))}$, where $H(y)$ is an inter-entropy and $H(y|x)$ is an intra-entropy term reported in the paper. Looking at two scores provides a better view of generated samples. We can add IS in the final version. \n\niii) **Comparison of conditional and unconditional models**\n\nWe compare both conditional and unconditional versions of our model. The improvement in accuracy under Hamiltonian dynamics without\nincorporating action variables is 5\\% over the best S3VAE,\n11\\% over MoCoGAN and 21\\% over DSVAE, which is still significant over the baselines. We want to note that the Hamiltonian dynamical model is linear, making it simple and easy to interpret, which is not the case with other methods. \n\niv) **Ablation Study**\n\nIn the ablation study for the motion transfer, we swap the motion variables of two sequences obtained using an encoder network. This evaluation compares the representation of the encoder and doesn't evaluate the generative model of dynamics. In the qualitative valuation on image to sequence, we map an image to a latent space and use the RNN to unroll the future trajectory in the latent space where it seems not to work. We hypothesise this could be because RNN relies on a history of frames to predict the future. During training, the temporal structure of timesteps is helpful in learning representation. However, this failed on the image to sequence task as there is no history of frames to produce the dynamics.\n\nv) **Clarification on $H$ and skew-$H$ operators**\n\nBy $H$ we refer to the Hamiltonian of the form $H=JM$ where $M$ is a symmetric matrix, and in skew-$H$, we further restrict $H$ to be a skew-symmetric matrix. We discuss it in lines 222-228 of the paper. We apologise for the lack of clarity. 
We have updated the description in Section 4.1.\n\n\nvi) **Handling a large number of actions**\n\nIn our formulation, the full Hamiltonian matrix takes a block-diagonal form where each block is the Hamiltonian of an action in its respective subspace. The block-diagonal structure of the matrix makes it easy to parallelise and scale to a large number of actions. The knowledge of the action space provides a valuable notion of disentanglement for a dataset with diverse dynamics. It offers the potential for exciting applications like (1) generation of controllable dynamics and (2) modelling complex motion as a composition of primitive motions. Our unconditional model shows such dependence is not a strict requirement. \n\n\n[1] Barratt, Shane and Rishi Sharma, \"A note on the inception score.\"",
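A small numerical sketch of the entropy relation stated in item (ii) above (the function name is ours); it computes $IS = e^{H(y) - H(y|x)}$ directly from classifier outputs:

```python
import numpy as np

def entropies_and_inception_score(probs):
    # probs: (N, C) array of classifier outputs p(y|x_i); entropies in nats
    p_y = probs.mean(axis=0)                                          # marginal p(y)
    inter = -np.sum(p_y * np.log(p_y + 1e-12))                        # H(y)
    intra = -np.mean(np.sum(probs * np.log(probs + 1e-12), axis=1))   # H(y|x)
    return inter, intra, np.exp(inter - intra)                        # IS = e^{H(y)-H(y|x)}
```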
" This paper proposes a deep state space model for videos. The dynamics are defined by linear Hamiltonian Dynamics, and the motion matrix is further assumed to be block diagonal in order to separate different categories of actions. Like previous works, a latent variable z is introduced for explaining content and kept fixed for all frames. Experiments are carried out on Sprites and MUG to demonstrate the efficacy. Strengths:\n\nThe paper is well written and easy to follow. The idea of introducing Hamiltonian dynamics as an inductive bias for explaining repetitive or cycled motions in videos are reasonable and natural. The theoretical derivation is technically sound.\n\nWeaknesses:\n\nMy main concern is that the proposed method is not well supported by the experimental results:\n- I don't understand how SSIM and PSNR can be used for evaluating \"generation quality\", as generated samples are supposed to be different from the training datasets. I can only imagine that the numbers in Table 1 are reported for reconstruction, in which case it is not for generation quality as described by the paper. Also, no baseline methods are compared in terms of reconstruction.\n- There's no commonly used metrics reported that are designed for really evaluating sample quality, such as FVD or Inception scores.\n- For disentanglement evaluation, it is not fair to use the conditional Halo model to compared with the baselines which are trained unconditionally. The only fair way is the compare unconditional Halo models with the baselines where the performance of the proposed model does not stand out. \n- In the ablation study section, it makes me confused that the paper mentioned RNN or linear dynamic model cannot make image move but it also showed the results of swapping motions of the two baselines where the video sequences are changing over time. Also it sounds wired to me that the linear/RNN dynamics cannot do image-to-seq as those have been applied by many classical state space models. \n\nIn experiments, two operators H and skew-H are compared but there's no official definition of skew-H in the previous sections. \n\nThe model assumes that the action space can be divided into subspaces where each subspace represents a unique action. This representation can be highly ineffective if the number of actions goes huge. \n Please see comments above. Yes. ",
" The paper proposes Halo, a novel type of variational autoencoder with structured latent space and demonstrates its applications to different types of (controlled) video generation tasks. The main contribution is a principled decomposition of the latent space into a content space and a motion space, where the motion space is modeled using Hamiltonian dynamics. The structural constraints (e.g, symmetries) imposed by these dynamics induce desirable properties like reversibility and volume-preservation, enabling the conservation of (learned) quantities. **Strengths**\n+ Video generation is an important and challenging task with a rich history. The proposed approach takes a fresh perspective on this topic and explores the benefits of inductive bias based on principles rooted in the physics community.\n\n+ Sections 1-3 (introduction, related work, method) are well-organized and easy to follow: the main contributions are clearly formulated, the figures are helpful in understanding architectural details, and the mathematical notation is (mostly) consistent. The related work section is commendable and provides a comprehensive overview of the field. The paper does have a fairly strong physics flavour and I would recommend to provide stronger guidance for an audience which may not be familiar with topics that are not part of core ML, such as group action, phase space, conservation law, and symplectic geometry.\n\n+ The Hamiltonian design of the latent space is interesting and novel, and the advantages of reversible and symplectic latent dynamics make intuitive sense. I also appreciate the principled derivation of the dynamical model $f$ from a constant-energy perspective (l.206-l.214). The variational inference section is less clear and I would encourage the authors to move at least some intuition about the ELBO from the Appendix into the main paper.\n\n**Weaknesses**\n\n- The weakest part of the paper are its experiments, both in terms of their presentation and design.\n\t- Presentation: \n\t\t- The structure of the experiments is confusing throughout section 4. For example, the description of the Sprites and MUG datasets starts in the middle of the “Rotating Balls” paragraph (l.262). Likewise, the description of the baseline comparison starts in the middle of the “Quantitative Evaluation” section (l.302). Grammar and text flow in the experiment section also feel unpolished. Finally, none of the figures in this section have proper axes/labels and the reader needs to count rows and infer the content from the caption or even the main text (l.348-350 for Figure 5 (left)).\n\n\t- Design: \n\t\t- Since Table 1 does not include a comparison to other baselines it is not possible to assess whether the presented SSIM/PSNR/MSE scores are competitive or not. Why not use the same metrics as in Table 2?\n\t\t- Table 3 is not mentioned in the text and seems to be based on the single example of Figure 5 (right), which is not enough to make any general statements. The positional encoding mentioned in this table is not explained and not supported by any qualitative evidence.\n\t\t- In Figure 4 (left) the reconstructed and generated sequences look fairly similar, which can be an indication of low diversity.\n\t\t- It is unclear how the sequences of the rotating balls dataset were generated as the mentioned constraint does not specify any temporal pattern. What is the dynamic model used here?\n\t\t- The sequences are very short (8/16 frames) and small (64 x 64). 
What is the main bottleneck that prevents application to high-fidelity image sequences?\n\n**Minor comments**\n\n- Typos: Figure 1 (“alongwith”), l.224 (“long term term”), l.234 (“(6))”), l.258 (as”blue”), l.283 (“EvaluationWe”)\n\n- The Appendix provides valuable information about the ELBO objective, terminology, and network structures, but the main paper does not refer to it often enough (e.g., content/position/momentum network).\n\n- The paper follows a top-down approach, first introducing high-level structures and then filling in the details. While that is a reasonable approach, it does mean that readers will have to read the paper twice (or go back to previous paragraphs), because the motivation for some design choices remains initially unclear. One example is the structure of the phase space.\n\n**Summary**. I appreciate the technical formulation of this paper but am on the fence due to the weak and unconvincing experiments. I encourage the authors to address the concerns above as well as the questions below. - l.201f: *“without significant loss of generality, we propose a linear Hamiltonian system in the latent layer, relying on the deep neural network mapping to data space to handle all nonlinear aspects.”* This comment raises a larger question: how does the optimization process ensure that not *all* relevant information is pushed to the decoder, while the carefully designed latent space models only trivial dependencies? In the same vein, how does the optimization process guarantee that no motion data is being pushed into ${\\bf z}$, thus completely circumventing the phase space?\n\n- It is not entirely clear to me how the Hamiltonian nature of ${\\bf H}_k$ is preserved during the optimization process. My understanding is that the symmetry of ${\\bf M}_k$ ensures this, but I would appreciate a confirmation.\n\n- l.224: *“The symplectic geometry proves useful for long term term *(sic!)* sequence generation.”* The sequences in the main paper are all relatively short and even the experiments on longer sequences shown in the Appendix are only 32 frames, i.e., a little over a second at 25fps. What is the limit before the generated sequences visibly deteriorate or become static?\n\n- What architecture does the pre-trained action prediction classifier use and how was it trained? - The paper flags potential misuse in the area of fake video data generation. \n- The paper does not contain a limitations section.",
" This paper deals with the task of generating image sequences. Specifically, the authors propose a method called Halo that allows to disentangle the content from the motion in image sequences, in the VAE framework. They do so by separating the latent space in two spaces: 1) the content space, a global content vector that summarizes the image sequence; 2) the motion space, a sequence of time-dependent vectors that capture the dynamics of the sequence. The main contribution of the authors is to model the motion space with Hamiltonian dynamics. The authors claim that Hamiltonian dynamics have good inductive biases for sequence generation, such as reversibility of the motion. Experiments on simple image sequences are performed to prove the quality of their model. Strengths\n* 1) The latent Hamiltonian operator is quite generic. It could be extended and used with other families of deep generative models for sequences, and thus be of great interest for practitioners. \n* 2) Halo achieves SOTA scores on motion/content disentanglement metrics. Ablations with similar architecture and other sequence models are convincing (especially Table 1 of Supplementary Material). \n\n\nWeaknesses\n* 1) Stochasticity is only allowed by the gaussian sampling, but there is no stochasticity in the Hamiltonian operator. Thus, Halo can only generate one motion vector given input frames. However, trajectory prediction is a highly stochastic process. This could be a limitation that makes scaling to more complex environments difficult.\n* 2) How does the method guarantee that no motion information leak in the global content vector? Since the encoder + LSTM that generate the global vector see the whole input sequence, it could also capture some information about the motion.\n* 3) Limited evaluation: the model is tested on three simple datasets. Could be interesting to see how it performs on more complex datasets, with different/moving backgrounds.\nIt also misses comparisons with recent works. E.g.:\na) Franceschi et al., \"Stochastic latent residual video prediction\". ICML 2020.\nb) Wang et al., \"G3AN: Disentangling appearance and motion for video generation\". CVPR 2020.\n* 4) Clarity. While the paper is well written, there lacks some implementation details on the main component of the paper: the hamiltonian operator (see questions). This affects the understandability. 1) Are $H_k$ matrices learned, or are they kept as drawn from the initialization with symmetric matrix $M_k$ and $H_k = J M_k$. If learned, how is $H_k$ constrained during optimization?\n2) Does the inductive bias of the Hamiltonian operator help in a setting with few training examples? The authors addressed the limitations."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"PmzjpO3D_Dy",
"Xy5b3_kPEjz",
"AZ6bD7BOLBa",
"lFN4oq4ycv",
"4Hkyzaumbu",
"SB_BBFqS0Jn",
"Xy5b3_kPEjz",
"SB_BBFqS0Jn",
"AZ6bD7BOLBa",
"nips_2022_2vYmjZVT29T",
"nips_2022_2vYmjZVT29T",
"nips_2022_2vYmjZVT29T"
] |
nips_2022_upuYKQiyxa_ | Optimizing Relevance Maps of Vision Transformers Improves Robustness | It has been observed that visual classification models often rely mostly on spurious cues such as the image background, which hurts their robustness to distribution changes.
To alleviate this shortcoming, we propose to monitor the model's relevancy signal and direct the model to base its prediction on the foreground object.
This is done as a finetuning step, involving relatively few samples consisting of pairs of images and their associated foreground masks. Specifically, we encourage the model's relevancy map (i) to assign lower relevance to background regions, (ii) to consider as much information as possible from the foreground, and (iii) we encourage the decisions to have high confidence. When applied to Vision Transformer (ViT) models, a marked improvement in robustness to domain-shifts is observed. Moreover, the foreground masks can be obtained automatically, from a self-supervised variant of the ViT model itself; therefore no additional supervision is required. Our code is available at: https://github.com/hila-chefer/RobustViT. | Accept | Initially, this paper received positive reviews. The rebuttal addresses the remaining concerns. All reviewers feel that the contributions of this work are sufficient to merit its acceptance. The area chair agrees with the reviewers and recommends it be accepted at this conference. | train | [
"GQjSo_9NJJ8",
"Ebst6DGCMB3",
"NrdACp8KqaY",
"-hjJkg92TTE",
"U4SEw6We6U9",
"MFmddy_DvHa",
"OQR48A2JnZ",
"Xs2vHRAosS-",
"Kx1M1YXIIzL",
"FsFA3lL8ei1",
"8WqioCggj9t",
"NKfGWmTJ0jD",
"U4C0GC2XR5w",
"-1brURHx6r0",
"sXv9iTSb2PM",
"U0e_CX4xBQ",
"l4I5IgaCUxp",
"ZTjsOyDf4fa",
"t2r6oVx4fI"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank the authors for an extended discussion. \n\n1. Whether supervising GAE faithfully changes the inner mechanisms of the Transformer\n\nAfter going through the references [8] on GAE and the ICML 2022 paper [35] evaluating multiple explanation methods on attention-based models, I'm convinced that GAE is indeed faithful. I'm also convinced that supervising GAE will improve the faithfulness of the underlying model, based on the shown results in the paper. \n\nAfter the discussion, I think the source of my dissatisfaction has to do with the relevance maps themselves and not the submission. I still do not understand the reasoning behind the pointwise multiplication between the activation and gradient maps and the pointwise positive clipping that follows. This often works well in practice, but that does not mean that the particular set of operations is very interpretable to me. Having said that, I understand that this is way beyond the scope of the submission and therefore this shall not affect my score for this paper.\n\n2. Hyperparameter tuning\n\nThanks a lot for clarifying the resources used for HP tuning and pointing to the parts that I have missed or failed to recall. I agree that the additional information leakage is not great and the method is robust against the choice of HP. \n\nThanks again for the clarification.",
" __Re. hyperparameter tuning__\n\nWe thank the reviewer for the question, and agree that it is highly important to disclose all the details of the hyperparameter tuning process and to count the number of segmented images correctly.\n\nWhile the validation set was indeed used to tune the hyperparameters, please note that:\n1. The validation set only contains 414 examples (L. 166-167), such that even if we consider these examples as additional segmentation supervision, the validation set adds less than a single example per class. \n2. As we mentioned in our previous reply, the hyperparameter tuning was only conducted on ViT-B. The parameters were then applied without further modification to all other models. \n3. The exact same hyperparameters were applied as is to the unsupervised tests (with TokenCut), further indicating the stability of our method.\n4. It should be mentioned that our method is not sensitive to the specific selection of hyperparameters, and small changes to the selection maintain a similar improvement in robustness (L. 171-173).\n5. Assuming that only 3 labeled segmentation maps are available per class, the hyperparameter tuning process could be performed by splitting the training set to a train and validation set, where the training set contains two segmentation maps per class and the validation set consists of a single segmentation map per class. This is based on the results in Appendix G that indicate that even 2 segmentation maps per class suffice to achieve a significant improvement in robustness, in addition to the fact that currently we use 414 validation samples, therefore such a split should produce very similar hyperparameters to the ones selected by our full method.\n",
" We are grateful to the reviewer for the detailed feedback and for taking the time to engage in discussion. We apologize for the slight delay in our response which is due to the runtime of the experiments we present in our answers below.\n\nTo address the remaining questions:\n\n__Re. using GAE to improve recognition__\n\n> “The model could simply learn to overfit its GAE score map to the segmentation GT, while actually not changing the actual inner workings of the model. What prevents this from happening?”\n\nTo our understanding, and please correct us otherwise, the reviewer asks whether it may be possible that the method improves the loss by changing the explainability map without affecting the classifier itself. While the reviewer notes that the robustness scores improve (and so the classifier has changed), it is still not clear whether applying a loss on the explainability map would not move to detach it from the underlying classifier.\n\nAs we mentioned in our original response, works on evaluating faithfulness of Transformer explainability such as [35] found GAE to be most faithful, therefore the GAE mechanism produces maps that are highly correlated with the explanation of the model. Assuming this link remains after applying our method, optimizing the maps produced by GAE directly impacts the reasoning by the model.\n\nWe, therefore, conduct tests to demonstrate that GAE indeed constitutes a faithful explanation after applying our method. By testing the faithfulness after applying the method, we show that our fine-tuning modifies the underlying explanation of the prediction, which is mirrored by GAE, thus the improved explanations presented in the paper indeed reflect the reasoning by the model.\n\nWe conduct positive and negative perturbation tests for the base models (ViT-B, DeiT-B, and AugReg-B) before and after applying our method and present the area under the curve for both tests. These tests follow a two-stage setting. First, the model is used for extracting visualizations for the validation set of ImageNet. Second, we gradually mask out the pixels of the input image and measure the mean top-1 accuracy of the network. In positive perturbation, pixels are masked from the highest relevance to the lowest, while in the negative version, from lowest to highest. In positive perturbation, one expects to see a steep decrease in performance, which indicates that the masked pixels are important to the classification score. In negative perturbation, a good explanation would maintain the accuracy of the model, while removing pixels that are not related to the class. In both cases, we measure the area-under-the-curve (AUC), for erasing between $10$%-$90$% of the pixels.\n\nThe results of the perturbation tests are presented in Appendix N of the latest revision and indicate that GAE is still faithful after the finetuning we apply, and in some cases even achieves better scores than on the unchanged ViT models. Thus it is unlikely that optimizing our loss on the GAE relevance map detaches it from the fine-tuned model.\n\n> “How does the regularisation of GAE improve the actual recognition mechanism of the model? 
… It is not very intuitive how regularising such a derivative score map of the Transformer architecture leads to the change in the actual mechanism of the original model.”\n\nAssuming that the question is specifically about GAE and the intuition behind it, we would like to note that we have revised Appendix B of the paper with an in-depth explanation and intuition for the GAE method. We would be happy to discuss this further and answer any questions the reviewer may have.\n\nAdditionally, we kindly note that GAE is a supportive method we use in order to achieve our goal, and it can be replaced with other methods that extract faithful explanations for the model’s prediction. While GAE seems to be the most accurate explainability method for Vision Transformers, other explainability methods can be used to improve the salient behavior of models. This is demonstrated by the ablations added in Tab. 12 in Appendix I, which show that while other explainability methods fall short in comparison to GAE, our method is able to improve robustness even when using less reliable methods.\n",
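A sketch of the perturbation protocol described in the response above; `acc_at_fraction` is a hypothetical helper standing in for the mask-and-evaluate loop, not code from the paper:

```python
import numpy as np

def perturbation_auc(acc_at_fraction, positive=True):
    # acc_at_fraction(frac, descending): hypothetical helper returning the
    # mean top-1 accuracy after masking `frac` of the pixels, removed in
    # descending relevance order for positive perturbation (ascending for
    # negative perturbation).
    fractions = np.arange(0.1, 1.0, 0.1)        # erase 10%..90% of the pixels
    accs = [acc_at_fraction(f, descending=positive) for f in fractions]
    # lower AUC is better for positive perturbation, higher for negative
    return np.trapz(accs, fractions)
```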
" We appreciate the careful consideration of our response and acknowledge the need to discuss the ECCV'18 work and other work of the same family.\n\nWe notice that while you wrote \"I'm happy to increase my rating\" the score has not been raised yet.",
" I appreciate the efforts made by the authors to address my concerns and am sorry for the late participation in the discussion. \n\n**Re: training with gradients (Eq. 6,7 in the original submission, Eq. 8,9 in the revised version)**\n\nFor this point, I actually didn’t know that second-order derivatives can be computed by neural network frameworks, as they require the computational graph even for the gradient computation. But according to the authors’ response, this is doable, and so my concern is addressed. \n\n**Re: comparison to debiasing methods**\n\nI think both this paper and ECCV’18 paper share the idea of forcing the relevance map (or attention) to focus on a relevant region (for this paper, it’s foreground regions, and for ECCV’18 paper, it’s people regions). But as in the authors' response, the way it is optimized is very different, and I agree with the authors that it’s not straight-forward and not trivial to adopt the ECCV’18 method to classification tasks (for example, one may occasionally remove the foreground region by blocking its bounding box so that the shape doesn’t tell what it is and setting the ground-truth to a new label “none of them” or the uniform distribution, which seems not informative for training). \n\nMy point here was that, at least for me, ECCV’18 looked to give something like contrastive supervision that inherently told where to see in the image, while directly optimizing relevance maps might still have a chance to superficially optimize them, leading to less generalizability. But again, the experimental results show its generalization performance, so I think adding some discussions on this family of work is sufficient.\n\n**Re. TokenCut for entirely different distributions (e.g. X-rays)**\n\nThis conclusion is surprising to me. I appreciate the effort. \n\nFor the other points, I think the authors' responses are satisfactory for me. I'm happy to increase my rating. \n",
" Thanks for your explanation and additional experiments.\n\nThe authors provide an additional comparison with DINO and kNN results on novel classes, which strengthens the paper and addresses my main concerns. Thus, I increase my rating.",
" Thank you for the detailed response. \n\nThere were two major remaining questions that I have brought up:\n1. How does the regularisation of GAE improve the actual recognition mechanism of the model?\n2. How is the HP tuning performed? What was the objective for the HP tuning? If the objective is the robustness objective, then the HP tuning is leaking the final objective on top of the allowed resources.\n\nI do not think the authors have fully answered the questions.\n\n1. The authors explain how and why **other** attribution/explanation methods are **not** working. While this is relevant and informative, they would have answered the question more directly if they had also explained the mechanism behind their method in a greater detail. It is not very intuitive how regularising such a derivative score map of the Transformer architecture leads to the change in the actual mechanism of the original model. The model could simply learn to overfit its GAE score map to the segmentation GT, while actually not changing the actual inner workings of the model. What prevents this from happening? Indeed, the proposed regularisation of GAE is already producing improved robustness performances - which in itself tells us that something must be happening inside. But it would have been more illustrative if the authors could further *show* the mechanism more directly.\n\n2. The main concern behind the question is that one could introduce information leakage during HP tuning. For example, by tuning HP wrt the \"the saliency maps of the validation set samples\", one is introducing more number of segmentation GTs than claimed in the paper. In principle, the method is using 3 segmentation GTs per class during training + X segmentation GTs per class in the validation set during HP tuning. From the response, I can infer that the HP tuning procedure leaks certain amount of additional segmentation GTs. This is fine, but I believe the HP tuning procedure should be clearly stated and shared with the readers. \n\nGiven all that, I still believe the paper is strong and should be accepted. I'm retaining my score.",
" Thank you again for your detailed feedback and useful ideas.\n\nWe would respectfully like to follow up to see if our response addresses your concerns. We would appreciate the opportunity to discuss our work further if the response has not already addressed all concerns. \n\n\n",
" __Re. hyperparameter tuning and robustness on other datasets__\n\nTo expand over L. 170-173, the following process was followed due to resource limitations:\n1. The batch size of $8$ was the maximal size applicable for the amount of computing we have.\n2. All the hyperparameters (besides the learning rate, which was determined per model as described in L. 170-173) were determined by a grid search only on ViT-B and then applied to the other models without additional hyperparameter tuning.\n3. First, we performed a grid search between $[0, 1]$ (with jumps of $0.1$) on pairs of $\\lambda_{\\text{relevance}}$, $\\lambda_{\\text{classification}}$ (without using the foreground loss, i.e. only with $\\mathcal{L}\\_{\\text{bg}}$). Our rule of thumb was to use the highest $\\lambda_{\\text{relevance}}$ and the lowest $\\lambda_{\\text{classification}}$ possible such that the validation set accuracy did not decrease by more than $2$% (similarly to the description in L. 170-173).\n4. Finally, we grid searched between $[0, 1]$ (with jumps of $0.1$) on pairs of $\\lambda_{\\text{bg}}$, $\\lambda_{\\text{fg}}$ with the same objective. $\\lambda_{\\text{fg}}$ could not be increased beyond $0.3$ without harming the validation accuracy by more than $2$%, and since $\\lambda_{\\text{bg}}=1$ produced visual results that still contained a lot of relevance on the background, we increased it to $\\lambda_{\\text{bg}}=2$, where the accuracy was not harmed significantly, but the visualizations of the saliency maps for samples from the validation set improved.\n\nThe main objective of the grid search was to find hyperparameters such that the saliency maps of the validation set samples are improved while maintaining a similar accuracy to the original ViT-B model.\n\nWe note that due to the large variety and the size of some of the datasets we used, it is impractical to perform a grid search based on the robust results of the datasets evaluated in our paper.\n\nAdditionally, following the reviews and demonstrating our method’s improvement on completely unrelated datasets, we added a $k$-NN experiment in Appendix L of the revised paper, where we use datasets with classes that do not appear on ImageNet-1k. \n\nWe experiment with 3 such datasets. First, the iNat 2021 mini dataset [Van Horn et al. Benchmarking Representation Learning for Natural World Image Collections. CVPR, 2021] tests the improvement of natural world image classification. Secondly, the Pneumonia detection dataset of X-ray images [Kermany et al. Identifying Medical Diagnoses and Treatable Diseases by Image-Based Deep Learning, 2018] is used to test the improvement of our method for medical data. Finally, the CIFAR-100 dataset is used as an additional classification dataset [Krizhevsky et al. Learning multiple layers of features from tiny images. 2009]. \n\nThe X-rays benchmark was selected simply since it is the first one that comes up on Google for the search “x-rays dataset deep learning”. \n\nWe compare the baseline Transformers to the ones finetuned by our method on the TokenCut data, i.e., without any ground truth supervision in the form of manually extracted segmentation masks. \n\nThe main conclusions from the experiment are summarized as follows:\n1. Our method improves the $k$-NN accuracy across different settings ($k=1, k=10$) and across the Transformer models for all datasets, with the only exception being ViT AugReg small with $k=1$ on the Pneumonia detection dataset.\n2. 
For challenging datasets where the models achieve under $50$% accuracy such as the iNat dataset (which contains $10k$ classes), our method improves over the original models by a very significant margin ($+5.37$% for $k=1$, $+5.28$% for $k=10$, averaged across all 7 models).\n\nPlease refer to Appendix L for the full results.\n",
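A minimal sketch of the constrained grid search described in the reply above; `train_and_validate` is a hypothetical helper (fine-tune with the given loss weights, return validation accuracy), and the tie-breaking key encodes the stated rule of thumb (highest $\lambda_{\text{relevance}}$ first, then lowest $\lambda_{\text{classification}}$):

```python
import itertools

def grid_search(base_acc, train_and_validate, max_drop=0.02):
    """Search [0, 1] in jumps of 0.1; keep pairs whose validation accuracy
    stays within max_drop of the baseline, preferring the highest
    lambda_relevance and then the lowest lambda_classification."""
    grid = [i / 10 for i in range(11)]
    best_key, best_pair = None, None
    for lam_rel, lam_cls in itertools.product(grid, grid):
        acc = train_and_validate(lam_rel, lam_cls)  # hypothetical fine-tune + eval
        if acc >= base_acc - max_drop:
            key = (lam_rel, -lam_cls)  # prefer high lam_rel, then low lam_cls
            if best_key is None or key > best_key:
                best_key, best_pair = key, (lam_rel, lam_cls)
    return best_pair
```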
" We thank reviewer FGu4 for the very positive and comprehensive comments. We appreciate the reviewer’s in-depth analysis of our work and attention to detail, including information and experiments provided in the appendices. \n\nPlease note that all references to the text refer to the revised version of the paper\n\n __Re. intuition for the explainability method__\n\nWe thank the reviewer for bringing this point to our attention. Following the review, we have revised Appendix B to include further explanations and intuitions. \n\nDue to the 9-page limit, we were unable to move details from Appendix B to the main paper in the revised version of the paper. Section 3 of the paper will be edited with more details from Appendix B for the camera ready version, which will have an extra page.\n\nIn a nutshell, using pure attention weights as an explanation raises two issues:\n1. Each attention layer contains several attention heads. Previous works such as [Voita et al. Analyzing Multi-Head Self-Attention: Specialized Heads Do the Heavy Lifting, the Rest Can Be Pruned. ACL, 2019] demonstrates that different attention heads have different purposes, and not all heads contribute equally to the model's prediction. \nTherefore, performing aggregation over attention heads in each attention layer is not trivial, and simple averaging will account for irrelevant heads.\n2. The Transformer architecture is built on several self-attention layers; each further contextualizes the input data and exchanges information between tokens. As such, it is unclear if the last attention layer tokens still represent the original input or a mixture of the input with context added by the previous layers.\n\nTo mitigate both issues, GAE proposes to:\n1. Use gradients as weights for the different attention heads. Highly important heads will receive a positive weight, while the unimportant ones will receive a negative or very small weight. The ReLU operation eliminates the negative contributions to avoid considering irrelevant heads. As a result, the head averaging is based on a weighting that considers the relevance of each head.\n2. The integration of different layers is done using matrix multiplication of the relevances per layer. \n\nWe chose to employ GAE specifically since previous works on the faithfulness of Transformer explainability [Liu et al. Rethinking Attention-Model Explainability through Faithfulness Violation Test. ICML 2022] found it to be the most faithful among all tested methods (including methods based on raw attention values). While we agree that attention has some relation to explanations, we believe that considering just raw attention is simplistic and, therefore, sub-optimal for the task of correcting the salient behavior of Transformers.\n\nTo further substantiate our reasoning, we added two new ablations to our ablation study (Tab. 12 in Appendix I). The first replaces GAE with raw attention weights, and the second replaces it with attention Rollout [Abanar et al. Quantifying attention flow in transformers. ACL, 2020], which combines the attention maps in all layers linearly using matrix multiplication.\n\n\n\nThe main conclusions from the experiment are summarized as follows:\n1. For all datasets containing shifted distributions (i.e., all datasets except for INet val, INet-v2), our method with GAE outperforms the other variants.\n2. 
For ViT-B, the use of raw attention (labeled “Attention instead of [9]”) harms some datasets significantly (e.g., INet-A, SI-rot.), while for DeiT-B, the use of Rollout (labeled “Rollout instead of [9]”) harms some datasets significantly (e.g., ObjNet, SI-loc.). This indicates that both variants are inconsistent and cannot be used as reliable explanations.\n\nDue to the current 9-page limit, the results are enclosed in Appendix I and will be moved to the main text for the camera-ready version, which will have an extra page.\n\nWe hope this answers the reviewer’s concern but would be happy to further clarify otherwise.\n\n---\n\n__Re. application for CNNs__\n\nWe completely agree that this technique may be useful for CNN-based classifiers as well. We also concur that CAM-based explanations such as Grad-CAM [Selvaraju et al. Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization. ICCV, 2017] can be applied in such a case. We opted to focus on ViTs as they are rapidly becoming the default model of choice for classification tasks.\n\n---\n\n__Re. training from scratch__\n\nIndeed, limited resources played a large role in the selection to perform fine-tuning over training from scratch. While we believe that training with this regularization from scratch may produce an even higher increase in robustness, we note that there’s an advantage in a fine-tuning-based method as it can be applied to any model with relatively modest resources, and it produces results quickly. \n",
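The GAE aggregation described in this reply can be sketched as follows. This is an illustrative reconstruction from the description above (gradient-weighted heads, ReLU, head averaging, and layer integration via matrix multiplication), not the authors' implementation:

```python
import torch

def gae_relevance(attn_maps, attn_grads):
    """attn_maps / attn_grads: per-layer [heads, tokens, tokens] tensors, where
    the grads are gradients of the target logit w.r.t. the attention maps."""
    num_tokens = attn_maps[0].shape[-1]
    relevance = torch.eye(num_tokens)  # start from identity: each token explains itself
    for attn, grad in zip(attn_maps, attn_grads):
        # gradients weight the heads; ReLU drops negative contributions; then head-average
        cam = (grad * attn).clamp(min=0).mean(dim=0)
        relevance = relevance + cam @ relevance  # integrate layers via matrix multiplication
    return relevance
```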
" __Re: TokenCut training distribution and robustness__ \n\nTokenCut employs a pre-trained DINO network [Caron et al.. Emerging properties in self-supervised vision transformers. ICCV, 2021], which is trained on the ImageNet-1k data without labels via self-distillation. \n\nFollowing the reviews, we have added Appendix K to the revised version of the paper, where we present comparisons between DINO and our unsupervised method. To maintain a fair and unbiased comparison, we benchmark DINO against DeiT and our fine-tuned version of it, since as mentioned in their paper, DINO training and fine-tuning are based on DeiT.\n\nWe employ 2 variants of DINO in our comparison. The first is a linear probing version, where a linear classification head was trained on top of a frozen network, and the second is a fine-tuned version of DINO where the entire network was modified. The application of our method to DINO after the authors’ ImageNet fine-tuning process could be beneficial, since their fine-tuning is done in a supervised manner, and could impact the salient behavior of DINO for the worse (i.e. eliminate some of the robustness benefits that arise from self-supervision training). \n\nIn both cases, the application of our method does not use supervision in the form of manually labeled segmentation masks. This way, our method is applied in a way that is congruent with self-supervised learning.\n\nThe main conclusions are summarized as follows:\n1. The linear probing version of DINO is not able to measure up against the performance of even the original, unchanged DeiT model.\n2. The fine-tuned version of DINO significantly improves accuracy and robustness over the original, unchanged DeiT model.\n3. Even when comparing DINO's fine-tuned version with our version of DeiT, our model outperforms DINO in $5$ out of the $7$ robustness datasets that are not from the ImageNet distribution (INet-A, ObjNet, SI-loc., SI-rot., SI-size), while the two others (INet-R, INet-Sketch) are datasets that contain sketches, cartoons, and art, for which our method is less effective (due to the absence of background information).\n4. Our method improves robustness for the fine-tuned version of DINO, indicating that the supervised fine-tuning process could cause changes for the worse in the salient behavior of the method, which are rectified by our method.\n\nPlease refer to appendix K of the revised version for additional details and the full results.\n\n---\n\n__Re. TokenCut for entirely different distributions (e.g. X-rays)__\n\nWhile TokenCut was trained on ImageNet, the improved representations by our method have a significant positive influence even on data from distributions that are inherently different than the ImageNet-1k distribution. \n\nWe demonstrate this point by adding a $k$-NN experiment in Appendix L of the revised paper, where completely different datasets are used. The Pneumonia detection in X-ray images dataset [Kermany et al. Identifying Medical Diagnoses and Treatable Diseases by Image-Based Deep Learning, 2018] is used to test the improvement by our method for medical data, the iNat 2021 mini dataset [Van Horn et al. Benchmarking Representation Learning for Natural World Image Collections. CVPR, 2021] tests the improvement on natural world image classification, and CIFAR-100 is an additional classification dataset [Krizhevsky et al. Learning multiple layers of features from tiny images. 2009]. 
\n\nNote that the X-rays benchmark was selected based on the reviewer’s example and that this specific dataset is simply the first one that comes up on Google for the search “x-rays dataset deep learning”. \n\nWe compare the baseline Transformers to the ones finetuned by our method on the TokenCut data, i.e., without any ground truth supervision in the form of manually extracted segmentation masks. \n\nThe main conclusions from the experiment are summarized as follows:\n1. Our method improves the $k$-NN accuracy across different settings ($k=1, k=10$) and across the Transformer models for all datasets, with the only exception being ViT AugReg small with $k=1$ on the Pneumonia detection dataset.\n2. For challenging datasets where the models achieve under $50$% accuracy such as the iNat dataset (which contains $10k$ classes), our method improves over the original models by a very significant margin ($+5.37$% for $k=1$, $+5.28$% for $k=10$, averaged across all 7 models).\n\nPlease refer to Appendix L for the full results.\n\n",
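For readers who want to reproduce a $k$-NN evaluation of this kind, a rough sketch follows; the feature extractor (here, a timm-style `forward_features` call taking the [CLS] token) is an assumption, not necessarily what the authors used:

```python
import torch
from sklearn.neighbors import KNeighborsClassifier

@torch.no_grad()
def knn_accuracy(model, train_loader, test_loader, k=10):
    """Fit a k-NN classifier on frozen features and report test accuracy."""
    def extract(loader):
        feats, labels = [], []
        for x, y in loader:
            # assumption: timm-style API returning token embeddings; take [CLS]
            feats.append(model.forward_features(x)[:, 0].cpu())
            labels.append(y)
        return torch.cat(feats).numpy(), torch.cat(labels).numpy()

    X_tr, y_tr = extract(train_loader)
    X_te, y_te = extract(test_loader)
    return KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr).score(X_te, y_te)
```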
" We thank reviewer y4aF for the detailed feedback and the useful suggestions. \n\nPlease note that all references to the text refer to the revised version of the paper.\n\nAddressing items 1–3 mentioned as weaknesses, and the additional questions:\n\n__Re: training with gradients (Eq. 6,7 in the original submission, Eq. 8,9 in the revised version)__ \n\nAssuming we understood the question correctly, and please correct our understanding otherwise, the reviewer is asking how the gradients in Eq. 6,7 (Eq. 8,9 in the revised version) are used in the fine-tuning process described in the paper.\n\nThese equations, which appear in Appendix B, describe how the explainability method of GAE computes the relevancy maps. This computation relies, among other components, on the attention gradients.\n\nOur method employs the relevancy maps calculated in Eq. 6,7 (Eq. 8,9 in the revised version) to construct relevance-based losses. We kindly refer the reviewer to Section 3 of the main paper (Eq. 1,2) which specifies the two losses constructed using the relevance maps ($\\mathcal{L}\\_{\\text{fg}}$, $\\mathcal{L}\\_{\\text{bg}}$).\n \nTo optimize these loss terms using SGD, a gradient is calculated on top of the relevance maps. Since the relevance maps involve gradients themselves, this means that second-order gradients are calculated during SGD in our fine-tuning. The losses we apply are calculated directly on the relevance maps, which are derivable as a combination of pure attention weights and attention gradients.\n\nPlease see the revision of Appendix B of the paper for extended details on the calculation of the relevancy maps, and an intuitive explanation of the method. \n\nWe would be happy to further clarify or answer any other questions the reviewer may have about our method.\n\n---\n\n__Re: comparison to debiasing methods__ \n\nWe thank the reviewer for pointing us to the ECCV’18 work. \nThe goal of the ECCV’18 work is to eliminate gender bias in a CNN-based image captioning system. The method is based on masking the person in the image (the person's segmentation map is provided). For the masked samples, the method reinforces the decision not to distinguish between a man and a woman through a loss term that is called the Appearance Confusion Loss. \n\nThe two methods use segmentation maps, but there are key methodological differences beyond the completely different goals (classification vs. captioning, improving robustness vs. removing a single specific bias) and settings (Transformers vs. CNNs, fine-tuning vs. training from scratch). Most notably: our method optimizes relevancy maps directly, while the ECCV’18 work uses losses that are applied to the output distribution of the model.\n\nExporting their idea from the task of eliminating gender bias in image captioning to image classification is not trivial.\nIt is possible to mask the object using the segmentation map as a straightforward adaptation but concealing an object inherently differs from masking a person. \n\nBy masking a person, gender is obscured (the silhouette of a woman and a man is indistinguishable). Masking an object, however, still reveals significant information about the object through its shape. For example, after masking a snake, it would still be clear that the class is not \"table\", \"cat\" or \"dog\". \n\nThe second challenge is defining the confusion loss (e.g., the original loss involves confusion between men and women). 
In the case of classification, an alternative loss can require a uniform class distribution given the masked image. However, this will probably lead to a severe accuracy hit, since, as mentioned, the masked image should not receive uniform scores across different classes. \nIn the snake example, we would not expect the model to output a uniform distribution, as this would assign \"table\" the same probability as \"water snake\" for example. We would still expect the distribution to be peaked with snake classes receiving a high probability while the other classes receive a probability close to 0. Therefore, this adaptation of the ECCV’18 method to classification is counter-intuitive and probably harmful.\n\nWhile it is not directly applicable in our setting, we appreciate the reviewer for bringing this work to our attention. Inspired by this discussion, we have added a section referring to debiasing methods similar to that of ECCV’18 (see Sec. 2 L. 82-86).\n\nPlease let us know if this discussion does not address your concern regarding the ECCV’18 work.\n\n---\n\n__Re. Unsupervised fine-tuning setting__ \n\nThe unsupervised fine-tuning setting is identical to the setting of the supervised version, i.e. we use 3 examples from 500 ImageNet-1k classes. The only difference is that in the unsupervised case, the segmentation maps for the fine-tuning examples are tagged using TokenCut, as opposed to manual human tagging in the supervised setting.\n",
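A minimal sketch of one relevance-regularized fine-tuning step, as described in the reply above. The `gae_relevance_map` helper and the exact loss forms are placeholders for Eq. 1, 2 of the paper; the key detail is `create_graph=True`, which keeps the relevance map differentiable so that the backward pass computes second-order gradients:

```python
import torch.nn.functional as F

def finetune_step(model, optimizer, x, y, fg_mask, lams):
    logits = model(x)
    # hypothetical helper: per-patch relevancy map in [0, 1], built from attention
    # values and attention gradients with create_graph=True so it is differentiable
    rel = gae_relevance_map(model, logits, y, create_graph=True)
    loss_bg = (rel * (1 - fg_mask)).mean()   # suppress relevance on the background
    loss_fg = ((1 - rel) * fg_mask).mean()   # encourage relevance on the foreground
    loss_cls = F.cross_entropy(logits, y)    # preserve classification accuracy
    loss = lams["bg"] * loss_bg + lams["fg"] * loss_fg + lams["cls"] * loss_cls
    optimizer.zero_grad()
    loss.backward()  # second-order gradients flow through the relevance map
    optimizer.step()
```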
" We thank reviewer KJ1F for the positive comments and useful suggestions. We believe the experiments suggested by the reviewer will have a significant positive impact on the quality of our work, and for that, we express our sincere gratitude. \n\nPlease note that all references to the text refer to the revised version of the paper.\n\n__Re. baseline experiment using DINO__ \n\nWe concur that comparing to DINO is intriguing, especially since the unsupervised version of our method employs TokenCut, which is based on DINO’s attention maps. \n\nIn Appendix K of the revised version of the paper, we present comparisons between DINO and our unsupervised method. To maintain a fair and unbiased comparison, we benchmark DINO against DeiT and our fine-tuned version of it, since as mentioned in their paper, DINO training and fine-tuning are based on DeiT.\n\nWe employ 2 variants of DINO in our comparison. The first is a linear probing version, where a linear classification head was trained on top of a frozen network, and a fine-tuned version of DINO where the entire network was modified. The application of our method to DINO after the authors’ ImageNet fine-tuning process could be beneficial, since their fine-tuning is done in a supervised manner, and could impact the salient behavior of DINO for the worse (i.e. eliminate some of the robustness benefits that arise from self-supervision training). \n\nIn both cases, the application of our method does not use supervision in the form of manually labeled segmentation masks. This way, our method is applied in a way that is congruent with self-supervised learning.\n\nThe main conclusions are summarized as follows:\n1. The linear probing version of DINO is not able to measure up against the performance of even the original, unchanged DeiT model.\n2. The fine-tuned version of DINO significantly improves accuracy and robustness over the original, unchanged DeiT model.\n3. Even when comparing DINO's fine-tuned version with our version of DeiT, our model outperforms DINO in $5$ out of the $7$ robustness datasets that are not from the ImageNet distribution (INet-A, ObjNet, SI-loc., SI-rot., SI-size), while the two others (INet-R, INet-Sketch) are datasets that contain sketches, cartoons, and art, for which our method is less effective (due to the absence of background information).\n4. Our method improves robustness for the fine-tuned version of DINO, indicating that the supervised fine-tuning process could cause changes for the worse in the salient behavior of the method, which are rectified by our method.\n\nPlease refer to Appendix K of the revised version for additional details and the full results.\n\n---\n\n\n__Re. K-nearest neighbor testing on novel classes__ \n\nWe thank the reviewer for this suggestion and agree that a $k$-NN experiment on datasets with different classes is indeed an interesting way of testing the improvement of latent representations by our method.\n\nAppendix L of the revised paper presents $k$-NN results for 3 such datasets: iNat2021 mini [Van Horn et al. Benchmarking Representation Learning for Natural World Image Collections. CVPR, 2021] (as suggested by the reviewer), the Pneumonia detection in X-ray images dataset [Kermany et al. Identifying Medical Diagnoses and Treatable Diseases by Image-Based Deep Learning. 2018], and CIFAR-100 [Krizhevsky et al. Learning multiple layers of features from tiny images. 2009]. 
\n\nThe X-rays benchmark was selected simply since it is the first one that comes up on Google for the search “x-rays dataset deep learning”. \n\nThe main conclusions from the experiment are summarized as follows:\n1. Our method improves the $k$-NN accuracy across different settings ($k=1, k=10$) and all Transformer models for all datasets, with the only exception being ViT AugReg small with $k=1$ on the Pneumonia detection dataset.\n2. For challenging datasets where the models achieve under $50$% accuracy such as the iNat dataset (which contains $10k$ classes), our method improves over the original models by a very significant margin ($+5.37$% for $k=1$, $+5.28$% for $k=10$, averaged across all seven models).\n\nPlease refer to Appendix L for the full results.\n\n---\n\n__Re. paper organization in Sec. 2__ \n\nWe thank the reviewer for bringing this to our attention. We have edited Section 2 in the revised version to include titled paragraphs and changed the order and some of the content to allow for better readability.\n\n",
" \n__Re. additional ablation tests__ \n\nPer the reviewer’s request, we have added the proposed ablations to Table 12 in Appendix I. Kindly note that when removing the classification loss, the model is at risk of mode collapse since $\\mathcal{L}\\_{\\text{bg}}$ encourages the relevance on the background to be 0. The mode collapse, in this case, would be to zero out the relevance of the entire image. In an analog manner, $\\mathcal{L}\\_{\\text{fg}}$ encourages a high relevance in the foreground and can cause a mode collapse where all the image receives a relevance of 1. When applied together (the “w/o $\\mathcal{L}_{\\text{classification}}$ (Eq. 4)” ablation in Table 4), the two losses balance each other. However, when only employing one loss without the other and not adding a regularization term, the fine-tuning would encourage a mode collapse leading to lower accuracy. Due to the current 9-page limit, the results are enclosed in Appendix I and will be moved to the main text for the camera-ready version (which will have an additional page).\n\n---\n\n__Re. vanilla ViT vs. ViT AugReg__ \n\nLoosely rephrased from the AugReg paper- ViT AugReg aims to find the correct balance between the amount of training data, the model size, and “AugReg” (augmentations and regularization) since ViT models rely on AugReg more than CNNs.\nThe authors of explain that this is due to weaker inductive biases for ViTs. By carefully studying those relations, they are able to train models using the public ImageNet-21k that obtain similar performance to similar models that were trained on a much larger dataset (JFT-300M). \n\nWe thank the reviewer for highlighting this point since we find the experiments on AugReg highly important. These experiments demonstrate that even in the presence of near-perfect augmentations and regularization, our method is still necessary to boost robustness, i.e., augmentations and regularization are not enough to ensure model robustness.\n\n---\n\n__Re. background relevance + fine-grained per-class analysis__ \n\nRegarding the reviewer’s question as to the impact of the background, we concur that the background can be a useful cue, as long as it is not assigned a higher relevance than the foreground. Our goal is not to eliminate the background relevance entirely, but rather to ensure that the relevance on the foreground is higher. For an in-depth, per-class analysis of the impact of our method on each class separately, please see Appendix J, which was attached in the submitted supplementary materials zip, and is now part of the revised pdf file.\n\n---\n\n__Re. explainability method description__ \n\nFollowing the reviewer’s questions, we have revised Appendix B of the supplementary material to further clarify our use of the GAE method. We would happily add more clarifications upon request.\n",
" \nWe thank reviewer Tjbv for the very comprehensive and positive review and for the useful comments and points for discussion. We highly appreciate the attention to detail and the consideration of experiments in the appendices.\n\nBelow are our answers to the reviewer’s questions. We would be happy to answer any further questions the reviewer may have.\n\nPlease note that all references to the text refer to the revised version of the paper.\n\n__Re. fine-tuning models with the classification loss__ \n\nWe kindly note that all the models we experiment with have been fine-tuned on ImageNet-1k, which is also the dataset we use for our relevance-based fine-tuning, i.e. the models were fine-tuned with the classification loss (Eq. 10 in the revision, cross-entropy with the ground truth class). \nThis is indeed an important clarification, as some of the models were pre-trained on other datasets (e.g. ImageNet-21k), and the phrasing in the main text can be misleading. We thank the reviewer for bringing this to our attention. The revised version has been modified to emphasize this point (L. 127-128). \n\n---\n\n__Re. hyperparameters selection for baselines__ \n\nThe difficulty with the hyperparameter search for the baselines stems from the relevance loss term (for the baselines: $\\mathcal{L}\\_{\\text{bg}}$). The goal of the baseline methods is to reduce the relevance values in the background. Therefore it is crucial that we search for hyperparameters that cause a decrease in $\\mathcal{L}\\_{\\text{bg}}$. Thus, for each baseline run, we needed to ensure that the background loss was decreasing (otherwise, the baselines wouldn’t be able to trigger a change in the salient behavior). \nTo allow for a fair comparison, we ran a grid search for each model independently to ensure that $\\mathcal{L}_{\\text{bg}}$ is indeed decreasing through the fine-tuning process. \n \nWe kindly refer the reviewer to L. 599-603, where we propose a possible explanation for the instability of this loss in the baseline methods. [Liu et al. Rethinking Attention-Model Explainability through Faithfulness Violation Test. ICML 2022] evaluates different explainability methods for Transformer-based models and has found that vanilla input gradients (as used in the InputxGradient method) violate faithfulness, i.e., do not loyally reflect the salient behavior of the models. For this reason, it is difficult to find hyperparameters to control their values, as they are not necessarily indicative of the network’s relevance and can sometimes even appear to be random. \n\nWe note that this behavior is model-dependent. For example, DeiT models were much easier to grid search, and the decrease in $\\mathcal{L}\\_{\\text{bg}}$ was noticeable for various learning rate choices, while ViT AugReg was very difficult to grid search. As can be seen from the TensorBoard training logs for DeiT and ViT AugReg with the same choice of $\\lambda_{\\text{bg}}$, $\\lambda\\_{\\text{classification}}$: https://imgur.com/a/5X0xuwh, while $\\mathcal{L}\\_{\\text{bg}}$ converges for DeiT with those hyperparameters, this is not the case for ViT AugReg.\n\n---\n\n__Re. sensitivity tests accuracy (Fig. 8 in Appendix G)__ \n\nOur sensitivity tests were conducted with the exact same hyperparameter choice as the main paper, for a fair and unbiased comparison. This, however, may be sub-optimal to some choices of the number of classes and the number of training samples per class. Given more training data, there are more update steps. 
Therefore, we hypothesize that perhaps a different learning rate scheduler or a slightly lower learning rate would remedy the small drop in accuracy, and possibly even improve the robustness further. \n\nWe note that this drop in accuracy is much less significant than the increase in accuracy compared to the baseline method, and that, as the reviewer pointed out, it is evident specifically for ImageNet-A, but less for the other robustness datasets.\n\nRegarding the layers most influenced by the fine-tuning process, we find that in general, the final attention layer is typically the most indicative of the relevance values. This is supported by an ablation done in [Chefer et al. Transformer Interpretability Beyond Attention Visualization. CVPR 2021] which shows that calculating relevance using the last layer is almost equivalent to propagating the relevance throughout the entire network. Accordingly, since our main objective is to fine-tune the relevancy maps, we find that the last attention block is most influenced by the changes.\n",
" This submission aims to improve the robustness of vision transformers (ViT) by leveraging interpretability methods during training. The proposed approach relies on a recent method for computing pixel-wise relevance maps for ViT models. The relevance map for a pre-trained ViT does not necessarily coincide with the area occupied by the foreground object. Accordingly, a pre-trained model is fine-tuned with cross-entropy loss and two regularisation terms aimed at improving the relevance map. These two terms respectively encourage positive and negative agreement of the relevance map with a foreground segmentation map. This method is evaluated on several ImageNet-adjacent robustness benchmarks, and results in either noticeable improvements (>1pts: ObjNet, INet-A, INet-R) or performance on par with the baseline (+/-1pt: INet, INet-v2, Sketch).\n\n This is a nice simple idea and it appears to improve robustness without much effort. If I understand correctly, it relies on fine-tuning a model for 50 epochs with just 1500 images per epoch (500 classes, 3 images each -- half of the total classes). It thus requires very few annotated segmentation masks relative to the full dataset. However, the method also results in decent performance with automatic segmentations obtained by a recent method called TokenCut (CVPR '22). \n\nThe experiments make sense, particularly the choice of evaluation benchmarks as well as the selection of competing methods. I like that several architectural variants & differently pre-trained instances thereof were considered, as well as repeated runs with different random seeds (see Appendix). I do have some concerns/questions with the experiments however which I list in the next section of the review. I think an opportunity was missed to conduct a more detailed ablation, and I have concerns about the hyperparameter selection procedure for the competing methods. I am surprised that increasing the amount of fine-tuning data (beyond 3*500 images) can harm performance (see Fig. 8).\n\nIn terms of clarity, I think the description of the relevance map extraction procedure (mostly relegated to the appendix) could be improved. No need to reproduce the full description from the original paper which describes it, but some clarification is necessary, e.g. I am not sure how this is initialised to an identity map? Are there any relevant knobs and dials to tweak the results?\n\nOverall, I am leaning towards tentative acceptance because I like the method and the direction. The paper makes an important point (made elsewhere too), namely that focusing on a \"single measuring stick: [accuracy]\" can only get us so far. Instead -- and this is my read -- we have to look into encouraging better model behaviour through other means, e.g. in this case making sure the model attends to relevant parts of the object. Before reaching a final rating, I would appreciate responses to the questions I raise in the rebuttal regarding the experiments.\n\nOne unstated assumption that could perhaps be interrogated a bit more is that reliance on background is by default something to be discouraged. While this probably is the case for most classes, there are other classes with more ambiguous appearance where the background is presumably useful. A more fine-grained analysis of the impact on various classes would have been quite useful. 
This issue is alluded to in section 5 but without any corresponding qualitative or quantitative analysis as far as I can tell.\n\nThere is partially such an analysis in the appendix (section H), which compares performance on the 500 training classes vs. the 500 remaining classes. These are interesting results which are briefly discussed in the main paper (section 5). There is an accuracy difference between the two sets, but it is not consistent across models and datasets. Accuracy for the training classes is on average better, but in some cases it is the other way around. Are there any ideas as to why this is the case? \n I have a few questions about the experimental evaluation:\n\nI would have liked to see a baseline which fine-tunes the model only using the classification loss. The reported results labelled \"original\" appear to refer to performance for the pre-trained model without fine-tuning. Is this correct? As such, we don't quite have a proper comparison between methods.\n\nI am also very confused by the choice of hyperparameters for the competing methods (RRR, GradCAM) -- see Appendix. I don't understand why the weight for the classification loss was not kept static for all methods together with the learning rate. What happens if you conduct a grid search for the regularisation term weights by fixing the aforementioned hyperparameters? Could you also describe the \"difficulties\" you faced while choosing the appropriate hyperparam values?\n\nWhile it is interesting that the method is effective given very little data, it should however be mentioned that (based on Fig. 8 in the appendix) the number of classes (500) and number of samples per class (3) were selected based on accuracy. Increasing the number of classes and/or samples beyond the chosen values can have a slightly negative impact, especially visible for the ImageNet-A curve. \n\nThis is a little surprising to me and I wonder why that is. Was the number of fine-tuning steps adapted to a change in the number of samples/classes? Is the improvement then a function of the number of training steps, or does the diversity of the fine-tuning data not matter at all? This is not immediately clear based on Fig. 8. It thus seems like a relatively minor adaptation of the network weights is enough to get better relevance maps and with it improved robustness. On which layers does the fine-tuning have the largest impact?\n\nI also think that an opportunity was missed when it comes to examining the impact of different loss terms. I could not find an ablation study that considers different combinations of the regularisation terms that were considered (e.g. fg only, bg only).\n\nWhat is the difference between AugReg and the corresponding vanilla ViT in terms of training?\n\n Negative societal impact of the work is not addressed ([N/A] in the checklist). ",
" The paper focuses on improving the robustness of Vision Transformers by monitoring the relevancy map of models. Acted as a fine-tuning step, the proposed method contains three losses to suppress relevance on background regions, force the model to predict using foreground information, and learn from its own predictions. Experiments on several datasets show the effectiveness of the method. Strengths:\nThe motivations in the paper were well established. It is a relatively simple idea, but the main claims are validated by experimental results and visualizations.\n\nWeakness:\n1. The paper should consider some self-supervised learning (SSL) methods using ViTs as a baseline, as SSL methods are believed to work well on out-of-distribution than supervised methods. Besides, DINO [1] also shows the property to identify the foregrounds, which may, to a certain extent, remedies the shortcoming of relying on image backgrounds to classify. Providing the performance of SSL ViTs will make the paper more convincing. \n2. Adding the $L_{classification}$ on the fine-tuning phase, the method is limited to pre-trained classes. What if the out-of-distribution datasets have some semantic-different classes (iNaturalist or other natural datasets with different classes), would the K-nearest neighbor testing of these novel classes still show better performance? \n3. Also, the organization of the paper is not ideal. For instance, in Section 2, the description of validation datasets and related works are mixed together. 1. Provide a baseline experiment using DINO pretraining checkpoints.\n2. What if the out-of-distribution datasets have some semantic-different classes, would the K-nearest neighbor testing on these novel classes still show better performance than the baseline? As mentioned in the paper, only parts of the classes used during fine-tuning improve the performance of the other classes. Therefore, the method should also work well in unseen classes?\n Yes, the authors have addressed the limitation. The potential negative societal impact is not mentioned, which is okay as the paper focuses on improving the robustness of general models.",
" This paper presents a method for robustifying Transformers-based image classifiers against different image distributions, assuming that better attention improves the generalization performance. The main idea is to force the relevance map (or an aggregation of attention maps in Transformers) to focus on foreground objects. The method generates a relevance map with [8] and gives a manual or unsupervised segementation map as a preferred relevance map. The advantage of the method is experimentally demonstrated. ## Strengths\n\n1. The method is simple but actually improves the performance. \n2. A good set of comparisons and ablation are presented. \n\n## Weaknesses\n\n1. According to Eqs. (6) and (7), a relevance map is based on gradients. I guess the classifier (or the model) is trained through relevance maps, and if this is the case, more details on how the model is trained with these gradients. \n2. There are some methods that try to debias a model, like [Burns et al., \"Women also Snowboard: Overcoming Bias in Captioning Models,\" ECCV 2018], which also uses manual segmentation for debiasing the model. I think the proposed method shares the basic ideas with such methods, and they may be easily adapted to classification tasks. I would like to see how the proposed method differs from these methods. Experimentally comparing some of these methods (for the manual segmentation variant) or at least providing discussion on them could be beneficial if they are comparable. \n3. The paper could discuss the data distribution used for training TokenCut in the context of generalization of the method to severely different data distributions. 1. I would like to see some discussions and explanations on Weaknesses 1-3.\n1. For the unsupervised segmentation case, how many images are used for training with segmentation results? \n2. I'm also curious about the performance of TokenCut over the datasets used in the paper. I'm not very sure if this is a limitation, but the paper does not mention the distribution used for training the unsupervised segmentation model. At least segmentation may not work when the data distribution is completely different (e.g., X-ray images). I think this point is not discussed in the paper.",
" When models are trained without extra guidance, they often extract spurious features that are causally irrelevant to the target task. The paper proposes to regularise \"where the model looks at\" with human-annotated segmentation masks (or with automatically-generated foreground masks). More precisely, the proposed method regularises Vision Transformers (ViT) by fine-tuning a normally pre-trained model with the objective of encouraging foreground regions to have greater \"relevance map\" [8] values. By aligning the model's attention with the true foreground, the proposed method improves the models' general robustness, measured in terms of ImageNet-A/R/V2/Sketch/ObjNet and SI-Score. ## Strengths\n\nThe greatest strength of the paper is that it seems to solve an important problem in a relatively straightforward and intuitive fashion. Some might say it is unrealistic and costly to use human-annotated foreground masks. I strongly disagree with that viewpoint. The problem of spurious correlation and attention misalignment is not solvable without extra human guidance; the dataset itself does not contain sufficient guidance as to which feature a model must utilise to solve the problem. If it did, you won't see the consistent issue with spurious correlations in various models and datasets. Nonetheless, many researchers shun away from using extra annotations (especially if they are expensive like segmentations) and data points, for the fear that they are seen as impractical. I do not think so. I believe using such extra annotations can in fact be much cheaper in practice than providing the needed human guidance through complex loss and regularisation terms and hyperparameters (which require expensive model re-training to run HP search for every dataset and architecture).\n\nThe method does seem to improve the robustness across the board (Tab 1 and 2). There do exist some inconsistencies here and there, but they are not major and are expected for such large-scale evaluations. The internal mechanisms are also verified quite well through ablative studies (Tab 3 and 4). The extra results in Appendix are also quite impressive. Hyperparameter selection is described well in Section D. The proposed method has used the same set of hyperparameters across experiments, while the baselines use hyperparameters tuned for each setup. The sensitivity tests in Section G provide a nice insight that the method already improves the robustness quite a bit with only 100 additional segmentation masks (100 classes x 1 sample/class). \n\n## Weaknesses\n\nThe main weakness is the choice of the explanation method - relevance map. This reduces the intuitiveness of the method quite a bit in my opinion. Based on my reading of Section B (yes, please put this important information in the main text!), the explanation is generated by \n- computing gradient of ViT output w.r.t. attention map\n- pointwise multiplying the gradient map above with attention values\n- taking pointwise ReLU and head-wise averaging.\nThis is complicated. It is not intuitive as to \"what effect will regularising such a complicated output of a ViT have on its inner mechanisms\". For example, it would have been much easier to grasp the effect of directly regularising the attention map with the ground-truth foreground masks. You then understand that the ViT will be performing attention-weighted pooling across tokens where the weights are better aligned with the actual foreground features. 
\n\nIt would be great if the authors could explain in words or mathematical language how the regularisation of the relevance map would affect the model in question. \n\nAbout hyperparameter tuning - could the authors explain for which objective metric the parameter tuning is executed? I have probably missed it. I'm worried if the tuning is performed with respect to the robustness measures. If so, the robustness information has leaked to the HP tuning and the good performances here will less likely generalise to other data and models.\n\nOther minor comments:\n- What is the authors' opinion on applying the technique on CNNs? It is in principle possible to generate the relevance maps for them. Maybe CAM-like heatmaps are more suitable there though.\n- What stops us from performing the technique on training from scratch? Is it purely a computational limitation (single V100) ? It would be nicer to assure an improvement in robustness for the scratch-training scenario. That will also render the \"classification\" loss (Eq 4) unnecessary and reduce complexity.\n\n## Conclusion\n\nI weigh the strengths far more than the weaknesses. This is a nice paper that addresses an important problem with an intuitive approach. I hope the authors answer the remaining questions (and make minor revisions) to juice out the last bits of possible improvements. See weaknesses above. They are okay."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4
] | [
"Ebst6DGCMB3",
"NrdACp8KqaY",
"OQR48A2JnZ",
"U4SEw6We6U9",
"Xs2vHRAosS-",
"U4C0GC2XR5w",
"Kx1M1YXIIzL",
"ZTjsOyDf4fa",
"FsFA3lL8ei1",
"t2r6oVx4fI",
"NKfGWmTJ0jD",
"ZTjsOyDf4fa",
"l4I5IgaCUxp",
"sXv9iTSb2PM",
"U0e_CX4xBQ",
"nips_2022_upuYKQiyxa_",
"nips_2022_upuYKQiyxa_",
"nips_2022_upuYKQiyxa_",
"nips_2022_upuYKQiyxa_"
] |
nips_2022_eN2lQxjWL05 | Decision-Focused Learning without Decision-Making: Learning Locally Optimized Decision Losses | Decision-Focused Learning (DFL) is a paradigm for tailoring a predictive model to a downstream optimization task that uses its predictions in order to perform better \textit{on that specific task}. The main technical challenge associated with DFL is that it requires being able to differentiate through the optimization problem, which is difficult due to discontinuous solutions and other challenges. Past work has largely gotten around this issue by \textit{handcrafting} task-specific surrogates to the original optimization problem that provide informative gradients when differentiated through. However, the need to handcraft surrogates for each new task limits the usability of DFL. In addition, there are often no guarantees about the convexity of the resulting surrogates and, as a result, training a predictive model using them can lead to inferior local optima. In this paper, we do away with surrogates altogether and instead \textit{learn} loss functions that capture task-specific information. To the best of our knowledge, ours is the first approach that entirely replaces the optimization component of decision-focused learning with a loss that is automatically learned. Our approach (a) only requires access to a black-box oracle that can solve the optimization problem and is thus \textit{generalizable}, and (b) can be \textit{convex by construction} and so can be easily optimized over. We evaluate our approach on three resource allocation problems from the literature and find that our approach outperforms learning without taking into account task-structure in all three domains, and even hand-crafted surrogates from the literature. | Accept | This paper considers the problem of making decision-focused learning (DFL) more usable for both researchers and practitioners. It proposes a novel approach referred to as locally-optimized decision losses (LODL), which learns the parameters of surrogate intermediate losses to match the decision loss. Experimental results clearly demonstrate that the LODL approach is able to learn effective surrogates for the considered tasks. 
All the reviewers appreciated the LODL idea, but also raised a number of concerns. There was a lot of discussion, and the authors have addressed most of the concerns and also acknowledged some limitations pointed out by some reviewers. One expert reviewer who deeply engaged with the authors to both clarify and improve the paper was willing to strongly champion the paper. In their words: "It's a brilliant idea that will be foundational in the space and will be engaging and thought-provoking at the conference." A couple of reviewers raised a few points beyond the author-reviewer discussion, which the authors could not see/respond to. However, I think the overall strengths of the paper outweigh these concerns. 
Therefore, I recommend accepting the paper. I strongly encourage the authors to improve the paper in terms of clarity and exposition, and to add experimental results that reflect the discussion with the reviewers. | train | [
"2owQcx51Kvh",
"apTD57W2vRm",
"I1tcvBOC8rQ",
"Uk-KbbW9RB-",
"eizXYGZ1aaG",
"l8owqhJhodu",
"eLMqnbR6OR1",
"d10zsF4V6E",
"RBxoXKL1mJn",
"ef8CHnc-IyK",
"7XbTNM8splV",
"udvM9gpWgi",
"tZZHvz4aU3J",
"h87cvf_HOts",
"HPeR1pLd-kp",
"Xz2GmevwoTH",
"fXQUc2hpLJk",
"fTYNTAYphf7",
"h3kg09N_nYk",
"pxgQ3TLdWR",
"5CFbirRWnup",
"rcOMSCpWM-"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response! I appreciate that you decide to include scalability results in the camera-ready and promise to clarify the issues I mentioned. I suggest the authors can also discuss more scalability in the camera-ready. Incorporating information in the common response above will be helpful. ",
" I just also checked the code and verify `losses.py` is consistent with the description in this thread and not as presented in the paper. L170 subtracts the perturbed objectives from the optimal ones:\n\n```python\nobjectives = opt_objective - objectives\n```\n\nAnd then L216 computes the regression onto training instances of these:\n\n```python\npred = model(Yhats_train).flatten()\nloss = MSE(pred, objectives_train)\n```",
" Wow! In that case, I raise my score from a 4 to a 8 and advocate for the paper's acceptance. It's a brilliant idea that will be foundational in the space and will be engaging and thought-provoking at the conference. My original concerns were on 1) how to interpret the loss being optimized and 2) connecting the experimental results to other published results. From this discussion, I trust that the authors will update these significant writing issues in the paper for 1). In their first response in this thread, the authors state that the results are just normalized versions that are directly comparable to Table 1 of Wilder et al. This is an extremely reasonable and grounded experimental setting and in most cases the LODL results in a non-trivial improvement. \n\n## On the concerns from other reviewers\n\nI have read through all of the other reviewing details and do not strongly see a case for rejection from any of the reasons given from the other reviewers, and I am open to a discussion with them, of course. Here is my quick summary of them:\n\n### TJiM states:\n\n> The computational complexity of LODL is high, LODL is still an approximation-based method to differentiate the optimizer, and analysis/bounds\n\nI agree with the authors' rebuttal that the experimental results of their method are enough justification for acceptance. Better theoretical understanding is usually helpful, but in this case I do not think it is crucial.\n\n> The way to sample and train LODL around each training sample is questionable.\n\nI agree with this concern and hope the authors will emphasize this is a heuristic part and try to give intuition on what they have found to work and not work.\n\n### 4d1n states:\n\n> I am somewhat skeptical of its practicability considering the challenges in parameter tuning and the computational scalability of the proposed method.\n\nI think the experimental results justify the method\n\n### NGrH states:\n\n> The design of LODL is interesting, but I have a concern about the complexity of fitting the local loss function\n\nI think the authors have appropriately acknowledged and addressed thi",
" Nope! We ignored it in the text for notational simplicity, but all the experiments include the term.",
" Ok thank you! That makes so much more sense then. In that case, it seems like the additional $DL(y)$ term cannot be left out of the loss as it's much more of a bias term for the regression rather than a constant that can be ignored. Does this impact any of the experimental results?",
" It's a good point! We thought that $LODL(\\hat{y}) \\approx DL(\\hat{y})$ was a more intuitive description of what we were trying to do, and ignored the constant term '$DL(y)$' in favor of being 'morally correct'. However, this conversation has been extremely useful; we will change the text to make what we are doing more clear!",
" Thank you for all of the details! I will continue thinking about this throughout the review process. In case you see this in time, one last minor question: how does $DL(\\hat{y}) \\approx DL(y) - LODL(\\hat{y})$ show up in the loss at L191 (or anywhere else in the paper)? It does not appear this connection is made anywhere, and L188 almost seems to contradict this, stating that the goal is for $LODL(\\hat{y})\\approx DL(\\hat{y})$.",
" I'm not sure that I understand why they would be different? Consider the example from before; we have the points $y = (1, 2, 3)$ with decision loss $3$, $\\hat{y}_1 = (1, 3.1, 3)$ with decision loss $2$, and $\\hat{y}_2 = (3.1, 2, 3)$ with decision loss $1$ (and we want to maximize the decision loss for this problem). Then, the way that we create input-outputs for the LODL is:\n\n- *Input:* Take the difference between the prediction and the true label. So $\\text{input}(y) = (0, 0, 0)$, $\\text{input}(\\hat{y}_1) = (0, 1.1, 0)$, and $\\text{input}(\\hat{y}_2) = (2.1, 0, 0)$. This way the true label is always at $(0, 0, 0)$ (for any input).\n- *Output:* Take the difference between the optimal decision loss and the decision loss obtained by the prediction. So $\\text{output}(y) = 0$, $\\text{output}(\\hat{y}_1) = 1$, and $\\text{output}(\\hat{y}_2) = 2$. This way the optimal value is at 0 for any LODL, and we can also estimate the decision loss using the equation $DL(\\hat{y}) \\approx DL(y) - LODL(\\hat{y})$.\n\nAs a result, we enforce (by construction) that $\\mathbf{0}$ is the minima of every LODL, that at the minima its value is 0, and this value strictly increases (by convexity). Then, any predictions that are far away from the true labels are penalized strongly, making sure that the predictive model $M_\\theta$ makes predictions that are close to the true label.\n\nFor values around the true label, the values of the regression loss are similar to the values of the decision loss. For example, by fitting the WeightedMSE to the points $\\hat{y}_1$ and $\\hat{y}_2$ in the example above, you're guaranteed that $DL(\\hat{y}_1) = DL(y) - LODL(\\hat{y}_1)$ and $DL(\\hat{y}_2) = DL(y) - LODL(\\hat{y}_2)$. More generally, by fitting the regression loss using L191, you're guaranteed to get the $\\phi^*$ that best matches the $DL$ for the sampled points. This makes the regression loss and decision loss a (somewhat) apples-to-apples comparison.\n\nNow, the choice of loss function family *does* affect how this smoothing is done. For example, using the weighted 2-norm (WeightedMSE) penalizes deviations from the true label quadratically, while a weighted 1-norm would penalize it linearly. However, every supervised learning problem involves finding a model with good \"inductive bias\", and we think that this challenge is much more approachable to ML practitioners than the challenge of finding a good surrogate optimization problem.\n\nAs for curvature, that would be a great thing to measure, but when $z\\^\\*$ is piecewise constant (as in the example), it's not clear how you would define something like that? Given that we're interested in the cases where optimization problems are \"badly behaved\" and require surrogates, we do not include such analyses.",
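The input/output construction in this reply, written out as code for the toy top-1 selection example (illustrative only; `decision_loss` picks the item with the highest predicted value and scores it by its true value):

```python
import numpy as np

def decision_loss(y_hat, y):
    """Top-1 selection: choose the item with the highest predicted value,
    score the choice by its true value (higher is better here)."""
    return y[np.argmax(y_hat)]

y = np.array([1.0, 2.0, 3.0])
samples = [y, np.array([1.0, 3.1, 3.0]), np.array([3.1, 2.0, 3.0])]

inputs = [y_hat - y for y_hat in samples]  # the true label always maps to (0, 0, 0)
outputs = [decision_loss(y, y) - decision_loss(y_hat, y) for y_hat in samples]
print(outputs)  # [0.0, 1.0, 2.0], matching the example above
```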
" > In other words, we want to design a parameterized regression loss (LODL) that mimics the behavior of the decision loss\n\nThe misunderstanding I am raising is that the *value* of any of the parameterized regression losses (e.g. 0 at optimality) is going to be extremely different than the *value* of the decision loss. Then why is it reasonable that these values are matched when they are going to be so far off? Does this also create artifacts, for example instances that make the model prioritize instances with larger decision losses because those are farther away than the zero-centered regression loss?\n\nDoes it also make sense to ask if the *curvature* of the regression loss should match the *curvature* of the decision loss around the optimal value?",
" Ah! I think I see where we're seeing things differently. The goal in **predict-then-optimize** is to learn some predictive model $M\\_{\\theta\\^\\*}$ such that $\\theta\\^\\* = \\arg\\min_\\theta \\mathbb{E}\\_{(x, y) \\sim D} [f(z\\^\\*(M\\_\\theta(x)), y)]$ (L77). This expression is what defines what a 'good prediction' is in this setting, i.e., **we want to find a predictive model $M\\_{\\theta\\^\\*}$ that has a low decision loss**.\n\nNow, it is hard to directly optimize for this because $z\\^\\*$ is badly behaved, e.g., has zero-gradients. As a result, we come up with some $LODL\\_{\\phi\\^\\*}$ that (i) we *can* optimize over, and (ii) if we learn a predictive model $M\\_{\\theta\\^\\*_{LODL}}$ such that $\\theta\\^\\*\\_{LODL} = \\frac{1}{N} \\sum\\_{(x, y)} LODL\\_\\phi\\^\\* (M\\_\\theta (x), y)$, it approximately optimizes the objective above. **In other words, we want to design a parameterized regression loss (LODL) that mimics the behavior of the decision loss because, if we do, a model $M\\_{\\theta\\^\\*_{LODL}}$ that is trained on the LODL can be expected to perform well on the decision loss, and as a result, perform well on the optimization task.**\n\nDoes that make sense?",
" Thank you for the further clarifications! To be honest, I still do not fully understand but will continue thinking about it and will try to further discuss with the other reviewers during the discussion period. If I write the regression-based LODL parameterized by $\\phi$ as $\\mathcal{L}_\\phi=||M_\\theta(x)-y||_\\phi$, then the loss on L191 of the paper is finding some $\\phi$ that optimizes (for a single instance with some $y$ sampled around the true one):\n\n${\\rm argmin}_\\phi (||M_\\theta(x)-y||_\\phi - f(z^\\star(y), y))^2$,\n\nwhere $f(z^\\star(y), y)$ is the decision loss. I am sorry to repeat the same point, but I do not see how the value of the regression loss is comparable to the value of the decision loss and why we would want to match the value of a parameterized regression loss to the value of the decision loss. My understanding is that you are saying that I should instead see the LODL as a surrogate to the decision loss rather than a parameterized regression loss.",
" These are great questions! For the first one, *the 2-stage loss (MSE) is exactly $\\mathcal{L} = ||M\\_\\theta(x) - y||\\_2\\^2$ (up to a normalization factor)!* However, to see why this is bad in the context of the example, consider the 2 predictions $\\hat{y}\\_{good}=(1, 2, 4.1)$ and $\\hat{y}\\_{bad} = (1, 3.1, 3)$. Both have the *same error* according to MSE, but have *different values* of the decision loss ($DL(\\hat{y}\\_{good}) = 3$ and $DL(\\hat{y}\\_{bad}) = 2$). This is what we mean when we say that the 2-stage and decision losses are 'misaligned'. **As a result, despite the fact that it's easy to optimize and the gradients are well defined, we do not want to minimize $\\mathcal{L} = ||M\\_\\theta(x) - y||$.** Instead, in predict-then-optimize, we ideally want to optimize for the decision loss.\n\nHowever, for certain kinds of optimization problems (e.g., linear or discrete optimization) the decision loss cannot be directly optimized for via gradient descent (because of the zero-gradient issue). As a result, you have to come up with some approximation (i.e., *surrogates*) to the decision loss. While past work in DFL *handcrafts* these surrogates, we build task-specific loss functions (LODLs) that satisfy two criteria:\n\n1. They are convex-by-construction; we show that having non-convex surrogates leads to bad predictive models $M\\_\\theta$ (see Section 5.1).\n2. They are close to the Decision Loss (because that is what we want to optimize for).\n\n**As a result we construct our LODLs such that they are as close to the decision loss while still being convex (and, as a result, easy to optimize over), resulting in the formulation from L191.**\n\nDoes that answer your questions? (Sorry for the delay!)",
" (I tried to send this immediately after seeing the response but\nNeurIPS apparently doesn't let reviewers respond until the\ndiscussion period officially starts.)\n\nThank you for the clarifications! I am continuing to thoroughly look through all\nof the other reviewing details and have one clarification question on the example for now. My concern with L191 was also\non shaping a supervised/regression loss to the decision loss. In the\nexample you gave, we know the ground-truth\nparameters $y$, so one option would be to just regress\n$\\mathcal{L}=||M_\\theta(x)-y||$. **This regression loss seems easy to optimize and has\nwell-defined gradients.** My interpretation of the LODL loss on L191 is\nthat it parameterizes a regression loss like this and tries to make\nthe regression loss' value match the value of the decision loss in\nsome region around the optimal prediction --- **these seem like very\ndifferent quantities. This is the part I do not understand how to\ninterpret: L191 seems to be doing more than just smoothing the\noriginal decision loss because it tries to make the value of a\nparameterized regression loss match the decision loss.**",
" Thank you for your detailed comments and feedback! Unfortunately, it seems like we were unable to effectively communicate the importance of surrogates for DFL in our paper and, as a result, have been unable to convince you of the value of our contribution in removing the need for such surrogates. In answering your questions, we hope to address these misunderstandings:\n\n1. **Complexity**: We evaluate the complexity of our method and compare it to DFL in the common review above. However, the gist of the argument we make is that while our method can be expensive, DFL is *also* quite expensive. In addition, our cost can be amortized over different models whereas DFL cannot. Overall, this leads to comparable or better performance depending on the comparison regime.\n2. **An analysis of the gradients**: For linear/discrete optimization problems (one of the major focuses of this paper), the decisions (and as a result the decision loss) are piecewise constant in the parameters. As a result, the gradients are zero almost everywhere and are not useful for training predictive models (please refer to our response to Reviewer dkZJ where we show this more concretely using an example). This is the reason why we need to develop *surrogate* optimization problems to differentiate through when using DFL. In practice, both LODL and DFL surrogates attempt to \"smooth out\" the decision loss in such a way that it creates useful gradients; while this is typically done by hand in the DFL, in this paper we attempt to *automate this process* by reducing it to a supervised learning problem; this is our main contribution.\n3. **Sampling strategy**: You're right that the distributions of sampled predictions that we train our LODLs on, and the ones that they actually encounter while training the predictive model $M\\_\\theta$ are different. In fact, we explore this further in Section 5.3. However, there are a couple of reasons why we use this strategy:\n\n 1. *It works*: Our experiments show that we can indeed learn useful LODLs using this sampling method.\n 2. *It's cheap*: Sampling based on the actual predictions of the predictive model would be much more expensive, and likely wouldn't allow the kinds of amortization that make our method attractive from a complexity point of view.\n \n That being said, we are currently looking into how to better sample points so that they are more aligned with the actual distribution encountered while training $M\\_\\theta$.\n4. **Theoretical Analysis**: While we do not cannot ensure the similarity of the *gradients* of the Decision Loss, we do ensure that the optima of the Decision Loss and that of the LODL are the same (and are equal to the \"true parameters\" we're trying to learn). From a theoretical standpoint, this boils down to a \"Fisher Consistency\" result that we will include in the camera-ready.\n5. **LODL Advantages**: As we highlight in our common review, the main benefits of our approach are increased *usability* rather than increased performance. That being said, even if we *can* smoothly differentiate the decision with respect to the parameters (as in some convex optimization problems), we may not want to. This is because, even though the optimization problem is *convex*, the relationship between the decision loss (of the decision produced by solving the optimization problem) and the input parameters is *non-convex*, leading to possible local optima. This is why we highlight the importance of the *convexity of the surrogates/LODLs* in this paper. 
We also show in the experiments that these \"bad local minima\" associated with non-convex surrogates lead to poor performance in practice.\n\nWe hope this response helps clarify the nuances and challenges associated with this problem, as well as our contributions!",
" Thank you for your detailed and thoughtful comments! We will add the suggested citations and make the clarity-related changes to the camera-ready. There are additional experimental details in the appendix, but to answer your specific questions:\n\n1. **\"Selecting sampling variance\"**: This is a great point and one that we have *some* intuition for. The variance has to be high enough that it leads to actually changing the decision, but low enough that it's 'realistic'. Practically, this value (along with those of other hyperparameters) is chosen via grid search over ~5 log-scaled candidates. We will update our description of the experiments to include these details.\n2. **\"Negative weights for WeightedMSE\"**: This is a very astute observation! We do indeed make sure that the weights are non-negative (in fact, slightly positive) by clamping the weights to some minimum value (and initializing such that it is above that minimum value). For the 'Quadratic' and 'DirectedQuadratic' variants, we do something similar by adding some minimum amount of MSE Loss to our learned LODL. This has the effect of ensuring that the minimum eigenvalue of the learned $H$ matrix is strictly positive, i.e., you have a strictly positive curvature in all directions. We will make this more clear in the descriptions of the methods.\n3. **\"Scalability\"**: We broadly address this in the common response to the reviewers above. However, this gist is that while solving $KN$ problems *is* challenging, it's a problem that is also shared by DFL. For the results in Table 1 we use 5000 samples; You can see the impact of increasing the number of samples in Table 4 in the Appendix (it also has additional details about the experimental domains!). We are also working on more concrete scalability figures and will include them in the camera-ready.\n4. **\"Figure 2\"**: In this domain, the points are sampled from a uniform distribution between 0 and 1. As a result, most of the points are in the range $[-0.75, 0.75]$ and are thus downward trending. Naively trying to fit these points, as in the 2-stage setting, leads to a predictive model $M\\_\\theta$ that is also downward sloping. Given that $M_\\theta$ has a negative slope, the predictive model thinks that the leftmost points have the highest utility and picks those (even though they \\textit{actually} have the lowest utility) leading to bad outcomes. In contrast, the decision-aware models (LODL and DFL) know that having a negative slope leads to choosing bad outcomes and thus choose a positive slope (even though this does a worse job at fitting the points). The difference between the two, however, is the convexity of the surrogate; for some initializations of the predictive model, DFL is unable to determine that a negative slope is better and leads to local optima which have neither negative slope nor good model fit.",
" Thank you for your review and link to related work! Regarding your specific questions:\n\n* **\"Why is the equation on L191 a good idea?\"**: You're absolutely right that you will never be able to *perfectly* fit the Decision Loss, but you typically don't want to... This is easier to explain with an example:\n > Consider an $\\arg\\max$ optimization with 3 'true' parameters (A, B, C), e.g., ${y} = (1, 2, 3)$.\n > Now, if you predict these parameters perfectly, your 'decision' is ${z} =$ \"Pick C\", and your 'decision loss' is the true value of parameter C, $DL = 3$.\n > In fact, any prediction ${\\hat{y}}\\_{good} = (1 \\pm \\epsilon\\_1, 2 \\pm \\epsilon\\_2, 3 \\pm \\epsilon\\_2)$ for $\\epsilon\\_1, \\epsilon\\_2, \\epsilon\\_2 < 0.5$ will have the exact same decision, and hence exactly the same decision loss. As a result, the decision loss is constant in this region, leading to zero gradients.\n > While this isn't terrible for these set of predictions, consider the set of predictions ${\\hat{y}}\\_{bad} = (1 \\pm \\epsilon\\_1, 3 \\pm \\epsilon\\_2, 2 \\pm \\epsilon\\_2)$, where the decision is \"Pick B\" and the decision loss is 2. If a predictive model makes such a prediction, it cannot improve its predictions because of the zero-gradients. This is why we don't want to fit the decision loss perfectly.\n > \n > To understand what WeightedMSE does, consider the two predictions ${\\hat{y}}\\_{1} = (1, 3.1, 3)$ and ${\\hat{y}}\\_{2} = (3.1, 2, 3)$. The decision in each of these cases is \"Pick A\" and \"Pick B\" respectively, and the corresponding decision losses are 1 and 2. With just these 2 points and the true parameters, WeightedMSE would fit the points $0 \\rightarrow 0$ (because 0 error leads to a cost of 0) and $2.1 \\rightarrow 2$ (because adding 2.1 leads to a cost of $3 - 1 = 2$) for parameter A, and $0 \\rightarrow 0$ and $1.1 \\rightarrow 1$ for parameter B. *As a result, WeightedMSE captures the rough cost of getting each parameter wrong*\n\n In summary, we try to approximate the Decision Loss by a well-behaved convex function (so it's easy to optimize over) that aims to *roughly* capture the behaviour of the Decision Loss. This is the intuition behind the equation on L191.\n\n While the approach proposed in the paper you suggested is interesting, learning the full decision loss is as hard as learning the closed-form solution to a (potentially very complex) optimization problem in the Predict-Then-Optimize setting. This is why, in this paper, we focus on designing *Locally-Optimized* Decision Losses for each set of true parameters ${y}$ in the dataset.\n\n* **\"How do these results relate to those in the literature?\"**: This is a great point and one that we seem to have overlooked in our current description of the experiments. In our paper, 'DFL' corresponds to 'NN2-Decision' from Table 1 in their paper and '2-Stage' corresponds to 'NN2-2Stage'. Given that our aim is just to compare 2-Stage vs. DFL vs. LODL (with similarly structured predictive models), we simplify the structure of Table 1 from Wilder et. al., and re-normalize values so that they are comparable across domains to some extent to create Table 1 in our paper. With regards to our findings, we observe that the results from their paper are sensitive to the choices of different initializations and domain hyperparameters, but observe that (broadly) DFL outperforms 2-Stage. 
We will make these connections to past work more explicit in the camera-ready.\n\nAlso, we briefly discuss the limitations and possible future work in Section 6. While it's true that the *best possible LODL* is task-specific, we believe that:\n\n1. The phenomena that motivate the choices of the loss function families proposed in the paper (Section 4.1) are fairly general, and thus these LODLs that account for them will be able to outperform 2-Stage methods.\n2. There is merit to reducing a differentiable optimization problem to a supervised learning problem that more people have expertise in solving (see the common review for more details).",
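As promised above, a rough numerical sketch of the WeightedMSE intuition (entirely our construction; the least-squares fit and clamping are illustrative assumptions, not the authors' implementation): per-dimension weights are fit so that the weighted squared error approximately matches the decision regret of the two sampled predictions from the example.

```python
import numpy as np

y = np.array([1.0, 2.0, 3.0])                      # true parameters
samples = np.array([[1.0, 3.1, 3.0],               # flips the decision to B
                    [3.1, 2.0, 3.0]])              # flips the decision to A
# regret of each sampled prediction: best true value minus value of item picked
regret = y.max() - y[np.argmax(samples, axis=1)]   # -> [1., 2.]
sq_err = (samples - y) ** 2                        # per-dimension squared errors
# least-squares fit of weights w such that sq_err @ w ~= regret
w, *_ = np.linalg.lstsq(sq_err, regret, rcond=None)
w = np.clip(w, 1e-3, None)                         # clamp, as the authors describe
print(w)           # roughly [0.45, 0.83, 0.001]: errors on A and B are costed
print(sq_err @ w)  # ~[1., 2.]: the weighted MSE now tracks the decision regret
```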
" Thank you for your review! To respond to your questions:\n\n1. **Complexity:** We broadly address this concern in the common response. However, your comment about the increasing difficulty of fitting LODLs with an increase in the dimensionality of the predicted parameters is spot on (in our experiments $\\dim({y}) = 50$). We try to mitigate this issue using the localness assumptions to reduce dimensionality and having simple LODLs that are easier to fit. However, our approach is likely to be most effective for problems in which the number of parameters is low, but the cost of calling the optimization oracle is high.\n2. **Relationship between LODL Quality and Task Loss:** We attempt to answer a version of this question in Section 5.3. There, we show that there isn't a correlation between the quality of learned LODL (according to MAE) and the Task Loss (or Decision Quality) for the set of sampled points used to train the LODL (the Gaussian Neighborhood). However, we also show that there *is* a correlation between the quality in the `Empirical Neighbourhood' (the MAE on the actual predictions that LODL encounters during training) and the Task Loss. This suggests that better LODLs could lead to improved performance. However, the bottleneck is our sampling strategy. Doing better on the sampled points does not necessarily improve performance. Harnessing improved LODL quality would require sampling more \"realistic\" predicted parameters somehow.\n\n We do not compare gradients because the gradients for the Task Loss/Decision Loss can be zero almost everywhere (see the example in our response to reviewer dkZJ for more details), so matching them is generally not a good idea.",
" We thank the reviewers for their thoughtful feedback! We’ve noticed some common themes in the reviews and thought that we’d respond to them here.\n\nTo start off with, we'd like to reiterate the motivation behind this paper---to make DFL **more widely usable** by avoiding the need to invent surrogate optimization problems or having to differentiate through them. The reviews evaluate our approach in the context of the literature but do not acknowledge this aspect of our contribution. In a lot of real-world contexts, the question is not \"Which DFL method should I use?\", but rather \"Should I spend the time and effort to use DFL at all?\". In those cases, our approach reduces the somewhat niche problem of differentiable optimization to one of supervised learning, which many more people are familiar with. To that end, we entreat reviewers to look beyond the specific LODL implementation used in this paper and also review our approach as a framework that allows potential users to reap the benefits of DFL without having to master Differentiable Optimization.\n\nThat being said, **complexity/scalability** is an important component of usability and something that 3 out of the 4 reviewers highlight in their review. While we did not comment on this aspect of our approach in the paper, we'd like to take the opportunity to do so here. Roughly, the amount of time taken by each of the methods is:\n\n* 2-Stage = $\\Theta(T \\cdot N \\cdot T\\_M)$, where $T\\_M$ is the amount of time taken to run one forward and backwards pass through the model ${M\\_\\theta}$ for one optimization instance, $N$ is the number of optimization instances, and $T$ is the number of time-steps ${M\\_\\theta}$ is trained for.\n* DFL = $\\Theta(T \\cdot N \\cdot (T\\_M + T\\_O + T'\\_O))$, where $T_O$ is the time taken to solve the forward pass of one optimization instance and $T'\\_O$ is the time taken to compute the backward pass.\n* LODL = $\\Theta(K \\cdot N \\cdot T\\_O + N \\cdot (T \\cdot K \\cdot T\\_{LODL}) + T \\cdot N \\cdot T\\_M)$, where $K$ is the number of samples needed to train the LODL, and $T\\_{LODL}$ is the amount of time taken to run one forward and backwards pass through the LODL. The three terms correspond to (i) generating samples, (ii) training $N$ LODLs, and (iii) training $M\\_\\theta$ using the trained LODLs.\n\nIn practice, we find that $(T'\\_O > T\\_O) >> (T_M > T\\_{LODL})$. As a result, the difference in complexity of DFL and LODL is roughly $\\Theta(T \\cdot N \\cdot T\\_O)$ vs. $\\Theta(K \\cdot N \\cdot T\\_O + N \\cdot T \\cdot K \\cdot T\\_{LODL})$. While our approach *can* be more computationally expensive, there are a few reasons why this typically isn't the case:\n\n* **Simplicity of learning LODLs:** In the calculation above, we assume that LODLs are trained in the same way as $M\\_\\theta$. However, in practice, they can often be learned much faster, sometimes even in closed form (e.g. WeightedMSE and DirectedWeightedMSE), leading to an effective runtime of $\\Theta(K \\cdot N \\cdot T\\_O + N \\cdot K \\cdot T\\_{LODL}) \\approx \\Theta(K \\cdot N \\cdot T\\_O)$. Then, the difference between DFL and LODL boils down to $T$ vs. $K$, i.e., the number of time-steps needed to train $M\\_\\theta$ vs. the number of samples needed to train the LODL.\n* **Amortization:** This is the biggest advantage of our approach. 
We need only sample candidate predictions once, to then train *any number* of LODLs (e.g., WeightedMSE, DirectedQuadratic) without ever having to call an optimization oracle. Similarly, once the LODLs have been learned, you can train *any number* of predictive models $M\\_\\theta$ based on said LODLs. In contrast, DFL requires you to call the oracle *every time* you want to train a predictive model $M\\_\\theta$. In this sense, it is fairer to compare LODL to *meta-learning* methods, and it's easy to see that, for a large number of models to train, as is common in ML (e.g. hyperparameter/architecture search, trading-off performance vs. inference time vs. interpretability, etc.), DFL is much more expensive than our approach. *In the future, we imagine that datasets could be shipped with not only features and labels, but also LODLs associated with downstream tasks!*\n* **Parallelizability:** Finally, the sample generation process for LODL is completely parallelizable, resulting in an $\\Omega(T\\_O)$ lower-bound wall-clock complexity for our approach. In contrast, the calls to the optimization oracle in DFL are interleaved with the training of $M_\\theta$ and, as a result, cannot be parallelized with respect to $T$, resulting in an $\\Omega(T \\cdot T\\_O)$ wall-clock complexity.\n\nThere are also additional ways in which we can speed up LODL (discussed in Section 6). Taking all of these into account, we find that our method is actually competitive with, or better than, DFL in terms of scalability. A back-of-envelope sketch of these cost comparisons follows below.",
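As referenced above, a back-of-envelope rendering of these cost expressions, with purely illustrative constants (every number below is a made-up assumption, chosen only to show the shape of the comparison and the amortization effect):

```python
# Illustrative per-call costs; T_Op denotes the optimization backward pass.
T, N, K, M = 10_000, 100, 5_000, 20          # train steps, instances, samples, models
T_M, T_O, T_Op, T_L = 1.0, 50.0, 100.0, 0.1  # made-up unit costs

two_stage = T * N * T_M
dfl       = T * N * (T_M + T_O + T_Op)
# LODL with closed-form losses (e.g., WeightedMSE): sampling dominates setup.
lodl_setup = K * N * T_O + N * K * T_L       # one-off: sample, then fit N LODLs
lodl       = lodl_setup + T * N * T_M        # then train M_theta as in 2-stage

# Amortization over M predictive models: DFL pays the oracle every time,
# while the LODL setup cost is paid exactly once.
print(M * dfl)                       # ~3.0e9 in these made-up units
print(lodl_setup + M * T * N * T_M)  # ~4.5e7 in the same units
```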
" This paper proposes LODL as a surrogate to replace the original optimization loss while approximately providing gradient information of the original optimization loss. The key argument is that the gradient information may be difficult to obtain when solving complex optimization problems, such as non-convex optimization. In LODL, a surrogate loss is constructed based on variants of MSE/quadratic functions over a dataset sampled around each training sample, such that it approximates the original optimization loss and is easily differentiable. Pros:\n+ Constructing surrogates to obtain the gradients of the downstream optimization with respect to the predictions is important for decision-focused learning.\n\nCons:\n- The computational complexity of LODL is high. It's true that we reduce the dimensionality (Line 126), but the value of $N$ can be very large (which is typically the case in decision-focused learning where the number of training samples is large). Also, we have to learn a surrogate to approximate DL for each training sample. The total complexity will be way much higher than existing methods.\n\n- LODL is still an approximation-based method to differentiate the optimizer. There's no analytical/theoretical evidence to show that LODL can approximate the gradient of the original optimizer with respect to the predictions with sufficiently high accuracy. LODL is actually approximating DL, but this doesn't mean the gradients of DL with respect to the prediction $y$ is still well approximated by the gradient of LODL. One can easily construct counter examples in which two functions have similar values but dramatically different gradients in a neighborhood.\n\n- The way to sample $y$ to train LODL and learn $\\phi_n$ around each training sample $y_n$ is questionable. The samples are randomly generated based on a prior distribution in stage 1, but in stage 2 of learning the prediction model, the output --- predictions --- can follow a different distribution than the assumed prior distribution in stage 1. A direct consequence is that $\\phi_n$ may not be accurate for the new distribution of predictions in stage 2, raising further concerns with the use of pre-trained $\\phi_n$. This is also against the core idea of decision-focused learning where we want to learn the predictions by considering the entire decision pipeline as a single process.\n\n- Some analysis of LODL would be useful, e.g., sampling complexity and generalization bounds.\n\n- The authors are suggested to highlight the targeted scenario of LODL rather than general decision-focused learning. For example, in convex problems, we can efficiently and accurately differentiate the optimizer with respect to predictions, and so LODL is not needed or advantageous. - The computational complexity of LODL is high. It's true that we reduce the dimensionality (Line 126), but the value of $N$ can be very large (which is typically the case in decision-focused learning where the number of training samples is large). Also, we have to learn a surrogate to approximate DL for each training sample. The total complexity will be way much higher than existing methods.\n\n- LODL is still an approximation-based method to differentiate the optimizer. There's no analytical/theoretical evidence to show that LODL can approximate the gradient of the original optimizer with respect to the predictions with sufficiently high accuracy. 
LODL is actually approximating DL, but this doesn't mean the gradients of DL with respect to the prediction $y$ is still well approximated by the gradient of LODL. One can easily construct counter examples in which two functions have similar values but dramatically different gradients in a neighborhood.\n\n- The way to sample $y$ to train LODL and learn $\\phi_n$ around each training sample $y_n$ is questionable. The samples are randomly generated based on a prior distribution in stage 1, but in stage 2 of learning the prediction model, the output --- predictions --- can follow a different distribution than the assumed prior distribution in stage 1. A direct consequence is that $\\phi_n$ may not be accurate for the new distribution of predictions in stage 2, raising further concerns with the use of pre-trained $\\phi_n$. This is also against the core idea of decision-focused learning where we want to learn the predictions by considering the entire decision pipeline as a single process. Yes.",
" This paper proposes methods to approximate the decision-focused loss function that quantifies the quality of a prediction function by the quality of its induced decision. The proposed method considers several classes of locally parameterized loss functions at each label value in the sample. These loss functions are convex functions of the prediction. The parameters in the loss function for each label value in the sample are estimated by minimizing the loss approximation error at randomly sampled prediction values around the corresponding label truth. Originality: The idea of assigning different parameters to the loss function at different label values appears novel. This is an interesting idea to improve the expressive power of the loss function class that builds on relatively simple parametric losses (to guarantee convexity). \n\nQuality: The writing quality of this paper is overall very good and the proposed idea in this paper is well executed. But some important limitations of the proposed approach (e.g., computation and scalability) lack discussion. \n\nClarity: I found this paper is overall well-written and the high-level idea of this paper is easy to grasp. One exception is that the experiment section seems to lack some important details. \n\nSignificance: Although this paper proposes an interesting idea, I am somewhat skeptical of its practicability considering the challenges in parameter tuning and the computational scalability of the proposed method. 1. The proposed method learns the loss function parameters using samples generated by perturbing the label values (e.g., by adding gaussian noises). The magnitude of perturbation is obviously a very important tuning parameter. But how should this tuning parameter be selected? The selection of this tuning parameter is not discussed at all. Even the experiment section does not mention this. \n\n2. In the WeightedMSE and DirectedWeightedMSE losses, are the weights required to be nonnegative? Line 170 to 171 suggest that the weights are free parameters without any constraints. But negative weights along some dimensions mean that large prediction errors on those dimensions can actually decrease the loss, which doesn't seem desirable. I think negative weights may also encourage predictions that are far away from the labels in the training sets, where the loss function is least accurately estimated. This may also cause problems for the idea of using locally perturbed samples. \n\n3. The proposed approach needs to learn parameters for each label value separately. So if there are N training data points and we sample K local data points for each label value, then computing the DL loss function would require solving KN optimization problems. Isn't this very challenging for even moderately large-scale problems? Unfortunately, the computation limitation is not explicitly discussed in this paper and the problem sizes in the experiment section aren't clear about this either. For example, the perturbation process, the local sample size $K$ for the results in Table 1, and the running time information are not provided. \n\nSome other comments:\n- Is the $\\hat y^\\top Q\\hat y$ term in portfolio optimization a type? \n- The uses of the letters $n, N$ are confusing. From the equation below the line (77), $n$ refers to training data indices, and $N$ refers to training data sample size. However, in Section 5, $N$ seems to refer to the dimension of the uncertain variables, e.g., the number of resources or the number of stocks. 
\n- The processes of generating data based on real datasets in web advertising and portfolio optimization experiments are not clearly described. \n- Figure 2 seems interesting but I do not totally get the reason for the observed patterns. Can the authors provide some explanations? \n- The authors may hope to cite some additional literature on decision-focused learning, such as \n - Elmachtoub, Adam, Jason Cheuk Nam Liang, and Ryan McNellis. \"Decision trees for decision-making under the predict-then-optimize framework.\" International Conference on Machine Learning. PMLR, 2020.\n - Kallus, Nathan, and Xiaojie Mao. \"Stochastic optimization forests.\" Management Science (2022).\n - Hu, Yichun, Nathan Kallus, and Xiaojie Mao. \"Fast rates for contextual linear optimization.\" Management Science (2022). \n - Grigas, Paul, and Meng Qi. \"Integrated conditional estimation-optimization.\" arXiv preprint arXiv:2110.12351 (2021).\n - Qi, Meng, et al. \"A practical end-to-end inventory management model with deep learning.\" Available at SSRN 3737780 (2020). See the comments in the box above. ",
" This paper proposes a novel locally optimized decision loss (LODL) for decision focused learning (DFL). The LODL loss is a parameterized function trained with the true decision loss as the target. The LODL loss is a relatively faithful measure of the decision quality, and can provide informative gradient for the DFL training. The authors run the experiments on various optimization tasks including Linear Model, Web Advertising, and Portfolio Optimization, verifying the effectiveness of LODL. Also, the authors product some ablation studies and shows how well the LODL represents the decision quality. ### Originality \nThe idea of training a parameterized function to approximate the decision loss is interesting. Comparing with the MSE loss in 2-stage method, the LODL is more faithful to the decision quality. Comparing with the surrogate loss in DFL, the LODL can be easier to design. \n\n### Significance\nThe idea is novel for DFL. \n\n### Quality\nThe proposed method is compared with multiple baselines including 2-stage, DFL, and the LODL by NN on various resource allocation problems, which verifies the advantages of LODL for some problems.\n\n### Clarity\nThe paper is well-written and easy to follow. \n\n 1. The design of LODL is interesting, but I have a concern about the complexity of fitting the local loss function. The design fits one loss function for each sample by MSE. When the prediction space is large and needs a large number of samples, the training complexity would be very high. The authors may discuss more about the training complexity.\n\n2. The authors show the fitting quality of LODL by measuring the mean absolute error (MAE). From the results, we also know that the LODL may not easily fit some kinds of task loss for some inputs. It would be interesting to have some study about the relationship between the fitting quality, the gradient, and the task loss or different inputs. This may give more observations supporting the claims. The limitations of the design is clearly listed.",
" This paper considers learning losses for predictive models that\nare used in a downstream optimization task.\nSection 2 summarizes the basic setup where there is a predictive\nmodel $\\hat y=M_\\theta(x)$ that creates predictions used to\nparameterize an optimization process.\nThe baselines considered are 1) 2-stage learning, which trains\nthe predictive model with an intermediate loss, and\n2) decision-focused learning, which seeks to optimize a decision loss\nwith the predictive model, defined with the objective\nof the optimization problem.\nThe locally-optimized decision losses (LODL) proposed in this paper\nseek to learn the parameters of surrogate intermediate losses\nto match the decision loss. Strengths:\n+ The idea of parameterizing surrogate/intermediate losses makes a lot\n of sense in these two-stage settings, and the formulation considered\n here nicely shows the benefits of learning non-Euclidean regression losses.\n I can imagine learned losses to be a crucial,\n impactful, and long-lasting contribution in these settings.\n+ The experimental results clearly demonstrate that the LODL is\n able to learn a reasonable surrogate for the tasks considered\n\nWeaknesses:\n+ If we take the LODL to be the MSE loss parameterized by weights on\n each dimension, I do not understand the objective on L191:\n my interpretation is that it tries to make the weighted\n MSE loss match the decision loss around the optimal prediction\n by changing the weights.\n Since the MSE loss and decision loss are very different quantities,\n it like this objective will not be possible to optimize.\n+ Even though the paper experimentally shows that LODL works, the\n results are difficult to contextualize and compare to related research.\n For example, the web advertising task starting\n at L222 takes one of the settings from\n [Wilder et al., (2019)](https://ojs.aaai.org/index.php/AAAI/article/view/3982),\n but does not provide or present the results in a way that is\n comparable to the results in that paper. The submission would be\n significantly easier to evaluate in comparison to this work\n if it reproduces exactly Table 1 of Wilder et al., (2019) and\n adds additional lines showing how well LODL performs in\n comparison.\n\nRelated work:\n+ It could be interesting to connect the work to learned losses\n used in meta-learning, such as in\n [Bechtle et al. (2021)](https://arxiv.org/pdf/1906.05374.pdf),\n which takes a much more black-box perspective to learning a\n latent loss function. I would be very willing to re-evaluate my assessment after a discussion\naround the following questions on weaknesses I've listed above.\n\n1. Can you clarify how the LODL loss on line 191 should be interpreted? Did you consider alternatives to this?\n2. Can you comment on how the experimental results connect to\n established experimental results in DFL settings? The paper does not clearly discuss limitations in a dedicated section.\nWhile parameterizing and learning an intermediate loss seems appealing,\nit seems limited by needing to specify and learn the right\nparameterization."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6,
5,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
3,
4
] | [
"HPeR1pLd-kp",
"I1tcvBOC8rQ",
"Uk-KbbW9RB-",
"eizXYGZ1aaG",
"l8owqhJhodu",
"eLMqnbR6OR1",
"d10zsF4V6E",
"RBxoXKL1mJn",
"ef8CHnc-IyK",
"7XbTNM8splV",
"udvM9gpWgi",
"tZZHvz4aU3J",
"Xz2GmevwoTH",
"h3kg09N_nYk",
"pxgQ3TLdWR",
"rcOMSCpWM-",
"5CFbirRWnup",
"nips_2022_eN2lQxjWL05",
"nips_2022_eN2lQxjWL05",
"nips_2022_eN2lQxjWL05",
"nips_2022_eN2lQxjWL05",
"nips_2022_eN2lQxjWL05"
] |
nips_2022_RuNhbvX9o9S | Learning General World Models in a Handful of Reward-Free Deployments | Building generally capable agents is a grand challenge for deep reinforcement learning (RL). To approach this challenge practically, we outline two key desiderata: 1) to facilitate generalization, exploration should be task agnostic; 2) to facilitate scalability, exploration policies should collect large quantities of data without costly centralized retraining. Combining these two properties, we introduce the reward-free deployment efficiency setting, a new paradigm for RL research. We then present CASCADE, a novel approach for self-supervised exploration in this new setting. CASCADE seeks to learn a world model by collecting data with a population of agents, using an information theoretic objective inspired by Bayesian Active Learning. CASCADE achieves this by specifically maximizing the diversity of trajectories sampled by the population through a novel cascading objective. We provide theoretical intuition for CASCADE which we show in a tabular setting improves upon naïve approaches that do not account for population diversity. We then demonstrate that CASCADE collects diverse task-agnostic datasets and learns agents that generalize zero-shot to novel, unseen downstream tasks on Atari, MiniGrid, Crafter and the DM Control Suite. Code and videos are available at https://ycxuyingchen.github.io/cascade/ | Accept | This paper proposes a method to learn world models without rewards, using a collection of agents that explore an environment. The key idea is to maximize diversity between the trajectories collected by the agents to obtain a good world model, with an emphasis on being as efficient as possible. The authors present some theoretical justification for using a population of agents and their empirical results on several datasets provide a good demonstration of the method. The reviewers all agree this is an interesting and important setting and the author response significantly improves the paper on aspects of clarity and empirical results, based on the reviewer concerns. Overall, I believe this work provides interesting ideas and will encourage more work in this direction in the future. I encourage the authors to revise their paper taking the reviewer suggestions into account and add in the new experiments to make it stronger. | train | [
"ue6iCgBkEU",
"WCJsWhlmMPi",
"swLWDA0UhV",
"UA1K40tRCL",
"ITmuKmq8RQI",
"45CkpvW9Qsn",
"BpbH-UHuNwu",
"x_qN4mPlRvN",
"RP74IK81ykJ",
"M-TanDVG7f",
"ZAVxBnnyv0G",
"blF009OGmuQS",
"nBarta-SX0K",
"AW4K0gjkyRi",
"Y-ChXaRtMl0h",
"5D4Mm1Z8SDH",
"_wgHiTE2T9",
"o9FsPC3xnh",
"qVLmXF8jjjo",
"bB3u_HN3xZP",
"ASx3SgtsbG",
"jJTTnPmtkYE",
"jikxKFtNzu",
"8EaGV2Z9uk",
"Sj1UKx7Mx9y",
"96JD7H-SwU1"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Hi Reviewer zCLK,\n\nThank you for coming back, and for increasing your score to a \"weak accept\". It seems your only remaining concern is regarding the use of rewarding episodes as a metric for evaluating exploration. We want to reiterate that it is just being used as a proxy for depth of exploration, which we combine with state coverage to show breadth of exploration. We will expand on this in the additional page for our CRC, alongside re-wording sections of the intro and some Crafter analysis. \n\nHowever, given the confusion (which we do think is reasonable) we actually removed the rewarding episodes plot from the main body for Montezuma's Revenge. Instead, we show the zero-shot performance for our Atari results. If you have a spare moment then please check out the revised paper to see how it looks. \n\nIn light of this, we hope that you feel your are in a position to consider supporting our paper for acceptance with a score of 7+.\n\nThank you!",
" I want to thank the authors for their great use of the rebuttal. My main concerns are justified. I am still a bit skeptical about the rewarding episodes; however, it might be just some personal preference. \nI have raised my rating by 1 point. \n",
" Thank you for taking the time to read our response, and we are glad to have cleared up several details! We hope that we can further elucidate on the points you mention:\n\n1. Both of these objectives (ImagDiva and InfoGain) can be derived from the principle of maximizing information gain, a technique that is much older than Plan2Explore and has its origins in the classical Bayesian Experiment Design Literature. It may aid the reviewer to know that our ensemble information objective can be (roughly speaking) thought of as a reinforcement learning version of the batch acquisition objective of BatchBALD (See equation [1]). The ImagDiv term is derived using an information gain objective over trajectory embeddings (these take into account the whole trajectory) and also take into account the $B$ parallel deployment agents. This is derived from trying to maximize an objective of the form $I( \\Phi(\\tau_1), \\cdots, \\Phi(\\tau_B) \\parallel W)$ where $W$ is the posterior over models and $\\prod_{i=1}^B \\Phi(\\tau_i)$ is the product distribution over embeddings where $\\tau_i$ is a trajectory sampled from policy $\\pi_i$. The InfoGain objective on the other hand is the same term from Plan2Explore, and it is derived for each individual agent; in a population context this means that per-state they would take different actions, but this would not necessarily induce different trajectories. Moreover, the InfoGain objective is derived by maximizing an \\textbf{expected} mutual information objective $\\mathbb{E}_{(s,a) \\sim \\pi}[ I( h’ \\parallel W | s,a ) ] $ where $h’$ is the image embedding of the successor state to $s,a$ under policy $\\pi$. These two terms are **not** the same. To use an example, consider an MDP where there are 3 actions being A, B and C, of which A and B lead to the same uncertain next state, and assume one agent in the population has already taken action A. Plan2Explore would prefer action B, as that generally reduces model uncertainty, whereas ImagDiv would prefer action C, as that action explicitly leads to a different state compared to previous agents, inducing “deeper” exploration. Concretely, ImagDiv is an explicit ensemble diversity inducing objective, while InfoGain is a per-state expectation of mutual information terms. However, we agree with the reviewer that the difference may not be immediately obvious, so also propose to rename InfoGain as LocalInfoGain in the CRC to make the difference clearer.\n2. We wholeheartedly agree with the reviewer. That is precisely why we called our paper ‘reward free’. We have not claimed to have invented reward free deployments, we have in fact brought this language into the applied community (recognizing this has been extensively studied in theoretical works) while also introducing a version of reward free exploration that has (to our knowledge) not been studied in theory, where we have a specific number (and a priori known) of reward free deployments per round $B$. We also think that our solution based on information gain would be an exciting avenue for theoretical researchers to delve into. The observation behind our work (and many others in parallel Bayesian optimization in supervised learning) that diversity aids in parallel exploration has not been fully characterized theoretically. We hope that our work can start this conversation.\n\nAgain, we thank the reviewer for their time, and hope that our additional clarifications have helped address any major confusion that remains. 
If this is the case, then we kindly hope the reviewer provides additional support for our paper by increasing their score.",
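A toy rendering of the three-action example above (the numbers and scoring functions are our illustrative assumptions, not the paper's actual objectives): a per-state disagreement score cannot tell that another agent already covers the uncertain state, whereas a trajectory-diversity score can.

```python
disagreement = {"A": 1.0, "B": 1.0, "C": 0.4}  # ensemble uncertainty per action
next_state   = {"A": "s1", "B": "s1", "C": "s2"}
covered      = {"s1"}                          # a previous agent already took A

def local_info_gain(a):
    # per-state uncertainty: ignores what the rest of the population does
    return disagreement[a]

def imag_div(a):
    # trajectory diversity: rewards reaching states no other agent covers
    return disagreement[a] + 10.0 * (next_state[a] not in covered)

print(max("ABC", key=local_info_gain))  # 'A' (ties with 'B'): revisits s1 anyway
print(max("ABC", key=imag_div))         # 'C': deeper, population-aware exploration
```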
" Thank you for this! We definitely agree, now that we have much stronger empirical results the intro is likely the place needing the most work if our paper is going to achieve the maximum possible impact. We will for sure take this on board and focus on improving it for the camera ready, which should be possible given we get an additional page.\n\nRegarding our experiments, we want to flag that you mentioned *\"I want to see how CASCADE works in a more realistic environment or image-input tasks\"*, but all of the environments in this paper are image input aside from Minigrid, which uses a similar type of observation (just not rgb). Crafter/DMC/Atari are all from pixels, and for this reason our method is based on DreamerV2. These are reasonably large-scale experiments!\n\nWith this in mind, would you feel your are in a position to give our paper an \"accept\", rather than \"weak accept\"? \n\nThank you!",
" I am sorry for the confusion. Maybe I should give you a clearer explanation for my rating. The point \"5\" is kind of \"accept\" rather than \"reject\".\n\nAs I mentioned, the core idea of your work is attractive. It does focus on one critical problem of the RL community. \n\nHowever, a good idea is not equal to a good story. There may exist some misunderstanding of my comments. That is, the \"well-structured\" and \"easy-to-read\" actually are for the technical parts (sec.3) of the paper. I believe this paper will benefit a lot from re-writing the abstract/introduction parts. \n\nMeanwhile, I note that you have greatly upgraded the analysis parts (sec.4) and added some experiments in the rebuttal. However, the first added experiments were not enough for me to raise the score, and I didn't ask for more complex environments due to the limited time. I want to see how CASCADE works in a more realistic environment or image-input tasks, and that's one of the most direct ways where the \"world model\" shows its true value and meaning to the RL community. \n\nConsidering you have conducted a lot of work in the rebuttal, I can raise my score by 1. BUT remember to upgrade the writing and tell a better story. If possible, add some more realistic experiments to show the advantages. Good luck.",
" Hi Reviewer zCLK,\n\nWe understand reviewer load is high and we thank you again for your time!\n\nWe just wanted to flag that we have made significant improvements to our paper, with new experiments and additional clarifications (based partly on your specific recommendations). Other reviewers have now raised their scores, and we were hoping you might consider doing the same, given you were originally \"borderline accept\" and our paper is now much stronger.\n\nThank you!",
" Hi all,\n\nBefore the discussion period comes to a close, we would like to ask for a few more moments of your time. Based on the request of Reviewer Yjhj, we ran further experiments on a fourth Atari game, Freeway, and the results are now in the paper. As you can see below, CASCADE very strong here, matching human performance, and performing nearly as well as methods that can solve this task (such as Ape-X, which gets 34.0):\n\n| Method | Random | P2E | PP2E | CASCADE |\n|:--------------|:-------------:|:-------------|:-----------|:------------|\n| IQM | 1.06 | 6.36 | 17.36 | 29.22 | \n| (95% CI) | ( 0.47, 1.84) | (3.66, 9.18) | (14.01, 20.67) | (28.88, 29.54) | \n\nIn the camera ready version we will have an additional page, so can further elaborate on these results and provide some additional analysis for Crafter. We can also use this space for some additional explanation of our method and to add more citations.\n\nWe are pleased to see there have been some upgrades from reviewers, with all now in favor of acceptance. It would be great if the reviewers could provide further support given the strength of our new results (3 new Atari games + Crafter + more DMC deployments) and additional clarifications made in the paper (in red).\n\nThank you and have a great day!\n",
" We are excited to share that we were able to get five seeds for Freeway faster than expected. The performance is very strong, as you can in the paper and pasted below:\n\n| Method | Random | P2E | PP2E | CASCADE |\n|:--------------|:-------------:|:-------------|:-----------|:------------|\n| IQM | 1.06 | 6.36 | 17.36 | 29.22 | \n| (95% CI) | ( 0.47, 1.84) | (3.66, 9.18) | (14.01, 20.67) | (28.88, 29.54) | \n\nBriefly, we match human performance *zero-shot*, and perform nearly as well as methods that can solve this task (such as Ape-X, which gets 34.0).\n\nWe want to extend our deepest thanks for all of the suggestions thus far: your feedback has made our work significantly stronger and we are now confident it would be a great contribution for NeurIPS. We hope that you feel your are in a position to consider supporting our paper for acceptance with a score of 7+, especially in light of the strong empirical results in the extended experiments you called for (thanks again!).\n\nThank you!\n",
" I would appreciate the authors for their detailed explanations. I would want to address two issues in my follow-up:\n\n1. With the additional explanations, it seems like the major difference between the two terms, on a very high level, is \"trajectory-level\" vs. \"point-wise\" uncertainty (according to my understanding). However, it is still very unclear to me in the current form of the paper for the intuition of these two terms. My major confusion comes from: the two terms seem to both be surrogates for the same objective term (the info gain), and both of the derivations check out to me. Thus maybe I am missing something, but the current presentation looks still unclear to me which is the source of the difference between the two final surrogates.\n\n2. Regarding the citation: I appreciate the authors' effort in being as inclusive as possible. However, my point of bringing up the theory paper is merely to show that reward-free exploration is a long-standing problem in theory and thus it is not a very novel setting so to speak. The papers I mentioned in my reviews are just very few selective papers on this topic and are just for the purpose of justifying points. I would also like to mention that, in my humble personal perspective, we should recognize the connections between theory and practical papers that study the same broad problems.\n\nIn summary, I think the authors are making great efforts in improving their presentation on the paper. The paper has no clear technical issue and the empirical results are impressive. I would like to improve my rating by 1 but refrain from further improvement since my major confusion is still not completely address. ",
" Thank you for taking the time to read our response and for acknowledging that we addressed some of your concerns.\n\nRegarding Freeway, given compute constraints we only had one seed when the rebuttal period ended (vs. five for others) so did not manage to include it. We will now resume these experiments and maybe get a result before the discussion period ends (August 9th 1pm PT). Regardless of the outcome we will include the results in the CRC.\n\nWe would appreciate it if you could help us understand what, if anything, you feel is holding you back from full support of the paper (i.e. a score of 6/7) given the concerns we've addressed.\n\nThank you!",
" Thanks for your response. My concerns about toy environments were addressed, and the additional experiments on Atari and Crafter improve the overall results. I updated my score to 5. \n\nBy the way, did you try Freeway, which is often considered as a significant exploration problem with sparse rewards? ",
" We appreciate you taking the time to read our response and for commenting that we have addressed all of your concerns. We have to say, it is very confusing to see such positive comments without an increased score, especially given we are currently only at a 5.\n\nIn particular, you note we have an “attractive solution for one critical problem”, with the only concern remaining being “proofreading”, yet in the initial review the first strength was “This paper is well structured and easy to read” and we will have an additional page in the camera ready and can easily spend time making the paper more coherent if given the chance.\n\nPlease can you either increase or provide us with a genuine reason why the paper does not warrant accepting?",
" Thank you for efforts in rebuttal. I think the authors have addressed my main concerns in the response. I agree that this paper proposes an attractive solution for one critical problem in the community, and the empirical evaluation proves it works. However, although you have upgraded some parts of the paper, I believe that more proofreading can better tell your story. I tend to keep my score.",
" Having said this, we empirically find that there is still a benefit to also incorporating the InfoGain objective, and as such introduce a trade-off parameter that controls how much we favor the more ‘local’ uncertainty reduction exploration that InfoGain encourages, and the more ‘global’ uncertainty reduction exploration that ImagDiv encourages.\n\nWe have tried to make this clearer in the updated manuscript, but with the benefit of another page for the camera-ready, we will definitely include these finer details.\n\n### Aren’t these toy experiments?\n\nIn short, no! The experiments we include are in all environments still commonly used by SoTA deep RL papers. In particular:\n* MiniGrid is partially observable, procedurally generated and has sparse rewards. It is regularly used for SoTA exploration algorithms, as can be seen [here](https://github.com/Farama-Foundation/gym-minigrid).\n* Montezuma’s Revenge has been used for deep RL exploration methods for the past 5-6 years. It remains a challenge for many methods, even with rewards. By operating in the reward-free domain, this becomes even more challenging (as the sparse rewards still provide indicators of progress).\n* Walker was recently proposed as a benchmark for unsupervised RL in URLB. This came out at NeurIPS 2021, less than a year ago. We also use three different settings with differing offline dataset initializations, providing an interesting use-case showing our method can be subsequently deployed with any initial data and improve generality. This is clearly not a toy benchmark!\n\nFurther, we consider a *more challenging* setting for all of these benchmarks, using just a handful of deployments (rather than the fully online setting in most prior work). Further still, we use the current SoTA world model, DreamerV2. Finally, these experiments are distinct, and crucially without changing *any implementation details*, our method works well in all settings.\n\n### New experiments \n\nEven though we feel our existing experiments are thorough and sufficient, we think some ideas from the reviewers are worth exploring to provide additional evidence for the efficacy of CASCADE. We therefore present new results, which are all included in our revised manuscript. In particular, we focus on the Crafter environment, highlighted by both reviewers zB57 and Yjhj. As alluded to, Crafter is a highly challenging environment for reward-free RL algorithms, and P2E is the state of the art agent in the online setting here. \n\nWe have been able to run 10 seeds for each method for 20 deployments using a deployment size of 50k and a population size of 10. This represents 1M steps, as included in the original Crafter paper. We use the evaluation protocol from the paper (geomean from all training data) to produce the “Crafter Score” as follows:\n\n| Method | Random | P2E | PP2E | CASCADE |\n|:-------------------:|:--------:|:--------:|:------:|:------------:|\n| IQM | 1.54 | 2.03 | 2.03 | 2.07 | \n| (sem) | (0.01) | (0.03) | (0.02) | (0.02) | \n\nIndeed, we are pleased to report that we do see gains here for CASCADE. Interestingly, we do not see any gains from PP2E, indicating that random initialization provides insufficient diversity. Meanwhile, CASCADE does see a small improvement, which we believe could be extended in future work. In particular, focussing on improved *behavioral representations*, which is an active area of work in the QD, multi-agent and deep RL communities. 
\n\n#### **DMC, more deployments**\n\nWe wanted to see how our methods compare on DMC with more data. In the original paper we showed the performance after 1 and 2 deployments. We have now expanded these results to *15 deployments*. Once again we see consistent gains for CASCADE vs. the baselines. See the paper for the new expanded results.\n\n#### **New Atari Experiments**\n\nBased on the feedback from Reviewer Yjhj we have been able to run two additional Atari environments. We note that these results are only for five seeds and not extensively tuned, which should be expected given the fast turnaround. Nonetheless, we once again see that CASCADE outperforms other methods in these domains. We have revamped our paper to emphasize the zero-shot performance, since other reviewers found the training metrics misleading. These will be in the Appendix in the camera ready, for completeness. We now have a new Figure 4 that includes *three Atari games* and Crafter, vs. previously just one Atari game. This represents a significantly more exhaustive set of experiments, which alone we believe is sufficient for improved review scores. As we discuss in the updated manuscript, we see that CASCADE is statistically significantly better than the baselines in these new environments, and furthermore consistently displays strong performance across all environments.\n",
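As flagged in the Crafter discussion above, here is a short sketch of the geometric-mean "Crafter Score", as we understand the protocol from the original Crafter paper (the shifted-geometric-mean formula below is our paraphrase of that protocol, and the success rates are illustrative):

```python
import numpy as np

def crafter_score(success_rates_pct):
    # geometric-mean aggregation over per-achievement success rates (in %),
    # shifted by 1 so that 0% rates do not zero out the whole score
    s = np.asarray(success_rates_pct, dtype=float)
    return np.exp(np.mean(np.log(1.0 + s))) - 1.0

print(crafter_score([20.0, 5.0, 0.0, 1.0]))  # illustrative rates only
```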
" We thank the reviewers for taking the time to provide us with thorough feedback. Overall we believe the reviews to be constructive and we have made every effort to address all concerns, including running a large variety of additional experiments and providing clarifications. **We have updated our paper to include these changes and believe it led to a much stronger version of the paper**. Given that our initial review scores were generally borderline, we hope to see the reviewers’ support for our paper to increase (to accept) or to receive additional feedback on how to further improve the paper. \n\nPlease see below some highlights of common themes in the responses, changes to the paper and new experiments. \n\n### How come we used B policies? Is this practical?\n\nWe assume this refers to the population of B agents. One area of confusion (which we hope we have clarified in the updated paper) is regarding the fact that we have an exploration policy consisting of B behaviors. One way to view this is that it is *one policy*, but can deploy different behaviors at different times. Concretely, consider it as a single policy with multiple behaviors “pre-loaded”. When might this be practical? Here are a few examples:\n* If you have a fleet of robots, for example a room full of robot arms, each arm could index a different behavior from the exploration policy. This is a common use-case in robotics, where data collection on a single robot is often impractical [46].\n* If you have a single robot but access for a few hours, each time there is a reset the behavior could switch.\n* If you are hoping to collect data with parallel compute, it may be possible to collect A episodes in parallel with A >> B. We would then collect A/B episodes with each of the B behaviors. It does not make sense to do all of this collection with the *same* behavior policy as most of the experience will be a duplicate. For instance, with a single GPU in a parallel simulator like Brax, A = 2000, so if we have a population of B=20 we are still collecting 100 episodes with each behavior. This is much better than having 2000 episodes with the *same* behavior which leads to a homogenous dataset that will not aid the generalization of our world model. Our work is the first step towards being able to appropriately leverage this type of simulator for learning world models.\n\nIn all of these cases we do not want to wait minutes to hours to retrain new behaviors. So having access to a diverse set of pre-trained behaviors makes it possible to collect a rich dataset in a single “deployment”.\n\n### How are ImagDiv and InfoGain related and how are they distinct?\n\nWe see that several reviewers have expressed a lack of clarity regarding these two terms, and fully accept our failure to communicate this effectively in the original manuscript. In short, InfoGain is the epistemic one-step dynamics uncertainty maximization term from Plan2Explore, and ImagDiv is a new novelty seeking term whose formulation is partly *inspired* by that of Plan2Explore, but is distinct by maximizing overall *trajectory* diversity, not maximizing one-step entropies.\n\nTo better understand this, consider the original InfoGain term in Plan2Explore (i.e., Eq 5 in their work). Concretely, this term relies on maximizing mutual information between the next latent state and parameters *conditioned on the current state and action*. 
In comparison, our ImagDiv formulation *does not* condition on the current state and action, and instead looks at the entire trajectory, which it does through the embedding function $\\Phi$.\n\nWhy is it important that our information-theoretic formulation is not per-state? First, defining a per-state information gain objective in the way Plan2Explore does is suboptimal for deep exploration. This is because such objectives encourage local exploration, which reduces the uncertainty at a given state, but may not result in the best reduction in uncertainty in discovering the structure of the whole MDP, particularly in the batch deployment setting we study in this work.\n\nTo further understand this, note that generalizing the InfoGain objective in Plan2Explore would reduce to stitching together per-state entropy-maximizing policies that can maximize per-state epistemic coverage, but may fail at maximizing the *global structure* coverage of the MDP. For example, an ensemble of policies defined in a tree MDP could avoid each other on a per-state basis, but may not maximize the probability of all policies ending up at different leaves, which is crucial for ensuring that the policies, taken together, achieve diversity in the environment. To this latter point, we exploit the submodularity of the population formulation to ensure this ‘deep’ diversity in the trajectories.",
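To make the InfoGain/ImagDiv contrast above concrete, here is a minimal illustrative sketch. This is our own runnable pseudocode, not the authors' implementation: the function names, array shapes, and the greedy selection scheme are all assumptions.

```python
import numpy as np

def info_gain_reward(ensemble_preds):
    """Per-step InfoGain proxy (Plan2Explore-style): reward one
    (state, action) pair by the disagreement (variance) across an
    ensemble's one-step latent predictions.
    ensemble_preds: array of shape (n_models, latent_dim)."""
    return ensemble_preds.var(axis=0).mean()

def imag_div_score(candidate_emb, selected_embs):
    """Trajectory-level ImagDiv proxy: score a candidate behavior by the
    distance of its imagined-trajectory embedding to the embeddings of
    behaviors already chosen, so redundant behaviors score low."""
    if len(selected_embs) == 0:
        return np.inf  # the first behavior is unconstrained
    return min(np.linalg.norm(candidate_emb - e) for e in selected_embs)

def select_diverse_population(candidate_embs, B):
    """Greedily assemble B mutually diverse behaviors from candidates."""
    selected, selected_embs = [], []
    for _ in range(B):
        scores = [-np.inf if i in selected
                  else imag_div_score(e, selected_embs)
                  for i, e in enumerate(candidate_embs)]
        best = int(np.argmax(scores))
        selected.append(best)
        selected_embs.append(candidate_embs[best])
    return selected
```

The max-min selection has diminishing returns as behaviors are added, which is the submodularity-flavored property alluded to above; per-step disagreement, by contrast, is blind to how the population's trajectories relate to one another.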
" ### Q4: Clarifications\nSee answer to Q3. In our experiments, $\\phi$ is the last hidden state of a trajectory, but is generally a transformation that converts trajectories into embeddings. The expectation is over trajectories in the RSSM world model itself, which includes marginalizing over the stochastic latent variable.\n\n### Q5: PP2E10 performance in Fig3\n\nIn practice what makes an environment “easy” or “hard” for a given method is an active area of research [4], which is poorly understood in the deep RL community. In this case, both environments are procedurally generated so while FourRooms looks simple (and is often presented as a static grid world in theory papers) it is actually a non-trivial problem for deep RL agents. Note also the observation is pixel-like and it is partially observable and procedurally generated, so the agent may overfit to specific layouts explored better at train time.\n\n### Q6: Offline Dataset\n\nThis is a great question since we could certainly have clarified things further. The interesting thing here is we want to show that given *any* offline dataset, we can do subsequent deployments and build a more general model. The specific datasets used are challenging because generally models do not learn well from expert data in particular, since it is a narrow distribution, thus exploring online is paramount to ensuring adequate coverage in the collected data. Furthermore, the “random” datasets represents training from scratch with no prior knowledge, since any of these approaches could deploy a random policy for their first deployment. We have clarified this in the paper.\n\n### Minor Issues\n\nCorrect, the algorithm will never terminate. In theory it is open-ended the same as any other RL method. However, we understand this is in contrast to some of the motivational examples (e.g. deployment on robots) so we will clarify this to make the open-ended application more explicit.\n\nOther issues should all be fixed in the revised manuscript.\n\nThank you again, we look forward to hearing from you in the coming days!\n\n[1] Agarwal, Alekh, et al. \"Flambe: Structural complexity and representation learning of low rank mdps.\" Advances in neural information processing systems 33 (2020): 20095-20107.\n\n[2] Modi, Aditya, et al. \"Model-free representation learning and exploration in low-rank mdps.\" arXiv preprint arXiv:2102.07035 (2021).\n\n[3] Matsushima et al., Deployment-Efficient Reinforcement Learning via Model-Based Offline Optimization, ICLR21\n\n[4] Furuta et al., Policy Information Capacity: Information-Theoretic Measure for Task Complexity in Deep Reinforcement Learning, ICML21",
" Thank you for your review! We are pleased to see you appreciate that our work builds on theoretical foundations and applies them in a novel deep RL setting. We are optimistic that the issues raised in the review are things we can resolve over the coming days, and we hope you can subsequently consider supporting our paper’s acceptance. In particular, we have made a concerted effort to emphasize the existing theoretical work in this space. We appreciate that we previously did not include sufficient references here, but we do want to highlight that we put a tremendous effort into citing previous work (>100 citations in v1), so this was entirely unintentional. We hope that now with appropriate recognition, it is clear our work extends these ideas to the deep RL paradigm, which makes a solid contribution for NeurIPS. \n\nSee below for our detailed responses to your Major Weaknesses.\n\n### W1: The paper is not very well-origanized.\n\nWe will seek to address these issues in the remaining responses.\n\n### W2: Existing theory undermining novelty. \n\nWe want to emphasize this work is not theoretical in nature. It is a methods paper to which we have added what we believe are useful theoretical intuitions. We also do not claim to have invented the theory of reward free exploration. In fact, we make that clear in the related work section (which has been expanded upon). In this work we consider a reward free setting where the learner can deploy exactly $B$ policies at each time-step. Our contribution is to adventure an answer and devise methods for this problem inspired by an information theoretic approach borrowed from existing work in active learning. We would be very excited if our work could spur a more thorough theoretical exploration of these ideas, of which (in the batch setting) *we are the first to port to deep RL*. \n\nWe have added [1] and [2] to our related work section. Nonetheless we respectfully disagree with the reviewer in ascribing much more similarity between [1,2] and our work than sharing the objective to efficiently build a model by explicitly maximizing state coverage. First, we demonstrate our algorithms to be extremely effective (SOTA) in the neural network approximation regime, something that neither FLAMBE nor MOFFLE do as they only work for low rank MDPs, a regime that is far from describing many practical problems such as the large scale problems in our experiments. Second, we are chiefly invested in the concept of designing a diverse set of policies for exploration. This is an explicit objective of our work, and one that in practical settings we believe is crucial for the batch exploration problem we are proposing and one for which many theoretical works (the ones you cited included) do not provide an answer to. \n\n### Q1: Figure 1\n\nThe top row represents data collection and the second represents learning. The key idea here is learning occurs offline, rather than with some deep RL methods where the agent pauses every timestep to take a gradient update. The Figure was based on a similar diagram in [3] which introduced the deployment efficiency setting. To address the reviewer’s concern we have updated the Figure to make the agents distinct in the top two rows (representing deployment and learning). In [3] the offline setting is shown as 1, but we agree it makes more sense to give it a 0, so we also made that change. Thank you for the suggestions!\n\n### Q2: Contextual MDP\n\nIndeed, it is entirely possible there are also different dynamics for each context. 
In this paper we consider the common case where the dynamics are the same across each context, but the reward function changes. We still find the CMDP formalism helpful, since it is being adopted more broadly in the community, but agree this special case is unclear in the paper. We have updated the manuscript to reflect this.\n\n### Q3: Derivation of the objective/mismatch between main and Appendix\n\nWe apologize for the confusion here, and note that the appendix *only* contains a derivation for the ImagDiv term. We hope that the general response has cleared up how the ImagDiv term and InfoGain are both inspired by the core idea of maximizing mutual information, but differ considerably in the types of policies that they induce. Concretely, ImagDiv tries to ensure overall diversity across trajectories, whilst InfoGain tries to maximize the one-step epistemic uncertainty over imagined MDPs, which is less suitable when considering populations of agents. We will make sure this is communicated properly in the appendix.",
" ### W3: Formalism unclear\n\nWe hope this is now largely cleared up in the general response, but to answer your individual concerns, when moving to the trajectory perspective, we want to maximize over policies as these induce trajectories; note that Plan2Explore still takes an argmax, but instead over actions as their formulation focuses on the one-step setting.\n\nRegarding the intuition behind the mutual information between a distribution of states and an imagined MDP, when decomposing the objective into a difference in entropies, (i.e., Eq. 1 in our paper), we see that the first term refers to general diversity in the trajectories, whilst the second term refers to diversity in trajectories that is irreducible (since we condition on the MDP, e.g., as a result of inherent transition function stochasticity). Taking their difference gives the epistemic uncertainty over the trajectories themselves, hence why we choose to maximize this term. Then, when factoring in the population, as we do in Eq. 4, we need to make sure we don’t ‘double count’ the epistemic uncertainty due to those trajectories already being preferred by other individual policies.\n\n### W4. Theory is motivation not guarantee\n\n“Theory is motivation not guarantee”... we agree with this statement! This is a deep RL paper, our theoretical statements should be seen as a motivating guide to justify the design choices we have made. We do not consider this work to be a theoretical paper nor we believe our main results are theoretical in nature; we have now changed the wording in the paper to reflect this.\n\n### W5. Writing more polished\nWe have now made these changes in the paper, thank you for finding them.\n\nMoving to the questions....\n\n### Q1: “While true”?\n\nIn theory we could run this algorithm forever, it is intended to be open-ended. The key idea is that at each deployment we collect a large batch of data. This is very different to existing paradigms where the model often retrains every timestep with a single episode of new experience being collected at a time. That being said, we agree it is confusing in this specific context so we made a change in the manuscript. We hope it is clearer now. \n\n### Q2: L.97 clarification\n\nThere is no difference from Plan2Explore here in principle, it is just 1) the specific objectives used and 2) the number of steps collected with the subsequent policy. We collect thousands of steps vs. P2E which updates every timestep.\n\n### Q3: Fig3\n\nThe only answer we can give is that the train-time data collection may not be a perfect proxy for the generality of the world model. We tried to give a few different metrics to show the performance of different approaches for this very reason (see our reply to reviewer zCLK for additional details). The fact that CASCADE is pareto optimal here is reassuring that it is the strongest method.\n\n### Q4: Fig3\n\nFollowing the previous question, it is simply the case these two metrics do not perfectly capture the breadth of the distribution of data collected at train time. It could be the case that the state coverage is the same but CASCADE is more uniform over the covered space so models it better.\n\n### Q5: Fig5\n\nInteresting observation, we do not know any reason why this would be the case. The models are all trained on the same initial data. \n\n### Q6: Addressing weaknesses\n\nAbsolutely! Thank you for the opportunity :)",
" We appreciate your thorough review, providing us with plenty of opportunities for clarification and improvement. We believe we have addressed all of these concerns, so please let us know if anything else is required for your score to be improved.\n\n### W1: Lack of experimental evidence.\n\nFirst, we respectfully disagree about our experiments not being on complicated environments. Concretely:\n* MiniGrid is a non-trivial exploration environment, used by state of the art model free algorithms explicitly testing for exploration given the sparse reward and partial observability. See for instance: RIDE (ICLR 2020), AMIGo (ICLR 2021), AGAC (ICLR 2021), NovelD (NeurIPS 2021) just to name a few. Our reward-free agents can solve two of the tasks in this benchmark zero-shot, only deploying a handful of exploration policies.\n* From Atari we include Montezuma’s Revenge which is widely considered to be one of the most challenging exploration environments. Our agents can get >0 performance zero-shot with reward-free exploration. We now also have additional Atari experiments, each presenting their own unique exploration challenges.\n* For DMC we use Walker which is one of three environments considered by the URLB benchmark (NeurIPS 2021). Note that in URLB the “unsupervised” methods do not perform especially well here, so this is very much still the frontier for reward-free deep RL research. Further, we consider initializing our agents with *three different datasets*: random, medium and expert. This is essentially three different experiments all using the walker benchmark. See Figure 7 in the Appendix for the full results. This alone is a large-scale experiment, using one of the environments widely studied for current SoTA reward-free RL algorithms. \n\nSecond, since we appreciate that you may need further convincing, we have made our best effort to run new experiments *based on your specific recommendations*, which are viewable in the general response and updated manuscript. Concretely, we have been able to run 10 seeds for each method in Crafter. We tested each agent with 20 deployments with a deployment size of 50k, using population size of 10 for PP2E and CASCADE. This sums to 1M steps, for fair comparison to the original Crafter paper. We use the evaluation protocol from the paper (geomean from all training data) to produce the “Crafter Score” as follows:\n\n| Method | Random | P2E | PP2E | CASCADE |\n|:-------------------:|:--------:|:--------:|:------:|:------------:|\n| IQM | 1.54 | 2.03 | 2.03 | 2.07 | \n| (sem) | (0.01) | (0.03) | (0.02) | (0.02) | \n\nAs can be seen, we do see gains here for CASCADE. Interestingly, PP2E does not outperform P2E, indicating that random initialization provides *insufficient* diversity. Meanwhile, CASCADE does see a small improvement, which we believe could be extended in future work. In particular, focussing on improved *behavioral representations*, which is an active area of work in the quality diversity, multi-agent and deep RL communities. \n\nFurther, we have also added results on two of the Atari games mentioned: Hero and Frostbite; please see the updated paper for details about these (principally Figure 4).\n\n### W2: Limited comparison to prior work\n\nWe respectfully disagree that our current experiments are “classical toy environments”. There are many recent purely empirical papers in RL using these environments to demonstrate SoTA agents (e.g. RIDE, AGAC, NovelD). 
Instead, we use these same environments in a far more complex setting of reward-free deployments to learn a general world model. To reiterate:\n- MiniGrid is widely used by SoTA exploration agents due to partial observability and sparse reward. Further, it is procedurally generated.\n- Montezuma’s Revenge has been a huge focus for exploration works for the past 5-10 years. Very few works consider this in the reward-free setting.\n- DMC Walker is from a brand-new benchmark (URLB) which only came out last year. It seems very unfair to call this a “classical toy environment” when it is less than 12 months old. Further, we use three different offline initializations, which may have been missed by the reviewers. \n\nThe criticism that the setting is not justified by real-world settings seems unreasonable. Almost all deep RL papers at NeurIPS could be rejected on this basis. In fact, our setting is very much justified by real-world settings. Think of multiple robot arms working simultaneously to collect data. In this case B is small but not 1. Further, the majority of offline RL works simply use D4RL, a proprioceptive (and lower-dimensional) benchmark which is far more toy than using a latent world model *from pixels* in three different experimental domains; indeed, the original deployment-efficiency paper uses this domain [64]. ",
" Moving to the Concerns/Questions:\n\n### Q1. Pseudocode exploration policy. \nThis is to improve the generality of the formulation, and CASCADE still naturally fits into this. We can consider CASCADE as comprising a *single policy* that can switch between B behaviors during deployment without retraining. We then collect A * B episodes at deployment time, since we collect A episodes with each of the B behaviors. This could be conducted either in parallel or sequentially. We have made a comment on this in the paper as we agree it was not clear before.\n\n### Q2. Embedding choice.\nThis is a great question, and it is very much an open problem in the literature. It boils down to the following: “what is the right behavioral representation with which to measure diversity?”, which we have seen discussed in a huge range of fields in RL. In our case, we take the final recurrent state for two main reasons. First, inspired by work in neural machine translation [cite Seq2seq], which use the final encoder RNN state to decode complex translations, we note that the final state should contain information that summarizes the trajectory. Therefore if two final states are similar, it is reasonable to assume that their imagined trajectories are also similar; we find this to be the case empirically. Second, it is significantly more tractable to compare only final states between agents; indeed we experimented trying to utilize non-parametric distance estimation of imagined trajectory latents, and observed that these did not perform as well, and took significantly longer to run.\n\nHowever, we fully accept that if we want to solve something more complex, such as Crafter, this may need a more sophisticated embedding which explicitly models longer term interactions. We leave this for future work.\n\n### Q3. Typo\nFixed, thank you!\n\n### Q4. Jump from Eq 2 -> 3.\n\nAs identified, in our method we produce $B$ exploration policies. This makes it paramount we extend the single policy mutual information maximization formulation that produces a single policy (equation 2) to the multi policy setting (Equation 3). As we show (Lemma 1) it is not sufficient to produce a single policy and play it $B$ times. We hope this is clearer now!\n\n### Q5. More complex experiments.\n\nSee the response above, we have some exciting new results! We are glad that the reviewer agrees the method doesn’t have to work perfectly in every environment, but in many cases it does provide an improvement.\n\nTo conclude, we have now improved the abstract by making our contribution clearer, and we believe our experiments were already thorough and general but have improved them anyway. Given this response, we kindly hope the reviewer provides additional support for our paper by increasing to an accept.",
" Thank you for your positive review, we appreciate that you found the motivation easy to understand and the paper to be clear. It seems your concerns are twofold: 1) the abstract quality 2) more “complicated” environments. We will try to address these below.\n\n### 1. The abstract.\n\nWe completely agree; we should have done a better job specifying the novelty and necessity of CASCADE. We have subsequently changed the abstract based on the reviewer’s feedback and we hope that it is now much clearer!\n\n### 2. a) The experiments are not strong enough, b) need more complicated environments.\n\na) We are always seeking to improve our work and we have made our best effort to run some new experiments to strengthen the paper. Based on feedback from reviewers we are pleased to let you know that we have been able to run 10 seeds for each method in Crafter. We tested each agent with 20 deployments with a deployment size of 50k, using population size of 10 for PP2E and CASCADE. This sums to 1M steps, for fair comparison to the original Crafter paper. We use the evaluation protocol from the paper (geometric mean from all training data) to produce the “Crafter Score” as follows:\n\n| Method | Random | P2E | PP2E | CASCADE |\n|:-------------------:|:--------:|:--------:|:------:|:------------:|\n| IQM | 1.54 | 2.03 | 2.03 | 2.07 | \n| (sem) | (0.01) | (0.03) | (0.02) | (0.02) | \n\nAs can be seen, we do see gains here for CASCADE. Interestingly, PP2E does not outperform P2E, indicating that random initialization provides *insufficient* diversity. Meanwhile, CASCADE does see a small improvement, which we believe could be extended in future work. In particular, focussing on improved *behavioral representations*, which is an active area of work in the quality diversity, multi-agent and deep RL communities. \n\nFurther, we have also added results on two of the Atari games mentioned: Hero and Frostbite, as well as including more deployments for DMC. In all cases CASCADE demonstrates an improvement over the baselines. Please see the updated paper and general response for more detail.\n\nb) We have to disagree about our experiments not being complicated environments. Concretely:\n* MiniGrid is a non-trivial exploration environment, used by state of the art model free algorithms explicitly testing for exploration given the sparse reward and partial observability. See for instance: RIDE (ICLR 2020), AMIGo (ICLR 2021), AGAC (ICLR 2021), NovelD (NeurIPS 2021) just to name a few. Our unsupervised agents can solve two of the most difficult tasks in this benchmark zero-shot.\n* From Atari we include Montezuma’s Revenge which is widely considered to be one of the most challenging exploration environments. Our agents can get >0 performance zero-shot with reward-free exploration. We now also have additional Atari experiments, each presenting their own unique exploration challenges.\n* For DMC we use Walker which is one of three environments considered by the URLB benchmark (NeurIPS 2021). Note that in the URLB the unsupervised methods do not perform especially well here, so this is very much still the frontier for reward-free deep RL research. Further, we consider initializing our agents with *three different datasets*: random, medium and expert. This is essentially three different experiments all using the walker benchmark. See Figure 7 in the Appendix for the full results. This alone is a large-scale experiment, using one of the environments widely studied for current SoTA reward-free RL algorithms. 
\n\nWe argue that each of these environments is non-trivial and highly relevant for the current frontier of reward-free RL. The key point though is that we have a single method that works on all of them with *no implementation modifications*, demonstrating our method’s generality. ",
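As a side note on the aggregation used in the Crafter table above: the exact pipeline is not spelled out in this thread, but a common way to compute IQM and the standard error over seeds looks like the following (the per-seed scores are invented for illustration):

```python
import numpy as np
from scipy import stats

def iqm(scores):
    """Interquartile mean: the mean after trimming the top and bottom
    25% of scores; a robust aggregate popular in deep RL evaluation."""
    return stats.trim_mean(scores, proportiontocut=0.25)

def sem(scores):
    """Standard error of the mean across seeds."""
    scores = np.asarray(scores, dtype=float)
    return scores.std(ddof=1) / np.sqrt(len(scores))

# Hypothetical Crafter scores from 10 seeds of one method.
seed_scores = [2.1, 1.9, 2.0, 2.2, 2.05, 1.95, 2.1, 2.0, 2.15, 1.98]
print(f"IQM = {iqm(seed_scores):.2f}, sem = {sem(seed_scores):.2f}")
```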
" Thank you for your positive review! We are pleased to see you found the work to be well-motivated and high quality. Focusing on the Questions and Weaknesses, it seems the largest area of concern can be addressed with better explanation on our side. We will attempt to explain here and we hope that if this satisfies the concerns you will consider raising your score.\n\n### Why is rewarding episodes being considered in a reward-free setting?\n\nThis is a great question, we agree it is confusing. Essentially we are trying to measure the *overall effectiveness* of the exploration policy. We decided that a fair way to measure would be “reward” + “coverage” because:\n* *Rewarding episodes* may capture the discovery of more complex behavior, but does not indicate the *breadth* of exploration. For instance, consider an agent that adopts a complex exploration policy that consistently reaches the furthest room, but does not explore any rooms in between. This would facilitate the learning of a *narrow* set of complex behaviors, but will struggle if the goal gets moved at test time to a room that’s closer to the initial position.\n* *Coverage* will reward filling the space but may not capture depth of exploration or more complex behaviors to get the final few percentage points. For instance, an agent may learn to fully cover an entire room, but not learn complex behaviors such as opening doors and entering new rooms.\n\nWe therefore believe that considering these two metrics together provides a reasonable representation of the exploration effectiveness. We then use zero-shot transfer performance as a further validation of the quality of the model. Finally, we note that recent exploration works also use similar metrics when evaluating [1, 2].\n\n### Final performance of P2E.\n\nWe will try to add this; it may not be ready for the rebuttal but will be in the CRC. For sure we will see better performance for P2E in the DMC tasks.\n\n### Elaborating on collecting without retraining.\n\nThis is largely addressed in the general response, but essentially imagine having a finite amount of time to collect data with a policy; we would want that policy to do diverse things to facilitate learning unknown downstream tasks. Essentially our method will switch between pre-trained diverse behaviors, rather than constantly deploy the same behavior. PP2E of course does the same thing, but without explicitly enforcing diversity in the behaviors.\n\nThank you again - please also check out the additional experiments (from the other reviewers) that are shared in the individual responses but more clearly in the general response and updated manuscript. Please let us know if there is anything else we need to address for you to raise your score.\n\n[1] Flet-Berliac et al. Adversarially Guided Actor-Critic. ICLR 2021\n\n[2] Zha et al. Rank the Episodes: A Simple Approach for Exploration in Procedurally-Generated Environments. ICLR 2021",
" This work introduces a new problem setting, Reward Free Deployment Efficiency focusing on two incentives: 1. Task agnostic exploration facilitates generalization. 2. Exploration policies that can collect large quantities of data without centralized retraining facilitate scalability.\n\nIn addition, this work introduces the Coordinated Active Sample Collection via Diverse Explorers (CASCADE), which can gather a diverse set of data and is inspired by Bayesian Active Learning.\n\nIn this work, B exploratory agents are trained in parallel. At each deployment, a loss containing two parts is maximized; the first part is a diversity term between the agents' behaviors. And the second part is the so-called information gain. \n\nThe authors have used the DreamerV2 agent as the base agent for their work and their baselines. Furthermore, they used the well-known Plan2Explore (P2E) and a modified version (Population Plan2Explore, PP2E) as their baseline. Finally, they have produced experiments to show how well their proposed method can explore the state space and also perform with a zero-shot manner to the task-specific problem. Originality:\nThis work introduces a new setup that is called reward-free deployment efficiency. Furthermore, inspired by the P2E agent, the authors propose a new method called CASCADE that can perform decently compared to the previous baseline in this setup.\n\nQuality:\nThe authors have done an excellent job explaining their setup, proving their claims, and discussing their experiments.\n\nClarity:\nThe paper is well-written which prevents a familiar reader from doing additional passes through different paragraphs.\n\nSignificance:\nDespite the fact that the proposed method, CASCADE, is well-motivated, I do think that there are few arguments about why the reward-free deployment efficiency matters. I can assume real-world scenarios where access to about 20 different agents is not possible.\n 1. Why is rewarding-episodes being considered in a reward-free setting? (This is a bit confusing)\n2. Would it be possible to have the final performance of the P2E (not in the deployment efficiency setting) reported as a dashed line? This is informative in case one wants to see how difficult the proposed problem setting is.\n3. Could you elaborate more on this? \n\n>to facilitate scalability, exploration policies should collect large quantities of data without costly centralized retraining.\n\n I can easily imagine real-world scenarios where access to about 20 different agents is not possible.\n The authors have addressed the limitation of their work in section 4.3.",
" This paper discusses a common problem in DRL, w.r.t, low efficient exploration in sparse reward tasks, and poor generalization of the trained agent. The authors propose CASCADE that learns to build a world model following a self-supervised exploration strategy. The naive motivation of CASCADE is to theoretically improve the learning objective of Plan2Explore to be more generalized. The results of the empirical evaluation show the effectiveness of the proposed method. **Strengths:**\n\n1. This paper is well structured and easy to read. The motivation is easy to understand, and I agree with it. It seems that the authors are not trying to over-sell the contributions, which is good.\n \n2. The theoretical proof is clear, and there is no mistakes in term of notations.\n\n3. The representation of the experiment results is clear, and it is clear to find how CASCADE works.\n\n4. This paper includes a strong background introduction and related research.\n\n**Weakness**\n\n1. I encourage the authors to improve the part of the abstract. I cannot tell precisely the novelty and necessity of CASCADE in the first reading.\n\n2. The experiments are not strong enough to demonstrate the advantages of the proposed method. I was expected to see CASCADE to work in more complicated environments, e.g., the SUNCG dataset (refer to [1]).\n\n\n\n **Concerns and Questions**\n\n1. The pseudo-code leads to misunderstanding. There is only one exploration policy in algorithm 1, which does not match the main contribution of this paper. The input of algorithm 1 should be multiple policies, which can be more explicit and reinforce the proposed method's main difference from the others.\n\n2. As mentioned in Line.123, the embedding of the representation is the final state. I wonder what the advantages of doing this are. It seems that there are multiple MDPs existing. Why not embed the representation in terms of similarity or uncertainty.\n\n3. There is a typo in Line 136. The proof is in Appendix C.1.1. \n\n4. There lacks an explanation of why it is necessary to extend eq.2 to the population-based version (eq.3).\n\n5. It would be great to include the evaluation results of CASCADE in more complicated environments (as mentioned above). It does not matter even if the trained policy's performance is not very good.\n **Limitations and potential negative societal impact**\n\nN/A\n\n**Reference**\n\n[1] Song, S., Yu, F., Zeng, A., Chang, A. X., Savva, M., and Funkhouser, T. (2017). Semantic scene completion from a single depth image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1746–1754.\n\n##upload after the rebuttal##\n\nConsidering that the authors have conducted a lot of work during the rebuttal, the added experiments reinforce the empirical evaluation parts of the paper in a way. I raise my score by 1. ",
" # New setting for reinforcement learning\n\nThe paper first introduces **reward-free deployment efficiency**, a new setting for reinforcement learning.\n\nThe *deployment efficiency* part means that we limit the number of agent versions that we deploy in the environment to collect new experience (for scalability reasons). \n\nThe *reward-free* part means that the collection procedure is task-agnostic. It aims at exploring the state-action space as much as possible without reward consideration (for generalization reasons). \n\n# New algorithm within this setting\n\nWithin this setting, they propose **CASCADE**, an algorithm to deploy a fleet of *coordinated* agents that gather task-agnostic experience in the environment. \n\nSimilarly to [Plan2Explore](https://arxiv.org/pdf/2005.05960.pdf), the exploring behavior is trained with an intrinsic reward that estimates the uncertainty of a world model (they use DreamerV2 world model). This objective has an information-theoretic interpretation. In the same way as Plan2Explore, they provide reward labels at test time, after data collection, to train a reward predictor for the world model. They can then train a task-specific agent in the imaginary MDP defined by the world model. \n\nThe major difference is to have different exploration policies and to *coordinate* them, so that they can organize together to explore different parts of the world.\n\nThey provide a theoretical motivation for this diversity of behaviors on a tabular version of their method (CASCADE-TS). \n\nTheir experiments demonstrate the superiority of CASCADE over random exploration and Plan2Explore, on MiniGrid, Montezuma's Revenge, and 4 tasks in DeepMind Control (stand, walk, run, flip). \n\n # Strengths\n\n- Clear demonstration of the ability to explore the world in MiniGrid toy environments and Atari Montezuma's Revenge, with clear visualizations.\n\n- Interesting improvements over an initial dataset of demonstrations with very few deployments in DeepMind Control.\n\n- They provide a theoretical motivation behind the intuition that it is better to have diverse exploration policies.\n\n--- \n\n# Weaknesses\n\n## Lack of experimental evidence\n\nThe experiments about world exploration (4.1) are in toy environments of MiniGrid (FourRooms and MultiRoom) and in only one Atari game (Montezuma's Revenge). Running all Atari might be too much, but some Atari games present interesting world exploration problems (Hero, Freeway, Frostbite for instance). \n\nAlso, [Crafter](https://github.com/danijar/crafter) would be a much less trivial environment to benchmark the capabilities of CASCADE: procedurally generated world with many assets and interesting behaviors to discover.\n\n---\n\n## Deployment-efficiency limits comparison to prior work\n\nThe deployment-efficiency setting seems a bit artifical here. Indeed, it is justified in the text by scalability reasons, that are important when it comes to real world problems, but the environments considered in this work are classical toy environments. \n\nIntroducing a restriction on the number of deployments prevents proper comparison to prior work. In particular, it would have been really interesting to compare CASCADE with Plan2Explore (also reward-free / task-agnostic) by using their number of deployments and all of their DMC environments (a superset of the ones considered here). 
\n\nIntroducing this new setting (which is not justified by any real-world environments) is risky since the comparison / novelty / interest are much harder to assess with a change in the experimental setup.\n\n---\n\n## The formalism is sometimes very unclear\n\nIn particular, L111 to L125, the information-theoretic objective taken from Plan2Explore (Equation 1) is confusing. \n\nFirst, looking at [Plan2Explore](https://arxiv.org/pdf/2005.05960.pdf):\n- The intrinsic reward is to collect experience that maximizes the world model uncertainty, and they estimate this uncertainty via the empirical variance of an ensemble of next WM-latent predictors. \n- They clearly state that this objective only *approximates* an information gain, and they frame it as a nice information-theoretic interpretation of their objective. \n\nHere, equation 1 states that the exploration policy is the argmax of the mutual information term, which is confusing since it is not directly optimized in Plan2Explore but approximated. Also, L116 mentions several MDPs to explain the mutual information term; what are they? In fact, in Plan2Explore, it is clear that there is a *single* imaginary MDP (defined by a single world model), and that they consider the disagreement in the ensemble of predictors, all trained to predict the dynamics of the same MDP. The explanation with several MDPs does not seem to make sense. This ensemble of predictors seems to be the central piece of the \"information-theoretic\" objective, but is only mentioned in Appendix B as an implementation detail.\n\nAlso, in Plan2Explore, the mutual information is taken with a random variable $w$ that represents the optimal dynamics parameters, in a Bayesian way. Here, in equation 1, what is the mutual information between a distribution of states and an imaginary Markov decision process? \n\n---\n\n## Theory is more a motivation than a guarantee\n\nFrom what I understand, they prove that it is better to have diverse explorers (Lemma 1) and that CASCADE-TS (tabular version of CASCADE) learns faster than or similarly to a naive fleet of identical explorers. However, there is no guarantee that CASCADE-TS actually converges to a solution of the optimization problem. \n\nThe name of section 3.3 is clear on that point (\"Theoretical Motivation\") but the abstract claim is not (\"theoretical guarantees\").\n\n---\n\n## The writing could be more polished\n\nSome typos, and the notations could be more rigorous (e.g. L92 $\\rho_0$ not defined, L137 $e$ is not defined).\n\n---\n\n# Originality\n\nOriginality is not clearly a strength or a weakness here. The work relies on the formalism of Plan2Explore and the code of DreamerV2, but the added term to maximize diversity among explorers in order to achieve better exploration / zero-shot generalization is interesting. \n 1) The `while True` in Algorithm 1 is slightly confusing since the point of deployment efficiency is precisely to limit the number of iterations of this loop, which here is infinite. Why couldn't you use a `for` loop over the number of deployments instead? That would allow you to explicitly mention the number of deployments and make clear that it is supposed to be small.\n\n2) L97, could you explain why updating $\\pi_{EXP}$ using the uncertainty of the imaginary MDP is different from Plan2Explore? \n\n3) Figure 3, in FourRooms, how is it that P2E, with a lower state coverage and fewer rewarding episodes found, seems to beat PP2E on zero-shot success rate?
\n\n4) Still in Figure 3, could you explain more why CASCADE consistently reaches 100\\% zero-shot success rate, while its number of rewarding episodes and state coverage are very similar to PP2E?\n\n5) Figure 5, I would expect the 4 methods to be ranked randomly at deployment 0 (they all have access to the same data to start with), but across 30 seeds and 4 tasks, it looks like CASCADE is above the other ones even at deployment 0. Is there any reason for that?\n\n6) Would it be possible to address the weaknesses pointed out in the \"Strength And Weaknesses\" sections? In particular, experiments on more Atari games / Crafter, comparison to Plan2Explore with their number of deployments and environments, and clarification of the formalism.\n Section 4.3 clearly mentions some limitations. In particular, the authors make it clear that the learnt zero-shot policies are far from being optimal, which is understandable considering the difficulty of the reward-agnostic experience collection. \n\nThey do not discuss the potential negative impacts of their method, but clearly explain why the \"deployment efficiency\" constraint is useful in terms of security and cost.\n",
" The paper introduces a setting called reward-free deployment efficient setting, where one can interact with the environment with limited number of times/limited number of updates (but could with multiple agents in parallel), in the reward-free fashion. The paper proposes to solve this problem by learning a world model with a network ensemble, and with a certain number of exploration policies. The paper presents to train the world model in a similar fashion as DreamerV2, and train the exploration policies with a composition of heuristic objectives. Finally, the paper evaluates the effectiveness of the algorithm on both game and locomotion benchmarks. ## Strength\n\n1. The idea of using a set of exploration policies is a crucial contribution, which also ensembles the idea of \"policy cover\" used for exploration in multiple theory papers. The paper provides some good intuition on why using beyond one fixed policy is important for exploration, such as Figure.2. However, the theory component does not seem very surprising, which will be more detailed explained in the latter part of the review. \n\n2. The new reward-free deployment efficient setting indeed has some practical significance, especially in the robot learning scenario as the paper mentioned. The combination of reward-free exploration is also well justified. It would be more interesting to see if the proposed algorithm can indeed be adapted to the robot learning scenario, if possible. \n\n3. The performance of the algorithm, especially on the atari game benchmark, is competitive. Although the setting of games may not be a very good fit for the deployment efficient setting, it also shows that the proposed algorithm can indeed be a good algorithm for just performing reward-free exploration. It would be interesting to see any comparison to algorithms with explicit exploration bonuses. \n\n## Weakness\n\n1. The paper is not very well-origanized. Some of the results are not crucial, and there are many unclear parts and unjustified components of the algorithm/derivation. Details will be listed in the questions and limitations section. \n\n2. The idea of learning a global accurate dynamics model, with a number of exploration policies based on the current dynamics models, in the reward-free exploration section, is not new (c.r. the idea of policy cover mentioned above). There are many theory papers that considered the similar scenario and proposed algorithms with similar intuition, for example, [1,2], which in fact undermines the novelty of the paper. \n\n### references\n\n[1] Agarwal, Alekh, et al. \"Flambe: Structural complexity and representation learning of low rank mdps.\" Advances in neural information processing systems 33 (2020): 20095-20107.\n\n[2] Modi, Aditya, et al. \"Model-free representation learning and exploration in low-rank mdps.\" arXiv preprint arXiv:2102.07035 (2021).\n ## Major Questions\n\n1. In Figure 1, the top two rows seem identical. Thus there are a lot of redundancies in the figure and in fact seems a little bit strange. Also, the definition of deployment seems to be the number of interactions that the algorithm itself will have with the environment? Thus the number of deployments of offline RL should be 0? Otherwise, if the offline dataset is collected from a heterogeneous source, the number of deployments of offline RL could also be large?\n\n2. 
The definition of contextual MDP (CMDP): according to [3], the definition of CMDP seems to be that we have a set of MDPs, where each MDP, in addition to having a different reward, also has different dynamics conditioned on the context?\n\n3. The derivation of the final objective is very confusing. In the main text, the ImagDiv term seems to be a heuristic surrogate of the first entropy term in line 140. Under the assumption that the conditional latent distribution is a Gaussian with variance unrelated to the policy, which seems to be the assumption made for section 3.2, the ImagDiv objective is actually estimating the whole objective? However, in the appendix, the paper also presents the derivation for the InfoGain term, under the same assumption. This is confusing because the paper tries to use two different objectives to estimate the same thing?\n\n4. In the definition of the ImagDiv term, is the expectation also over the model ensembles? Also, $\\Phi$ is not defined in the ImagDiv term but is defined in the appendix for the InfoGain term. Is $\\Phi$ in ImagDiv the same as $\\mu(m_{\\psi}\\pi)$?\n\n5. In Fig.3, why is PP2E10's zero-shot success rate higher in a more challenging environment?\n\n6. In the experiment for locomotion, why do we need to assume that we have access to the offline dataset? It is known that offline RL needs good coverage from the offline dataset to perform well; thus it looks like in this setting the algorithm already gets a good warm-start? How would the method perform without the offline dataset?\n\n### references\n\n[3] Hallak, Assaf, Dotan Di Castro, and Shie Mannor. \"Contextual markov decision processes.\" arXiv preprint arXiv:1502.02259 (2015).\n\n## Minor issues\n\n- In algorithm 1, it seems like the algorithm will never terminate?\n\n- Many hyperlinks to the sections in the appendix are wrong. For example, line 136.\n\n- Line 140, on the right-hand side of the last equality, in the second entropy term, the distribution of latents should be conditioned on $\\tilde \\pi^{(j)}$'s but not $\\pi^{(j)}$'s (the subscripts).\n\n- The x-axis in Fig.5 is different from the text description. 1. The result in Lemma 1 assumes a deterministic MDP and a deterministic policy. Would it be obvious that one policy may not be able to explore the whole environment?\n\n2. Section 3.3 seems to be in an awkward position. After a very heuristic description of the deep RL approach, Section 3.3 returns to a theoretical analysis of a variant of the algorithm, and the setup is also simpler (tabular MDP). Maybe it could instead serve as a motivating section? "
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4,
4
] | [
"WCJsWhlmMPi",
"45CkpvW9Qsn",
"RP74IK81ykJ",
"ITmuKmq8RQI",
"blF009OGmuQS",
"jJTTnPmtkYE",
"nips_2022_RuNhbvX9o9S",
"ZAVxBnnyv0G",
"5D4Mm1Z8SDH",
"ZAVxBnnyv0G",
"o9FsPC3xnh",
"nBarta-SX0K",
"ASx3SgtsbG",
"Y-ChXaRtMl0h",
"nips_2022_RuNhbvX9o9S",
"_wgHiTE2T9",
"96JD7H-SwU1",
"qVLmXF8jjjo",
"Sj1UKx7Mx9y",
"ASx3SgtsbG",
"8EaGV2Z9uk",
"jikxKFtNzu",
"nips_2022_RuNhbvX9o9S",
"nips_2022_RuNhbvX9o9S",
"nips_2022_RuNhbvX9o9S",
"nips_2022_RuNhbvX9o9S"
] |
nips_2022_3-3XMModtrx | Is a Modular Architecture Enough? | Inspired from human cognition, machine learning systems are gradually revealing advantages of sparser and more modular architectures. Recent work demonstrates that not only do some modular architectures generalize well, but they also lead to better out of distribution generalization, scaling properties, learning speed, and interpretability. A key intuition behind the success of such systems is that the data generating system for most real-world settings is considered to consist of sparse modular connections, and endowing models with similar inductive biases will be helpful. However, the field has been lacking in a rigorous quantitative assessment of such systems because these real-world data distributions are complex and unknown. In this work, we provide a thorough assessment of common modular architectures, through the lens of simple and known modular data distributions. We highlight the benefits of modularity and sparsity and reveal insights on the challenges faced while optimizing modular systems. In doing so, we propose evaluation metrics that highlight the benefits of modularity, the regimes in which these benefits are substantial, as well as the sub-optimality of current end-to-end learned modular systems as opposed to their claimed potential. | Accept | This study investigates modular architectures, their properties, and their effectiveness in a class of synthetic yet informative scenarios. The reviewers unanimously recommend this paper for acceptance, some of them with high praise, and I enjoyed it as well: I suspect it will be read widely and have a lasting impact on our thinking about modularity. | train | [
"GPoKbSvWDz",
"rVwLFfr0H8W",
"TN4WSe8uXl8",
"OuWRrLu_nJS",
"bKGw0-hMRe8",
"3Jt3tGS5QSG",
"6jJGQigVlXj",
"Eq_-dEyaJv6",
"j5ax1FNU6Ce",
"eyhEivyrbR",
"dGF0yxJV5q-",
"RGh9wPKxDg2",
"WXsy9qgG71J",
"byFbhK9cF12",
"IEF5IW6-XNk",
"I5_kIOTV4rr",
"PMssHUCyC5B",
"5K7GuKyQmws",
"lGSjptQ_p7",
"mvCLvBUsdHz",
"ALqfTKrmN2D"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks so much for your rebuttal! I understand your points, and believe this paper to have merits - I think it should be accepted, and my score currently reflects that! \n\nHoping that the other reviewers can similarly see the merits of this work!",
" We thank the reviewer for their time and response and are grateful for the score increase. \n\n1. We understand the reviewer's point and will revise the text to mention the specific type of modularity that we are talking about, which are dynamic modules that can be queried and are interpretable. While one can have modular systems embedded in a more general network through loose and highly connected circuits, it becomes non-trivial to discover them which poses another challenge. Further, we talk about dynamic sparsity here, which is context dependent selection of different \"modules\" or circuits, which is not the case in static sparse regularized networks. We will add a discussion point on these in a revision of the draft to further clarify and emphasize the type of modularity that we discuss in the work.\n2. We completely echo the reviewer's sentiments here. However, we would like to point that to understand whether this perfect specialization is a necessary path for AI, we do need a test-bed and metrics to build the foundation for a systematic study of specialization over simple to complex domains. As described in the additional comment on connections to real-world settings, one can use this same style of analysis with the notion of specialization being at the level of languages in multilingual language modeling, to discover whether this notion of specialization is actually meaningful and beneficial in terms of performance. \n\nWe hope that this further clarifies the reviewer's concerns. Please do not hesitate to communicate other details that you believe would improve the paper.",
" Thanks for your response. I agree that transfer ability is better considered in another paper. I also agree that real world case is more complex and the paper gives a more clear setting, the work is an early but necessary step in the direction of learning with modularity. I have updated my score (5->6). \n\n1. The authors may clarify in their paper that MoE is a typical implementation to equip modularity prior, but it is not the only choice and MoE is not equal to modularity. I agree modular network is usually implemented by MoE nowadays. But it's also a very strong prior compared to the definition of modularity itself, ``a pattern of connectedness in which elements are grouped into highly connected subsets, which are more loosely connected to other such groups.'' [2]. Some others [1] also take e.g., sparse regularization as modular priors. We can also implement modular prior inspected by [3] using task-specific masks.\n\n2. In the experiments, we see the advantage to achieve perfect specialization, i.e., GT-modular. However, is it really a necessary path for AI to achieve such perfect specialization in real-world cases, so that it can learn things modularly and generalize more well? I still have concerns about how the conclusion made by the paper can guide the real-world AI design. \n\n[1] Clune, J., Mouret, J. B., & Lipson, H. (2013). The evolutionary origins of modularity. \n\n[2] Wagner, G. P., Pavlicev, M., & Cheverud, J. M. (2007). The road to modularity. \n\n[3] Csordás, R., van Steenkiste, S., & Schmidhuber, J. (2020). Are neural nets modular? inspecting functional modularity through differentiable weight masks.",
" Given that the author-reviewer discussion period is coming to a close soon, we request the reviewer to let us know if our responses have resolved their concerns and if there are any other questions that we can address.",
" Given that the author-reviewer discussion period is coming to a close soon, we request the reviewer to let us know if our responses have resolved their concerns and if there are any other questions that we can address.",
" Given that the author-reviewer discussion period is coming to a close soon, we request the reviewer to let us know if our responses have resolved their concerns and if there are any other questions that we can address.",
" Thanks to the reviewer for responding and updating their score. If the reviewer has any additional concerns or clarifications that we can help resolve, we would be happy to. Please do let us know in case of any questions.",
" Thanks to the authors for their response to my review. I think they are right that I had underestimated the complexity of the tasks. Mainly because of this, but also taking into account the other improvements, I have updated my overall score.",
" **Straight-forward Conclusion**\n\nWe would like to argue how our findings are not obvious (straight-forward). We test on very simple rule-based data settings in an infinite-data regime, which is arguably the easiest setting to discover specialization since there is an abundance of data. However, even in this simplified setting we see that specialization is not always obtained equally in all considered architectures, which points to two possibilities: (a) either obtaining perfect specialization is impossible without any manual engineering, or (b) we need some inductive biases or regularization schemes to incentivize specialization and prevent collapse in such systems.\n\nBuilding on this conclusion, our contribution also provides a test-bed for the study of such inductive biases by providing concrete quantitative metrics that can be used to quantify the problems experienced by such systems. We believe this is a first and important step to move away from single-example visualizations as evidence for specialization to more rigorous quantitative assessment. We see our synthetic suite of tasks as an evaluation tool for network architectures, and not as toy tasks as a proxy for real-world ones. We believe network architecture design needs to move beyond trial and error, and that careful architectural evaluation will be an integral part of system design in the future. Our work is an early but necessary step in that direction.\n\n**Capacity**\n\nWe refer the readers to the details about the implementations provided in a separate common comment above. All our corresponding comparisons between models are controlled for the number of parameters, and we also do analysis over different model sizes.\n\nWe hope that the additional details and clarifications provided provide a clearer picture of our work and resolve the reviewer’s concerns. We would also be happy to address any additional questions that the reviewer may have.",
" We thank the reviewer for their insightful comments about our work, and are hopeful that the steps taken (outlined below) will address their concerns. We also refer the reviewer to the general comment that was made to all reviewers for additional and pertinent information. Importantly, we now provide connections to real world tasks in one of the separate comments above. \n\n**Impact on Real-World Data Setting**\n\nWe refer the reader to the separate comment on the connections to real world settings as well as possible extensions to more real world and complex settings. We hope that this discussion would provide insights into the impact of this work on more real-world domains, and that our efforts are adequate to address the reviewer’s concerns.\n\n**Transfer Ability** \n\nWe agree with the reviewer that this is an exciting topic to explore with our experimental setup. Furthermore, it would be interesting to also consider performance and sample complexity based metrics for transfer ability across different tasks and even some pre-training and fine-tuning setup. However, to do this, we require not only mixture-distribution based tasks but also some notion of similarity between tasks because transfer of knowledge only makes sense when there is some commonality between the pre-training and fine-tuning setups and thus, one needs to design tasks and mixture distributions that are related to each other so that analysis can be done on pre-training on a set of tasks and then fine-tuning on another set of different but related tasks. This requires a lot more deliberation on the design choices and we believe that it should be a stand-alone contribution in itself and out of scope of this work. Once a proper set of task distributions are decided, our metrics can then be readily usable for further analysis. We refer the reviewer to the separate comment on real-world extensions and considerations for a more detailed discussion about this. Finally, we thank the reviewer for this insightful comment which has spurred early thinking that will likely lead to a follow up project. In sum, we respectfully reiterate that we believe the question of transfer ability falls outside of the scope of the current paper, if investigated properly, and hope the reviewer agrees with the value of our current results as well as its potential for expansion into future work.\n\n**MoE Structure** \n\nWe use the MoE structure as the implementation choice of modular systems since it is the most commonly used implementation of modular systems (see examples cited below). We would be quite interested to know other implementations that do not share this flavor or cannot be framed as an MoE.\n\n* Recurrent Independent Mechanisms; Goyal et. al 2019\n* Learning to Combine Top-Down and Bottom-Up Signals in Recurrent Neural Networks with Attention over Modules; Mittal et. al 2020\n* Object Files and Schemata: Factorizing Declarative and Procedural Knowledge in Dynamical Systems; Goyal et. al 2020\n* Dynamic Inference with Neural Interpreters; Rahaman et. al 2021\n* Compositional Attention: Disentangling Search and Retrieval; Mittal et. al 2021\n* Transformers with competitive ensembles of independent mechanisms; Lamb et. al 2021\n* Fast and slow learning of recurrent independent mechanisms; Madan et. al 2021\n* Neural Production Systems; Goyal et. al 2021\n* Routing Networks and the Challenges of Modular and Compositional Computation; Rosenbaum et. 
al 2019\n\n**Generalizability of Modular-op System** \n\nOur primary focus of using Modular-op system is as part of the benchmark to illustrate (a) the benefits of a modular system that decides module selection based on the correct information, i.e. ignores the irrelevant futures for module selection, and (b) to show the downsides of gradient-based learning for specialization as it also suffers from the problems of collapse and specialization, just to a lesser extent.\n\nWe first see that in the synthetic setting we considered, there is only one notion of specialization which is governed by the rule contexts $c$. In this setting, by driving the module selection through only $c$, we get better performance. Instead of leveraging it as a model for real-world tasks, we instead propose using Modular-op and GT-Modular to rank a notion of specialization for a task. A concrete example would be to think of multilingual language modeling and see in this setup whether driving specialization (either in Modular-op or GT-Modular) at the language level leads to better performance, or at the level of families of languages, or neither.\n\nOnce we have obtained a notion of specialization with the help of GT-Modular and Modular-op systems, we can leverage additional task inputs according to that notion for better performance.",
" We thank the reviewer for their useful comments about our work as well as for recognizing its importance. We provide implementation details of the models as well as the connections to real world tasks in a separate comment to all reviewers above and have revised the main text and the Appendix to reflect the same. Below, we treat additional concerns, and are hopeful that our answers and edits adequately address the reviewer’s concerns.\n\n**Simplicity of Dataset** \n\nWhile we agree that the datasets considered are simple, this is by design to adequately evaluate models. However, we suspect there might be a misunderstanding about the complexity of the tasks considered. It is true that for our MLP experiments, the inputs are just two real values and an integer context. However, for experiments on recurrent domains, the input consists of a sequence of 32 dimensional vectors with the sequence length 10, as well as a sequence of integer contexts. Similarly, our experiments on attention consider inputs to be a set of $4R$ or $6R$ dimensional inputs along with integer context for each element of the set, where $R$ is the number of rules. Another measure of complexity for our tasks is the number of rules themselves, which we range from 2 to 32. In light of these experimental setups, we note that this is close to an infinite data regime.\n\n**Extrapolation to Larger Models and Complex Datasets**\n\nWe perform our experimentation over a range of model sizes, with the larger models having 1 million or 10 million parameters depending on the setting (eg. check right-most points in the Figures 25 and 41 in the Appendix). We would also like to stress that the data considered is not small since new data is sampled at every training iteration (infinite-data regime). A working hypothesis which we have is that if specialization is so difficult to learn with simpler data available in abundance, it would be much more difficult to obtain in more complex and limited data regimes. This is further discussed in the comment on real-world applications and Appendix B where we talk about considerations to be kept in mind when extending to more real-world domains, as well as limitations.\n\n**Implementation Details** \n\nWe thank the reviewer for pointing this out. We refer the reviewer to the general comment on the implementation details of all the models and tasks, and note that we have amended the text for increased clarity on this topic (see Appendices D - F). Furthermore, we clarify that ablation experiments run over a large number of model sizes, typically ranging from 100 thousand to 10 million parameters, which suggest clear trends and validity of our findings over a range of model sizes.\n\n**Nuance in Model Implementation**\n\nThe nuances in the implementation details of attention-based and recurrent experiments that the reviewer is astutely pointing to is done to ensure that the sizes of the different models are similar. This requires a non-trivial computation because in a recurrent system, a monolithic RNN with hidden size 256 has more parameters than a mixture of 4 RNN models with 64 hidden size each, but has less parameters than a mixture of 4 RNN models with 256 hidden size. The computations done in the code are to obtain an estimate of the hidden size that maintains that the total number of parameters in the two systems are similar. 
We add this detail in the relevant sections of the Appendix (E and F), and apologize for the oversight.\n\n**Softmax in Modular Systems**\n\nWe have added this detail in the Model Setup sections in the Appendices D, E and F. Essentially the $p_m$’s in Table 1 define the probability of activation of module $m$, and thus the vector $p$ represents a probability vector which can be obtained through a softmax. We thank the reviewer for pointing out this area of potential confusion.\n\n**Mixture of Experts Distribution**\nWe apologize for the overload of notation here. When we talk about the data distribution, it is actually a Mixture distribution (https://en.wikipedia.org/wiki/Mixture_distribution) where each component of the mixture can be thought of as a distribution/functional mapping that an expert in a MoE model should learn to represent. We clarify this point in the revised manuscript.\n\n**I in the Equations**\n\nSince not all data is uni-dimensional, we use $I$ to represent the identity matrix. Hence, a number of times the data is sampled from a gaussian with identity covariance matrix if the data is multi-dimensional. We add a clarification statement in the text to this effect.\n\nWe hope that the additional details and clarifications provided help paint a clearer picture of our work and resolve the reviewer’s concerns. We would also be happy to address any additional questions that the reviewer may have. We thank the reviewer for the engaged and astute comments which helped us considerably improve the paper.",
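To make the mixture-distribution reading above concrete, here is a minimal sketch (our illustration, not the authors' released code) of a rule-based mixture data process: each example draws a rule context from the mixture weights and an input from a Gaussian with identity covariance, with fresh samples on every call as in the infinite-data regime. The linear form of the rules is a hypothetical stand-in for the paper's actual rules.

```python
import numpy as np

rng = np.random.default_rng(0)
num_rules, dim = 4, 32
# Hypothetical fixed linear rules, one functional mapping per mixture component.
rules = rng.standard_normal((num_rules, dim, dim))

def sample_batch(batch_size):
    """Fresh data on every call, mirroring the infinite-data regime."""
    c = rng.integers(0, num_rules, size=batch_size)   # rule contexts (mixture ids)
    x = rng.standard_normal((batch_size, dim))        # inputs drawn from N(0, I)
    y = np.einsum('bij,bj->bi', rules[c], x)          # y_i = f_{c_i}(x_i)
    return x, c, y

x, c, y = sample_batch(8)
```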
" We thank the reviewer for their insightful comments and recommendations for our work. We provide connections to real world tasks in a separate general comment above and take this opportunity to further address additional concerns about our work.\n\n**Analysis on existing sparse MoE models**\n\nWe agree with the reviewer on the point of expanding this analysis to some form of synthetic language based tasks and the impact of MoE based systems on such tasks. While it is an important direction to pursue, we think that this would require careful construction of the mixture distributions in the synthetic language domain which is a contribution on its own, and would be best suited as a follow-up and is out of the scope of the current work. In addition, we feel that even performing analysis on either multiple language modeling or multilingual translation could be important avenues to explore and discover if perfect specialization at the language level is important or not, and then further to see how well MoE systems are able to specialize accordingly. Also on the note of existing MoE models, we can see the Recurrent Network experiments as a specific case of one object file and R schematas in the SCOFF model of Goyal et. al 2020; Object Files and Schemata: Factorizing Declarative and Procedural Knowledge in Dynamical Systems for a system with $R$ rules.\n\nTherefore, we want to reiterate that while the proposed analysis of MoE systems trained on large datasets is important, it would require two additional considerations: (a) careful design of mixture distributions in the synthetic language domain such that there is only one notion of specialization, and (b) design of tasks and mixture distributions that are related to each other so that analysis can be done on pre-training on a set of tasks and then fine-tuning on another set of different but related tasks. These would require a lot more deliberations on the design choices if done right, and we believe that they should be stand-alone contributions in themselves. Once a proper set of task distributions are decided, our metrics can then be readily usable for further analysis. We hope that the reviewer can still recognize the value of the current contribution as a first step to develop evaluation tools for network architectures, which we foresee to be further used and developed in the future.\n\n**Title** \n\nWe thank the reviewer for the suggestions about the title but unfortunately it is out of our hands as it cannot be changed during the rebuttal stage of the submission.\n\n**Limitations** \n\nWe thank the reviewer for pointing this out. While we cover some of the limitations of our current analysis in the Future Work Section in Appendix A, we understand that we could have done a better job at articulating the limitations of the analysis done. To rectify this, we update the Appendix with additional details on Limitations (Appendix A), outlining the synthetic nature of the current tasks as well as the possible complexities that can be faced when extrapolating to more complex domains (Appendix B).\n\nWe hope that the additional details and clarifications provided help paint a clearer picture of our work and resolve the reviewer’s concerns. We would also be happy to address any additional questions that the reviewer may have.",
" We thank the reviewer for their praise concerning the significance of our work, and their comments on the lack of implementation details. To address this, we have provided detailed implementation details as well as extensions and connections to real-world settings in separate common comments above. We have also revised the Appendix to reflect this update and provide code for our experiments along with the submission. We plan to open-source our code for the community in order to ease reproducibility and help conduct further analysis. We hope that the implementation details clarify the reviewer’s doubts, and we would be happy to address any additional questions that the reviewer may have.",
" **Pre-training and Fine-tuning Extensions**\n\nIt is also possible to extend this analysis to test for transfer ability of models by constructing a set of tasks to pre-train on and then another set of tasks to perform fine-tuning on, with the hypothesis that a well-specialized system should learn better or faster during fine-tuning. However, we would like to point out that this is not a simple extension since it also requires a clear notion of consistency/similarity between different rules. One could assume that training on certain rules and testing on completely unrelated rules is not of as much importance, and hence it requires a notion of similarity between tasks (eg. KL divergence between different mixture components as a notion of similarity; but it would require a move away from deterministic computations to noisy rules). Even after obtaining such a metric, it would provide another axis of study; i.e. how much similarity should be there between tasks for modularity to provide benefits. While an important question, we believe that it is a different research question from what we try to answer, which is the sub-optimality of modular systems in obtaining specialization.\n\n**Synthetic Language Task Extensions**\n\nOur setup can also be extended to testing of language models (LMs) by modeling the data distribution as some form of a mixture distribution in an underlying probabilistic context free grammar (pCFG) and analyzing whether current MoE systems specialize on the notion of experts in this setting. \n\n**Usage in Statistical Modeling and Neuroscience**\n\nMixture distributions and Mixture of Experts based models have been widely used in Machine Learning and are applied in a number of real-world scenarios. They are often used to model statistical populations with subpopulations where each subpopulation could be modeled by a specific density and the mixture weights would reflect the proportion of each subpopulation. In this regard, we can look at our analysis at trying to determine in a general case of mixture distributions, how well can an MoE model discover the subpopulations, how can we evaluate it and whether it leads to any benefits in terms of performance.\n\nIn the recurrent domain, connections of the proposed data-distribution and modeling assumption can be made with switching linear dynamical systems (sLDS) which have been shown to be widely successful in modeling non-stationary interactions between high-dimensional neural populations (Fox et. al 2008, Fox et. al 2010, Wulsin et. al 2013, Glaser et. al 2020). Our recurrent-based data is reflective of the modeling assumptions in sLDS and our RNN models can be seen as an implementation of a flexible mixture-of-experts based system in this domain, however without incorporating the bayesian or stochastic perspective which is an important next step as outlined in Appendix A. Since such works rely on learning to discover low-dimensional structure in neurons through mixtures, we believe that our analysis would benefit this direction of research too by quantifying the extent to which an expert orients with a subpopulation.\n\n* Nonparametric Bayesian Learning of Switching Linear Dynamical Systems; Fox et. al 2008\n* Bayesian Nonparametric Inference of Switching Dynamic Linear Models; Fox et. al 2010\n* Parsing Epileptic Events Using a Markov Switching Process for Correlated Time Series; Wulsin et. al 2013, \n* Recurrent Switching Dynamical Systems Models for Multiple Interacting Neural Populations; Glaser et. 
al 2020\n\n*Final Comments*\n\nWe believe that the extensions that the reviewers have suggested (Synthetic Language tasks, Transfer ability, etc.) require careful consideration in the data-setup and would lead to stand-alone contributions in their own sense to answer questions that are different (but related) from the questions asked in this work. We are happy to incorporate discussions into these extensions in our Appendices A and B as important future work. In these sections, we also discuss limitations of our work and additional considerations that researchers would have to take into account when designing extensions along the provided directions.",
" We provide a detailed discussion about the impact of our analysis to real-world domains as well as additional considerations for researchers to take into account when considering the different real-world extensions proposed here.\n\n**Understanding Large MoE Models**\n\nMoE based models have also been shown to be quite successful in large-scale domains (Fedus et. al 2021, Shazeer et. al 2017, Lepikhin et. al 2020, Zuo et. al 2021, Wang et. al 2022). However, it is not clear whether they only offer ease of optimization or also benefits in performance through some notion of specialization. We believe it is an important research question to understand if their performance gains are linked to specialization, and if they are, how far are we from perfect specialization and how to reach there. To this end, we believe that our metrics can provide concrete quantitative assessment of the level of specialization obtained, and we believe that improving the capacity for specialization in our settings would extrapolate to more complex domains too. There are already some partial works that try to address the problems that we quantify; for example, the switch transformer uses a load-balancing term to prevent collapse. Certain works (Zuo et. al 2021, Wang et. al 2022) also show that context dependent routing often doesn’t provide additional benefits over random routing and one possible reason for this could be that context dependent routing is often severely sub-optimal in obtaining specialization as seen in our experiments.\n\n* Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity; Fedus et. al 2021\n* Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer; Shazeer et. al 2017\n* GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding; Lepikhin et. al 2020\n* Taming Sparsely Activated Transformer with Stochastic Experts; Zuo et. al 2021\n* AdaMix: Mixture-of-Adapter for Parameter-Efficient Tuning of Large Language Models; Wang et. al 2022)\n\nIn sum, consider our synthetic task setup along with evaluation metrics as an integrated tool for model architecture evaluation, rather than toy tasks. We think such architecture evaluation will be important to develop in the future, as networks start to exploit more modular structure. We consider our work as a systematic contribution toward this, helping to go beyond trial-and -error network design.\n\n**Further Comments About Extensions to Real-World Settings**\n\nThe reason we consider synthetic settings is to have a very clear definition of specialization, that is, there is only one criteria which should drive specialization in our experiments and that is the rule context c. However, in more complex domains and multi-task settings, it is not so clear anymore. For example, between CIFAR10 and ImageNet classification, specialization could be at the level of dataset (CIFAR10 vs ImageNet) or at the level of object types (living vs non-living objects or ground vs water vs sky objects) or even at the lower level details (like presence of features like eyes, wings, wheels, etc.). Another example is Multilingual Language Modeling, where the notion of specialization could be tied to individual languages, or to different language families.\n\nEven though the level of specialization is unclear in such complex domains, a possible way forward is taking a handful of notions of specialization and testing whether any of them leads to better performance in MoE models (eg. 
whether specialization at the level of language leads to better multilingual language modeling metrics in large MoE network styled as GT-Modular). That is, through GT-Modular and Modular-op styled MoE models, we can at least now test whether a designed notion of specialization is good for the task or not. Further, we can also extrapolate ways of improving specialization and reducing collapse from our synthetic domain to large-scale MoE systems which might lead not only to better performance but also more optimal sparse gating systems.",
" *Modular-op*: This model is quite similar to the Modular system, with the only difference being that $p_{i,m}$ is not obtained from each module’s computations but instead from a separate non-linear single-layered feed-forward network which gets $c_i$ as input and outputs a probability vector $p_i \\in \\Delta_R$ for each token $i$, i.e. the activation probability of each module for each token.\n\n*GT-Modular*: This model is also quite similar to the Modular system, with the only difference being that $p_{i,m} = 1$ if $m$ is equal to $c_i$, otherwise $p_{i,m} = 0$. Thus, there is a unique sparse one-to-one correspondence between rule context and module selection. It can also be thought of as a Modular-op model with the separate network being an identity mapping.\n\nFor our experiments, we ablate over the encoding dimension $d$ and the hidden size which defines the heads dimensionality of the model over the sets \\{(32, 128), (64,256), (128, 512), (256, 1024), (512, 2048)\\} and control for the number of parameters between the four different kinds of models considered.\n\nDetails about the task setup and model training can be found in Appendix E. We have also updated Appendix E with the above additional details about the models.\n\n**Recurrent Neural Network (RNN)**\n\nInput consists of a sequence of vectors $\\\\{v_i\\\\}^N_{i=1}$ where each vector $v_i$ is of dimensionality 32, as well as a set of rule contexts $\\\\{c_i\\\\}_{i=1}^N$ where $c_i \\in \\{0, …, R\\}$ with $R$ being the total number of rules. As in the MLP setup, we first use a single layered non-linear feed-forward network to independently encode each tuple $(v_i, c_i)$ to some latent space of dimension d. The encoded input then goes through a choice of model ranging from Monolithic, Modular, Modular-op and GT-Modular which gives an output in $\\mathbb{R}^d$ which is then fed to a single-layered non-linear feed-forward decoder network to give the final prediction $\\\\{ \\hat{y}_i \\\\}$ with $i=1..N$.\n\n*Monolithic*: This model consists of a single LSTM Cell that gets the encoded sequence as input and outputs a corresponding sequence of vectors in $\\mathbb{R}^d$.\n\n*Modular*: This model consists of R different LSTM Cells (modules) each of which gets the encoded sequence as input and outputs a corresponding activation score $p_{i,m}$ and prospective output $h_{i,m}$ for each token. The actual output of this modular system at each token can be understood as $\\sum_m p_{i,m} h_{i,m}$ which incorporates the output of each module in a soft manner as $p_{i,m}$ is obtained through a softmax. This output is then fed to a decoder, as in the other models.\n\n*Modular-op*: This model is quite similar to the Modular system, with the only difference being that $p_{i,m}$ is not obtained from each module’s computations but instead from a separate non-linear single-layered feed-forward network which gets $c_i$ as input and outputs a probability vector $p_i \\in \\Delta_R$ for each token $i$, i.e. the activation probability of each module for each token.\n\n*GT-Modular*: This model is also quite similar to the Modular system, with the only difference being that $p_{i,m} = 1$ if $m$ is equal to $c_i$, otherwise $p_{i,m} = 0$. Thus, there is a unique sparse one-to-one correspondence between rule context and module selection. 
It can also be thought of as a Modular-op model with the separate network being an identity mapping.\n\nFor our experiments, we ablate over the encoding dimension $d$ and the dimensionality that controls the hidden size of the RNN over the set \\{(32, 128), (64,256), (128, 512), (256, 1024), (512, 2048)\\} and control for the number of parameters between the four different kinds of models considered.\n\nDetails about the task setup and model training can be found in Appendix F. We have also updated Appendix F with the above additional details about the models.\n\nWe have also provided the code with the submission for ease of reproducibility and will be open-sourcing our code for the community to use.",
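Since the ablations above control for parameter count across the four model types, here is a minimal sketch of how such a matching could be computed for the RNN case. The LSTM parameter formula follows PyTorch's LSTMCell layout (weight_ih, weight_hh, and two bias vectors); the helper names and the brute-force search are our illustration, not the authors' exact code.

```python
def lstm_cell_params(input_dim, hidden):
    # PyTorch LSTMCell: weight_ih (4h x d), weight_hh (4h x h), two biases (4h each).
    return 4 * hidden * input_dim + 4 * hidden * hidden + 8 * hidden

def matched_modular_hidden(input_dim, mono_hidden, num_modules):
    """Smallest per-module hidden size whose num_modules-cell total meets
    the monolithic parameter budget (a sketch of the matching computation)."""
    target = lstm_cell_params(input_dim, mono_hidden)
    h = 1
    while num_modules * lstm_cell_params(input_dim, h) < target:
        h += 1
    return h

# e.g., 4 modules matching a monolithic LSTM with hidden size 256:
print(matched_modular_hidden(input_dim=128, mono_hidden=256, num_modules=4))
```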
" To address commonly asked questions about implementation details, we provide an overview of our framework below. In addition, we considerably improve the presentation of such details in the text with precise reference to Appendices to facilitate readability. A fluid description of this setup with a balance between details and readability is difficult to produce. We appreciate the comments and suggestions from all reviewers which considerably improves our contribution.\n\n**Multi-layer Perceptron (MLP)**\n\nInput consists of numbers $x_1 \\in \\mathbb{R}$ and $x_2 \\in \\mathbb{R}$ as well as the rule context $c \\in \\\\{0, …, R\\\\}$, with $R$ being the total number of rules. The model consists of two encoders $E_x$ and $E_c$ where $E_x$ maps $x_1$ and $x_2$ independently to $\\mathbb{R}^{d}$ and $E_c$ maps $c$ to $\\mathbb{R}^{d}$. Each of the encoders are implemented as non-linear neural networks with a single hidden layer. The encoded inputs are then concatenated together and fed to a model chosen from Monolithic, Modular, Modular-op and GT-Modular. The output of this model lies in $\\mathbb{R}^d$ and is fed to a non-linear decoder with a single hidden layer to provide the final prediction $\\hat{y}$.\n\n*Monolithic*: This model consists of a non-linear single layered neural network that gets the concatenation of the three encodings as input and outputs a vector in $\\mathbb{R}^d$.\n\n*Modular*: This model consists of R different non-linear single layered neural networks (modules), each of which gets the concatenation of the three encodings as input and outputs a corresponding activation score $p_m$ and prospective output $h_m$. The actual output of this modular system can be understood as $\\sum_m p_m h_m$ which incorporates the output of each module in a soft manner as $p_m$ is obtained through a softmax. This output is then fed to a decoder, as in the other models.\n\n*Modular-op*: This model is quite similar to the Modular system, with the only difference being that $p_m$ is not obtained from each module’s computations but instead from a separate non-linear network with one hidden layer which gets the encoding of c as input and outputs a probability vector $p \\in \\Delta_R$, i.e. the activation probability of each module.\n\n*GT-Modular*: This model is also quite similar to the Modular system, with the only difference being that $p_m = 1$ if m is equal to c, otherwise $p_m = 0$. Thus, there is a unique sparse one-to-one correspondence between rule context and module selection. It can also be thought of as a Modular-op model with the separate network being an identity mapping.\n\nFor our experiments, we ablate over the encoding dimension $d$ and the hidden layer size of the model over the set \\{(32, 128), (64,256), (128, 512), (256, 1024), (512, 2048)\\} and control for the number of parameters between the four different kinds of models considered.\n\nDetails about the task setup and model training can be found in Appendix D. We have also updated Appendix D with the above additional details about the models.\n\n**Multi-head Attention (MHA)**\n\nInput consists of a set of vectors $\\\\{v_i\\\\}^N_{i=1}$ where each vector $v_i$ is of dimensionality $4R$ for Search-Version 1 and $6R$ for Search-Version 2 (Appendix E) as well as a set of rule contexts $\\\\{c_i\\\\}_{i=1}^N$ where $c_i \\in \\{0, …, R\\}$ with $R$ being the total number of rules. 
As in the MLP setup, we first use a single layered non-linear feed-forward network to independently encode each tuple $(v_i, c_i)$ to some latent space of dimension $d$. The encoded input then goes through a choice of model ranging from Monolithic, Modular, Modular-op and GT-Modular which gives an output in $\\mathbb{R}^d$ which is then fed to a single-layered non-linear feed-forward decoder network to give the final prediction $\\\\{ \\hat{y}_i \\\\}$ with $i=1..N$.\n\n*Monolithic*: This model consists of a single Multi-Head Attention block with $2R$ heads that gets the encoded input set and outputs a corresponding set of vectors in $\\mathbb{R}^d$. We keep the number of heads as $2R$ to allow for learning for all rules, as each rule requires 2 heads.\n\n*Modular*: This model consists of $R$ different Multi-Head Attention blocks (modules) with 2 heads each, each of which gets the encoded input set and outputs a corresponding activation score $p_{i,m}$ and prospective output $h_{i,m}$ for the $i^{th}$ token. The actual output of this modular system at each token can be understood as $\\sum_m p_{i,m} h_{i,m}$ which incorporates the output of each module in a soft manner as $p_{i,m}$ is obtained through a softmax. This output is then fed to a decoder, as in the other models.",
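As a compact illustration of how the four variants above differ only in where the activation probabilities come from before the soft combination $\sum_m p_m h_m$, a hedged sketch follows; `gate_net`, `c_enc`, and `c_id` are hypothetical names for Modular-op's separate network, the context encoding, and the integer rule context, and this is not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def combine_modules(h, scores, variant, c_id=None, c_enc=None, gate_net=None):
    """h: (R, d) prospective module outputs h_m; scores: (R,) raw activation
    scores produced by the modules themselves. The Monolithic variant is a
    single network with no gating, so it is not handled here."""
    R = h.shape[0]
    if variant == "modular":          # soft selection from the modules' own scores
        p = F.softmax(scores, dim=0)
    elif variant == "modular-op":     # selection driven only by the context encoding
        p = F.softmax(gate_net(c_enc), dim=0)
    elif variant == "gt-modular":     # hard one-to-one rule-to-module selection
        p = F.one_hot(torch.tensor(c_id), num_classes=R).float()
    else:
        raise ValueError(variant)
    return (p.unsqueeze(1) * h).sum(dim=0)   # sum_m p_m h_m
```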
" The submission is a comprehensive rethinking and assessment of the research in modular network. It develops a series of benchmarks and metrics to evaluate the benefit of existing works using modular architecture. Specifically, It performs experiments on four kinds of model corresponding to different levels of specialization and obtains some empirical findings about the design of modular network. Strengths:\n\n1. The submission is a pioneer in systematically evaluating the performance of modular network in a unified framework.\n2. Modular network is a heated topic attracting wide interest, and the work is of great significance to the community.\n\nWeaknesses:\n\nNone It seems that the description is very high-level and the implementation detail is omitted. For example, in modular setting, how is the confidence score computed? Similarly, in modular-op setting, how to decide which module to evoke (we only know it is decided on $\\mathbf{c}$)? limitations are non-applicable / adequately addressed.",
" This paper carefully and thoroughly examines recent trends around modularity in neural architectures, with a special focus on recent sparse mixture-of-experts (MoE) models through construction of synthetic “rule-based” tasks. These tasks specifically target both the learning and generalization potential of these architectures, showing how various architectural inductive biases perform in the presence of multiple “rules/tasks” (different pathways in an MoE for example), and in-distribution/out-of-distribution data. Using the proposed rule-based data generation procedure and evaluating three core architectures (MLPs, self-attention, and RNNs with and without various modular architectural tweaks), the results show the impact of modular-constrained specialization (it helps!), and a small gap between “modular” and “monolithic” systems trained end-to-end (we need to do better at training modular systems!). The strengths of this paper are in its clarity and simplicity. It sets out to rigorously test the abilities of sparse, modular architectures vs. the “monolithic” architectural equivalents — what can these modular architectures learn that monolithic architectures cannot? In an ideal world, are modular architectures better?\n\nBeing able to construct a simple process for generating data and evaluating these hypotheses is a strong contribution of this work; going further to test the various types of generalization, collapse modes, and carefully probe the “end-to-end” modular learning vs. an “oracle” learning are just additional strengths that really help contextualize what it happening.\n\nThe weakness of this paper is that there’s little analysis of the existing sparse-MoE models that are trained on tremendous amounts of natural data (e.g., Switch-Transformers, MoE Language Models). It’d be interesting to see if you can construct synthetic language tasks that capture the same type of modularity and show that even when fine-tuning (or zero/few-shot finetuning these existing base models), the existing failure modes still appear! - Nit: Could the title be a bit more descriptive? I understand the desire for something short and punchy, but this paper does a lot of really cool stuff that should be expressed in the title?\n- Perhaps something like “Evaluating Learning & Generalization of Modular Inductive Biases in Neural Architectures through Rigorous Control Tasks”? I believe this paper could do a better job of stating the limitations with respect to the fully synthetic nature of the proposed control tasks. These are absolutely useful; but there will always be a faction of scientists who want to see how real data (especially at scale) interacts with the story presented in this work!\n",
" The paper studies the benefits of MoE like modular network, in terms of many metrics, e.g., in/out of distribution performance, collapse-avg/worst, alignment, adaptation and inverse mutual information. The authors generate data, rules and tasks by synthetic neural process, and study monolithic, modular, modular-op model architectures against the ground-truth modular structure. They point out that an architecture with modular prior is not enough to perfectly learn the ground truth. \\+ Whether modularity architecture helps multi-task learning is studied from a well-defined perspective.\n\n\\+ The data process and the developed metrics sound reasonable.\n\n\\+ The paper is well-organized and easy to follow.\n\n\\- It's unknown how the proposed data process can impact on real-world rules & data setting.\n\n\\- A more important and meaningful metric, in addition to the proposed ones, could be transfer ability, or compositional generalization ability on new task, which is thought to be a key advantage of sparse and modularity design. The conclusion of the paper is less informative without this part.\n\n\\- MoE structure is only one of the implementations of modular architecture, and the title is somehow ambiguous.\n\n 1. It seems that modular-op usually comes with better performance in all metrics defined by the authors, but how general this conclusion can be? This is an important conclusion, which probably can guide us to design task level, task and input level, or input level (e.g., token level) gating. My major concern is that, the generating process of the synthetic data may not necessarily match any real-world case. \n\n2. The conclusion of paper seems a little bit straight-forward, considering the synthetic data-generation process. It would be more informative If the authors can further study transfer ability on modular structures. \n\n3. In the experiment, is the capacity of a monolithic model the same as modular & modular-op & gt-modular? N/A",
" This paper presents an elegantly designed experiment to evaluate the effectiveness of a number of modular architectures in various settings. The topic is an important one, in my opinion, since modular architectures have the potential to address a number of key issues in ML, including compositionality and continual learning. The question the authors address is whether backpropagation can discover the structure in data that is inherently suited to modular architectures (because it is synthetic and designed to be) and learn to specialise in modules accordingly. By comparing with monolithic architectures at one end of the spectrum and with modular architectures forced to specialise (thanks to oracular knowledge of the data) at the other, they are able to assess both a) the extent to which perfect modularisation improves performance, and b) how well modularity can be learned by backpropagation. The results suggest that - within the narrow setting of the experiment - a) modularity does improve performance on sufficiently complex data, but b) backpropagation struggles to discover the underlying structure in the data and to learn to specialise accordingly. The paper concerns the important topic of modularity and presents a well thought out experiment addressing an interesting question. The results are informative and interesting. Overall, this is good science, and the sort of thing we should see more of at the big ML venues. The main weakness of the paper, I feel, is that the synthetic data is very simple - just two real-valued variables, plus an integer context variable, parameterised by just two real values. Do we expect the results to apply with more complex datasets and large architectures? I’m not sure. Perhaps it’s easier for backpropagation to discover structure in the data at scale than in a small, simple dataset.\n\nThe paper isn't very explicit about the model architectures used in the experiment, either in the main paper or in the appendix. I assumed they would be very small, given the low-dimensionality of the synthetic data. Delving into the code (thanks for providing this), I see the model architectures are a bit more nuanced than I expected. I suspected this is to help ensure the modular and monolithic versions had the same numbr of parameters? But I also see that the modular architecture has a softmax in there. Does this realise some form of competition between the modules? If so, this isn't mentioned in the main paper. What do the authors mean by a “mixture experts distribution”? I am only familiar with MoE in the context of architectures, not distributions (and a quick search on Google backs this up).\n\nWhat is “I” in equations 5, 8, and 13? Why not just make it 1?\n\nSee also questions above. See above"
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"bKGw0-hMRe8",
"TN4WSe8uXl8",
"j5ax1FNU6Ce",
"WXsy9qgG71J",
"RGh9wPKxDg2",
"j5ax1FNU6Ce",
"Eq_-dEyaJv6",
"dGF0yxJV5q-",
"eyhEivyrbR",
"mvCLvBUsdHz",
"ALqfTKrmN2D",
"lGSjptQ_p7",
"5K7GuKyQmws",
"IEF5IW6-XNk",
"nips_2022_3-3XMModtrx",
"PMssHUCyC5B",
"nips_2022_3-3XMModtrx",
"nips_2022_3-3XMModtrx",
"nips_2022_3-3XMModtrx",
"nips_2022_3-3XMModtrx",
"nips_2022_3-3XMModtrx"
] |
nips_2022_68EuccCtO5i | Differentially Private Model Compression | Recent papers have shown that large pre-trained language models (LLMs) such as BERT and GPT-2 can be fine-tuned on private data to achieve performance comparable to non-private models for many downstream Natural Language Processing (NLP) tasks while simultaneously guaranteeing differential privacy. The inference cost of these models -- which consist of hundreds of millions of parameters -- however, can be prohibitively large. Hence, often in practice, LLMs are compressed before they are deployed in specific applications. In this paper, we initiate the study of differentially private model compression and propose frameworks for achieving 50% sparsity levels while maintaining nearly full performance. We demonstrate these ideas on standard GLUE benchmarks using BERT models, setting benchmarks for future research on this topic. | Accept | This work proposes and empirically evaluates algorithms for compressing and fine-tuning a large model for a downstream task, while satisfying DP for the downstream task training data. The setup is the following: we have a large pre-trained language model such as BERT. We would like to fine-tune it for a task using a dataset D, as well as compress it to a smaller model. The paper studies algorithms that are DP with respect to D and do fine-tuning+compression. The authors propose and evaluate different strategies for this problem and compare the privacy-utility tradeoffs.
The reviewers found the empirical evaluation to be thorough. Some of the other concerns raised by the reviewers have been addressed to my (and in most cases, their) satisfaction.
I think the problem studied by the paper is timely and important. I view the paper largely as a solid empirical study of natural algorithms for this problem. While the paper can be improved as discussed in the reviews and rebuttal, I believe it brings attention to an important problem and makes solid progress on it. I would therefore recommend acceptance. | train | [
"jJfhpv_pf0X",
"JHTD5yV_iHg",
"yMNwKVRfgq",
"ZWGZAEa9xcF",
"76WUyEhFgZC",
"AUJlrJ2HuP",
"4IsA7kinovg",
"yF8J6SFbPTl",
"nJAkJ_m5l9w",
"wEf-N-k5xC6",
"JIXOX-cBW_is",
"uGejnq1vRox",
"4S815jBCJjY",
"W3sSdPAmGJ",
"e3fUoSTkCwL",
"4J6vRMl4iV",
"L7u1VHSTEqO",
"SaGpF500-N8",
"H1vv3AagDt",
"jUYHYJk5N9L",
"opxcBLLLhC3",
"QePr2azemVc",
"30wS-tc1Ic6"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank you for participating in the discussions. We will include experiments with epsilon = 1 in the future revisions of the paper. We will add more discussions on pros/cons of our approach DistillBERT (or any pre-trained compressed models) and elaborate on where our work is applicable. We appreciate your time and feedback on our paper.",
" Thanks for your time and involvement in the discussions. Regarding Q2, we follow the methodology in the literature and evaluate the results in GLUE using the DEV set.\n\n",
" The authors have satisfactorily addressed all my comments, thanks for that. While I agree with their position on how this work brings light to the topic of model compression, after reading other reviews and comments, I also stand by my earlier comment of limited technical novelty. Therefore, I will maintain my score as it is. ",
" Q1: It is quite subjective to say if a performance drop of1.7% is small or negligible. But, anyway, the paper is not on the focus of performance. It would be nice to state something in a objective manner.\n\nQ2: Thanks for your explaination. To better contexualize your result, it would be nice if you could also show the raw BERT/DistilBERT result.\nHow about the comment \"it is unclear whether the results in GLUE are evaluated in DEV or TEST set\"\n\nQ3: Thanks, the counter intuitive result is indeed interesting.\n\n",
" 1. Epsilon\n- I agreed that this epsilon problem is not only the problem of this paper. It is the problem of DP itself.\n- However, what the authors claimed in this rebuttal does not make sense.\n- As the authors said, theoretical guarantee is important. However, what I want to say is that whether the theoretical guarantee with epsilon = 4.25 is meaningful? With epsilon = 4.25, what we can theoretically guarantee is that the upper bound of the probability difference ratio with and without one sample is exp(4.25) = 70. I do not think this is a useful guarantee at all.\n- In addition, as you said, the membership inference attack metrics are specific to datasets. In that case, with epsilon = 8, the claim that membership inference attack metrics are very small is also data specific claim. \n- The logic that the authors provide in this rebuttal has many counter-examples; thus, those responses do not make me convince.\n\n2. DistillBERT\n- I understand that there are some advantages of the proposed method in comparison to DistillBERT.\n- However, those are not well explained in both rebuttals and current manuscript. Please revise the manuscript that can clearly compare the advantages and disadvantages of the proposed method in comparison to the DistillBERT.\n\nI am going to increase my score to 5 but it is definitely the maximum score that I can provide.\nIf this paper is accepted, it would be great if the authors can address all the comments about the epsilon, DistillBERT and various compression ratios.\n\nThank you for the hard working on this rebuttal.",
" Thanks for your time and involvement in the discussions. We appreciate your questions.\n\n1. Epsilon. We agree with you that smaller values of epsilon are better, and as a community we are striving hard to train state of the art deep learning models with epsilon << 1. We are not there yet, and we will continue to push the frontiers. Moreover, what is a good value of epsilon is perennial topic of discussion ever since DP models started to be deployed, so, we will focus on something more specific here.\n\nWe believe that some additional comments regarding worst case analysis and membership inference attacks may help. We want to reword our responses in the rebuttal concerning this comment.\n\nThe current theoretical analysis shows that worst case privacy loss is around 4. This does not mean that even theoretically privacy loss is 4; it is an upper bound. It could be far lower than that but we are unable to prove it due to various mathematical reasons. On the other hand, some of works on membership inference attacks show that models trained with epsilon = 8 offer significant privacy protection, and adversary's chances of success is as low as random guessing, which translates to very small values of epsilon. \n\nThus, our reported epsilon = 4 is only an upper bound. In our experience having an upper bound which has a mathematical proof is crucial in many deployment scenarios as empirical tools (such as membership inference attacks) are specific to datasets. Thus, to us, getting rid of reporting (large) epsilon values which come with mathematical proofs and completely replacing them with empirical hypothesis testing style experiments can lead to privacy solutions that are not well grounded and come with no concrete guarantees.\n\n2. DistillBERT. We thank you for your comments. The cost of pre-training can be prohibitively large even if it is only one time. Moreover, it is not merely about cost of pre-training, but also also about infrastructure, pretraining datasets, etc. Our algorithms cater to scenarios where pre-training compressed models is hard. We believe that this is an important scenario to study.\n\n3. Compression Rates. We will try to do these experiments for future revisions of the paper.",
" Thanks for your time and involvement in the discussions. We appreciate your questions. We are happy to answer your questions.\n\n1. The pruning (steps 4,5,6 in Algorithm 2, steps 4 and 5 in Algorithm 3) do not incur any further privacy cost. To understand this better, let us recall the basics of analysis of DPSGD (which we outlined in the paper and also above in general reply to all reviewers). In DPSGD analysis, it is crucial (to apply composition theorems) that all the model weights are released as public information after every iteration of the algorithm. In other words, after iteration t, the weights of model W_t are public information. Further, we know that differential privacy satisfies post processing property. That is, if the output of an algorithm is DP, then any operation on the output without looking at the data will preserve DP with exactly same privacy cost. Therefore, our proof that pruning steps (steps 4,5,6 in Algorithm 2, steps 4 and 5 in Algorithm 3) do not incur any additional privacy loss simply follows from the post processing property of DP. We will add a tiny theorem regarding this for clarity in the future versions. \nOn the other hand, if your question is on can we show any new privacy amplification theorems because we only retain a subset of weights is a very nice theoretical question. We leave it as an open question.\n\n2. As we write in lines (253-254) and (276-277), we split the privacy budget equally among all the iterations of the DPSGD algorithm in Algorithms 2 and 3. \n\n3. Finally, we did add a small discussion regarding variation in our epsilon values in the general rebuttal. We repeat it here for convenience. The reason epsilon values vary slightly across different GLUE tasks is due to the choice of \\delta, which controls the failure probability of not satisfying DP guarantee. As we set \\delta = 1/10N, our epsilon values also change as the privacy loss curve is a function of both epsilon, delta, and fixing one of them uniquely determines the other.",
" Thank you for the rebuttals. I carefully read other reviews and the general responses from the authors as well.\n\nFirst, thank you for providing the experimental results with epsilon = 1.0. As I expected, the performance drops from epsilon = 4.25 to epsilon = 1.0 is significant. \nAlso, I think the justification in comparison to DistillBERT is still not valid. With enough computational costs (one time only), I am not sure why the proposed method is superior to DIstillBERT. In that point of view, I am standing to my original score (4). But I will not be upset if the AC recommends acceptance for this paper.\n\n1. Epsilon\n- I understand that other works also use the large epsilon values.\n- However, it cannot be a backup that this work can also use large epsilon values.\n- The main advantage of DP is that DP can guarantee something theoretically. However, if that guarantee is somewhat useless (like epsilon = exp(4.25)), there is no point to using DP at all. \n- As you said, it would be better to use membership inference attack as the privacy metrics instead of epsilon in DP. \n- But thank you for providing the results with epsilon = exp(1). And the performance drop from epsilon = exp(4.25) seems significant.\n\n2. DistillBERT\n- I understand your points on DistillBERT for general compression ratios.\n- But I think my point is still the same that we only need \"one-time\" training for any size of DistillBERT. \n- In that case, do the computational costs matter a lot?\n\n3. Compression rates\n- The main reason that I want to see the results with various compression rates is to check whether this method can be generalized to any compression sizes.\n- If this can be shown, the authors can claim the advantages from DistillBERT better.- Unfortunately, this paper does not have this result.",
" I have two follow-up questions to the comments about privacy guarantees of DPIMP methods.\n\n1) In the DPIMP methods, while the fine-tuning steps are performed with DP, there is are additional steps corresponding to pruning. If the pruning steps weren't present, typical composition calculations would certify the epsilon value of a resulting model obtained with DP-SGD. However, it is not clear, theoretically, how the epsilon value changes when the pruning steps are introduced. \n\n2) In Algorithms 2 and 3, the list of inputs does not contain the epsilon parameter based on a desired privacy level. So then, experimentally how are tables 4 and 5 realized with epsilon values upper bounded by 4.25?\n\nClarifications on the other points I raised are satisfactory. ",
" We thank the reviewer for their participation in the discussion. We appreciate your time, comments, and valuable suggestions.\n\nIn hindsight, we agree with you that we should not have used the words teacher and student model but instead used Full-model and compressed model. This would make our paper more readable, and we intend to do that in the future versions. We would also include the formal problem statement. Thanks for your suggestions, and helping our work be more readable to the general audience. \n\nFinally, you write that there are still some misconceptions left after the second part of the rebuttal. We would be happy to clarify any questions you have.",
" I thank the authors for a detailed response. While I'm mostly satisfied with part 1 of their rebuttal, where they elaborated on the sections that were unclear to me, there are still some misconceptions left after the second part of the rebuttal. I additionally agree with most points addressed in the general response to authors. One thing that stands out to me still (some may claim that this is purely preferential however) is the argument on PATE being brought up because of the student-teacher terminology. I suggest if the authors strongly believe that their work should NOT be compared to PATE, then they should modify the terminology to alleviate any chance of confusion. Because evidently, as almost all reviewers picked up on it, so will the general audience.\n\nI just want to clarify that I am aware on the epsilon values used in other works, my question was specifically on the < 4.25 (as in where does the .25 come from).\n\nI fully agree that there are indeed certain interesting results presented in this paper and it is technically sound, however, the methodological contributions are still rather limited in my view. This being said, I believe that the main purpose of such works is to encourage the broader scientific community to participate in research not directly related to their own topic, which this paper tries to achieve. As a result, I am happy to marginally increase my score (from 4 to 5).",
" In this comment, we want to give updates about new experiments, add some more details to our problem description and our frameworks.\n\nNew experiments\n\nWe repeated all of our experiments with a smaller privacy budget of epsilon = 1 and presented the results in Appendix B in the revised version. The observations from the original experiments continue to hold in this regime. In particular, the relative drop in accuracy between large models and compressed models using our frameworks remains roughly the same. \n\nModel Compression, Ensemble Learning, PATE, and overloading of words student and teacher models.\n\nThere seems to be some misunderstanding regarding the model compression problem considered in this work, ensemble learning, and PATE framework. This confusion has resulted in drawing some unfair comparisons between our results and the PATE framework; to us, it appears, this confusion is primarily due to usage of terms “teacher” and “student” models in both compression literature and PATE framework. In a nutshell, we are not considering the ensemble learning framework, and hence there is no necessity in our frameworks to do private aggregation (PATE). Our focus was to study the most dominant model compression algorithms used in practice (in the non-private setting) and how to incorporate DP constraints. Although we addressed this in lines 68-80, here we would like to further elaborate on this. To make everything precise, let us formally state our problem, which we illustrated in Figure 1. We will include this formal problem statement in the full version for more clarity.\n \nProblem Statement: Input to our problem is privacy parameters (\\epsilon, \\delta), a large model M_A with initial model parameters \\theta_A(0), a private sensitive dataset D from a downstream task which we want to solve, and a compression factor \\gamma. Let |M_A| denote the parameter count of M_A. Our goal is to produce a compressed model M_B satisfying two constraints: \n1) |M_B| \\leq \\gamma \\cdot |M_A|. \n2) The final weights of model M_B (denoted by \\theta_B(t)) should be (\\epsilon, \\delta)-differentially private with respect to dataset D. A compression algorithm can make use of M_A in an arbitrary way as long the final weights of model M_B (\\theta_B(t)) are differentially private with respect to dataset D.\n\nAccuracy Comparisons: We measure the quality of compression algorithms by comparing the accuracy obtained by M_B satisfying (\\epsilon, \\delta)-DP on downstream task D to the accuracy obtained by M_A satisfying (\\epsilon, \\delta)-DP on downstream task D. This allows us to quantify how much performance one loses in private training due to model compression. Note that we are not comparing against the performance of non-private models. We would like to find compression algorithms where differentially private M_B has the nearly the same performance of differentially private M_A.\n\nNo New Assumptions: In this work, we assume that M_A is BERT with initial parameters \\theta_A(0) obtained by pretraining. We do not have access to pretraining dataset and we do not enforce DP on the pretraining step. The only dataset available for compression algorithms are sensitive datasets D and nothing else. This is exactly the model assumed in almost all DP-NLP papers such as [34, 74, 76], and we are not making any new assumptions compared to the previous works. Our choice of datasets from GLUE benchmarks are also inspired by the recent works in DP-NLP [34, 74, 76]. 
Given this problem statement, our results show performance of two compression algorithms DPKD and DPIMP while satisfying (\\epsilon, \\delta)-DP for \\epsilon in the range of [4, 4.25], and \\delta = 1/10N, where N is the size of the dataset, when compression factor \\gamma = 1/2. All our results satisfy DP guarantees.\n\nSlight Variation in Epsilon Values: The reason epsilon values vary slightly across different GLUE tasks is due to choice of \\delta, which controls the failure probability of not satisfying DP guarantee. As we set \\delta = 1/10N, our epsilon values also change as the privacy loss curve is a function of both epsilon, delta, and fixing one of them uniquely determines the other; See [23] more details.",
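The epsilon-delta coupling can be made concrete with the standard RDP-to-(epsilon, delta) conversion of Mironov (2017). The sketch below uses the plain (unsubsampled) Gaussian mechanism for brevity — real DP-SGD accounting additionally uses subsampling amplification — and the sigma/steps values are illustrative only; it shows how fixing the noise and iteration count while varying delta = 1/(10N) shifts the reported epsilon.

```python
import math

def eps_from_rdp(sigma, steps, delta, max_alpha=64):
    """eps(delta) = min over orders alpha of rdp(alpha) + log(1/delta)/(alpha-1),
    with rdp(alpha) = steps * alpha / (2 sigma^2) for the Gaussian mechanism."""
    best = float("inf")
    for alpha in range(2, max_alpha):
        rdp = steps * alpha / (2 * sigma ** 2)
        best = min(best, rdp + math.log(1 / delta) / (alpha - 1))
    return best

# Same mechanism, different dataset sizes N => different delta => different epsilon.
for n in [10_000, 100_000, 400_000]:
    print(n, round(eps_from_rdp(sigma=20.0, steps=100, delta=1 / (10 * n)), 3))
```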
" Now we are ready to state why we did not consider the PATE framework to solve our problem. We give 3 main reasons along 3 different axes.\n\n1) Technical: A natural way of using the PATE framework in our setting would be to partition the sensitive dataset D into say 10 teacher models M_A1, M_A2… M_A10, where the architecture of each teacher model is M_A. We fine-tune each teacher model M_Ai on the portion of the sensitive data set D. The student model M_B would aggregate the predictions of the teacher models to learn. However, to do private aggregation of the teacher ensemble (hence the name PATE), the student model would require access to a non-private dataset D’ which has the same distribution as D [47, 48]. In other words, if D is a MNLI dataset, we would require additional data which has a similar distribution to MNLI dataset. We do not have access to such a non-private dataset D’ in our problem statement.\nAlternatively, we can use the test set of D (i.e, test set of MNLI) as the public data set for student training. However, this would lead to unfair comparisons with previous approaches and comparing our results to the literature. Doing so also has the following drawback noted in the original paper [47]: “Note that this may increase the variance of our test set accuracy measurements, when compared to those computed over the entire test data.”\n\n2) No Published Work on Solving GLUE via PATE: Ensembling learning is a method for training multiple ML models (often using different algorithms) to obtain better predictive performance, and PATE provides a framework for DP aggregating these predictions to train a new model. Thus, PATE is a framework for training any ML model, and should not be confused as a model compression technique akin to pruning. It is true that ensemble learning framework can be applied for model compression but that is not the focus of this paper; see next point. The performance of PATE for training large deep learning models lags behind the training via DPSGD, and is well known in the community. To provide evidence, SOTA results in NLP and image classification from the past year all use DPSGD [34, 74, 76, 12, 42]. \nWe do not know of any paper that studies PATE framework for solving GLUE benchmarks using Large Language Models (LLMs). Our DPIMP framework achieves average accuracy of 83.5% for GLUE benchmark using half as many parameters as BERT. We do not know of any published work that achieves an accuracy of greater than 80% on GLUE benchmark using PATE framework using any BERT model.\n\n3) Model Compression in Private vs Non-private World: The focus of this work was to study the most widely used compression algorithms in the non-private world based on our experience and understanding, and make them differentially private and evaluate their performance. Thus, we chose Iterative Magnitude Pruning and SGD with KD objective function. Based on our experience and understanding these two most widely used algorithms for model compression; see also this survey [43]. Also observe that ensemble learning is not a dominant method of model compression in the non-private world, as evident from these highly cited papers [51, 31].\n\nHaving said that, we are not making any claims that our work is exhaustive. It is true that PATE can be applied for model compression, but there are many other techniques such as quantization, data augmentation, etc., that can also be applied; see for example this excellent survey on the topic [43]. 
However, doing such an exhaustive comparison is beyond the scope of a single paper. We believe that part of the reason why some of the DP papers think of PATE as a natural framework for model compression is the nomenclature; both the compression literature and PATE use the terms teacher and student models. But it is important to note that they do not have the same semantic meaning. Moreover, the compression frameworks studied in this paper are themselves important enough to deserve full attention, and we wanted to be as comprehensive as we could in the choices we made, including, as reviewer pc64 said, pointing out techniques that do not work. \n\nWe hope that our paper paves the way for more research in DP model compression, exploring more algorithms and new theoretical analyses, perhaps even improving our results. Given the significance of this problem both in theory and practice, it would benefit the entire community. ",
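For readers unfamiliar with PATE, the private aggregation step discussed above can be sketched as a noisy arg-max over teacher votes. This is a hedged illustration of the framework being ruled out, not this paper's method; the noise parameter gamma and the vote values are assumptions.

```python
import numpy as np

def pate_noisy_max(teacher_votes, num_classes, gamma=0.05, rng=np.random.default_rng(0)):
    # PATE aggregation [47]: add Laplace(1/gamma) noise to the per-class vote
    # counts of the teacher ensemble, then release only the arg-max label.
    counts = np.bincount(teacher_votes, minlength=num_classes).astype(float)
    counts += rng.laplace(scale=1.0 / gamma, size=num_classes)
    return int(np.argmax(counts))

# e.g., 10 teachers voting over 3 classes for one example from the (assumed)
# public student dataset D' -- exactly the dataset our setting lacks.
label = pate_noisy_max(np.array([0, 0, 1, 0, 2, 0, 1, 0, 0, 1]), num_classes=3)
```

The student only ever sees these noisy labels on D', which is why PATE fundamentally requires a public dataset with the same distribution as D.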
" We thank the reviewer for positive feedback, comments, and questions. We address the specific questions asked by you.\n\nQ: some statements are slightly over-claimed.\n\nA: We are sorry that you think we overclaimed our results. We respect your opinion and would be happy to reword our sentences according to your suggestions. For our future reference, we would like to understand which part of the sentence you felt is overclaimed. First note that we use the word sparsity instead of inference latency in abstract and our contributions. We did not claim that our models have 50% shorter inference time nowhere in our submission. Moreover, in lines 228-231, we discuss sparsity and inference latency, and we explicitly point out that sparisity does not correlate with inference latency. However, many works in model compression literature, including the recent award winning works of [20, 11] and seminal work of [32], use sparsity as the measure of model compression. Further, their motivation to study sparsity is also to minimize inference latency and energy costs. To our understanding, a part of the reason to use sparsity as a measure of model compression is that measuring inference latency of models depends on the hardware configurations and it is not easy. Sparsity gives a more clear and abstract measure of model complexity, besides being interesting on its own as a tool for measuring function representations. Second, your comment seems to imply that our compressed models performance is significantly below the performance of large language models, and hence we should not use the word “nearly”. We are happy to replace nearly with 1.7%, as our DPIMP obtains sparsity of 50% and is only 1.7% below the performance of full BERT (Table 5).\n\nWould you agree if we replace our original sentence with the following: we initiate the study of differentially private model compression and propose frameworks for achieving 50% sparsity levels while guaranteeing that performance drop compared to full model is small; for some of our algorithms average performance drop on GLUE benchmark is 1.7%.\n\nQ: The paper seems to report relatively weak results in GLUE benchmark.\n\nA: Thanks for your question. We think there is a small misunderstanding in reading our tables. We are comparing performance of full BERT models trained with exactly the same privacy parameters to that of compressed models with 50% sparsity. We are not comparing performance of our compressed models to non-private BERT models, which is what you are referring to in those citations. DP guarantee already comes with some performance drop, which is well documented in prior works. In this work, we are studying the relative performance drop of private models when we impose model sparsity constraints. We hope this answers your question. \n\nQ. Section 3.4 and 3.5 do not introduce anything beneficial. \n\nA: Thanks for the question. In Section 3.4 we propose zero-shot initialization strategies, and Table 2 shows that this gives nearly 18% boost to average performance compared to random initialization (Table 1). In section 3.5 we address the question, can larger models, which are known to achieve better privacy-vs-utility tradeoffs [34, 74, 12, 42], be better teachers. Unfortunately, the answer turns out to be no, but we found this counter intuitive. 
While it is fair to say that Section 3.5 does not improve the results over Section 3.4, it brings to light an interesting phenomenon: larger models, which achieve better utility, are not necessarily better teachers for model compression in the DP world.\n\nWe would very much appreciate the reviewer considering increasing their rating in case they find our responses compelling.",
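To pin down the metric defended in the reply above: sparsity here means the fraction of zeroed-out weights, which can be measured as below (a trivial sketch for any PyTorch model; not code from the paper).

```python
def sparsity(model):
    # Fraction of exactly-zero parameters: the compression measure used in the
    # paper, distinct from (and not always correlated with) inference latency.
    total, zero = 0, 0
    for p in model.parameters():
        total += p.numel()
        zero += (p == 0).sum().item()
    return zero / total
```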
" Q: Comment on recent work on DP at the prediction time.\n\nA: Thanks for the comment, and we added a few lines regarding these recent works of Majmudar et al. and Ginart et al. in the revised version. However, they study a different problem of protecting privacy during prediction time. While this is an interesting problem, our setting is quite different from this one. In our setting, we want to publish the student model to the public and want to ensure that no privacy violations occur. Observe that this is a stronger guarantee than DP at prediction time; a model satisfying our condition is automatically DP at prediction time but not vice versa. Consider a scenario where a company is trying to deploy a model trained on emails of several customers on the users’ phone. In this setting, it is important that weights of the network being installed on the users’ devices do not leak any privacy of the data on which it was trained. A model that is DP at the prediction time would not be able to protect against such attacks. We would like to note that most of the deep learning with DP literature considers the problem of releasing models privately.\n\t\t\t\t\t\nWe would very much appreciate the reviewer considering increasing their rating in case they find our responses compelling.",
" We thank you for a careful reading of the paper, positive feedback and questions. Below we address specific questions raised by you.\n\nQ: Motivation for introducing DPKD is mentioned in lines 118-121…\n\nA: Thanks for the question. Before we answer this question, let us recall the basics of analysis of DPSGD given in lines 87-91. In DPSGD, we add noise to clipped per-sample gradients and hence every iterate of DPSGD is private. In other words, after each iteration t of DPSGD, model weights W_t can be assumed to be public information. Now, in iteration t+1, gradients of samples are computed with respect to W_t, and hence privacy only depends on gradients belonging to the samples in a single batch. This is crucial for applying amplification by subsampling theorems in privacy analysis. \nNow consider a framework where the teacher models are trained using SGD on the dataset D and the student models are trained with DPSGD while minimizing the Equation 1 on dataset D. Such an algorithm does not output a differentially private compressed student model. This is due to the distillation loss term H(y_true, P_S) in Equation 1. Here, P_S is a function of the entire dataset as the teacher was not trained with DP. Therefore, gradients of samples are now functions of entire dataset D, which forbids us from applying subsampling theorems in privacy analysis. Our solution to circumvent this was to make P_S DP as well by training the teacher model with DP on dataset D. \n\nWe hope this explanation helps. This is a subtle but important aspect of our DPKD algorithm. To the best of our knowledge, this is our contribution that does not appear in literature. \n\t\t\t\nQ: In Algorithm 1, line 3, what does it mean to initialize the student model with a privacy budget?\n\nA: Thanks for asking this clarifying question. You are right that in our experiments, the initialization of student model weights (in Section 3.4) does not incur any privacy cost, so \\epsilon_2 = 0. Having said that, there could be student initialization strategies that are functions of the dataset D, in which case, we need to account for the privacy loss. We will clarify this in the future versions of this paper.\n\nQ: Choosing privacy parameters, epsilon and delta, as shown in line 144?\n\nA: The privacy parameter \\delta controls the failure to probability of not satisfying the DP guarantee. If N is the size of the private data set, it is good to have \\delta << 1/N. The reason is simple: Consider a mechanism which randomly releases a sample from a dataset. Such an algorithm would satisfy DP with epsilon = 0 and \\delta = 1/N. However, clearly this is not a good private algorithm. Text books recommend using \\delta = 1/N^2, but all the recent works in deep learning with DP use \\delta nearly 1/N. Compared to some recent works, our choice of delta = 1/10N is better.\nOur choice epsilon is again inspired by the recent works on deep learning with DP. Most of the papers in this literature use epsilon around 8. In this regard, we are using a smaller epsilon compared to the previous works, which means better privacy. Recently, we also did experiments with epsilon = 1, and obtained similar results. We included these results in Appendix B in the revised version.\n \nQ: Guarantees for Pruning are missing: \n\nA: We are sorry that you could not find the privacy guarantees of our DPIMP algorithm. We state them in Table 4 and 5, where we give the performance of compressed models produced by DPIMP. 
In fact, all the compressed models (using both DPIMP and DPKD) have a privacy value of epsilon < 4.25. If your question was how we allocate the privacy budget across pruning iterations, then it is mentioned in lines 253-254. We allocate an equal privacy budget across all the iterations of DPIMP such that the resulting epsilon < 4.25.",
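To illustrate the per-iteration budget split just described, a minimal sketch of an iterative-magnitude-pruning loop is shown below; T, the target sparsity, and the schedule shape are hypothetical, and the DP fine-tuning between pruning steps (where the budget is actually spent) is elided.

```python
import torch

def magnitude_prune(weights, sparsity):
    # One IMP step: zero out the smallest-magnitude weights globally so the
    # requested sparsity level is reached; the masks would be reapplied after
    # every fine-tuning step to keep pruned weights at zero.
    flat = torch.cat([w.abs().flatten() for w in weights])
    threshold = torch.quantile(flat, sparsity)
    return [(w.abs() > threshold).float() for w in weights]

# Hypothetical gradual schedule over T pruning iterations; between iterations,
# the model is fine-tuned with DPSGD using an equal share of the total privacy
# budget, composed so that the overall epsilon stays below 4.25.
T, target = 5, 0.5
schedule = [1 - (1 - target) ** ((t + 1) / T) for t in range(T)]  # ends at 0.5
weights = [torch.randn(128, 64), torch.randn(64, 10)]             # toy layers
for s in schedule:
    masks = magnitude_prune(weights, s)
    weights = [m * w for m, w in zip(masks, weights)]  # DP fine-tuning elided
```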
" Q:Comparison to prior work is not exhaustive.\n\nA: Thank you for giving references of the papers related to our work. We cite many of those papers in our submission already and we added the other ones pointed out by the reviewer in the revised version. As we wrote in our submission, the settings considered in those references are different from the problem we are considering. Moreover, we are confident that there is no published paper that uses the PATE framework to do fine-tuning of LLMs on GLUE benchmarks and obtains results comparable. Please refer to the response to all reviewers for a more detailed discussion on this point.\n\nQ: Choice of privacy budget epsilon. \n\nA: Most papers in deep learning space have used epsilon values around [4-8]; see the SOTA papers in NLP and image recognition tasks published last year [34, 74, 12, 42]. Most of those papers use epsilon = 8. Moreover, the US census bureau, which is the largest deployment of DP used epsilon > 18 [X1]. While these epsilon values may seem large, it is still a worst case guarantee. In comparison to many of these works, our choice of epsilon is on the lower end. Moreover, we have conducted more experiments with epsilon = 1, and our results remain unchanged qualitatively; please see Appendix B in the revised version.\n\nQ: Bigger model does not necessarily outperform a smaller one:\n\nA: Thanks for asking this fascinating question. While this statement may not be true in general nor can be proved theoretically (even in a non-private world), there is plenty of evidence that for NLP tasks that larger models tend to give better utility-vs-privacy tradeoffs [34, 74, X2]. \n\n\nQ: What is novel?\n\nA: Finally we address your question regarding novelty of the work. It is true that our work is empirical and does not have new mathematical results. Our goal was to bring to spotlight an important class of problems and algorithms related to model compression to the DP literature. Model compression is an extremely active area of research in non-private world ([43] for a survey), yet it has not received similar attention in the DP community. Case in point: there is not a single ICML, ICLR, NeurIPS paper on the topic. We believe that the model compression problem considered in our paper (where a single large model such as BERT is compressed into small BERT during fine-tuning stage) is new and different from the settings considered in other papers. Furthermore, our setting is more relevant to deployment of NLP models such as BERT, GPT2 etc. for common NLP tasks such as natural language understanding and next word prediction. \n\nFrom a technical standpoint, we believe that our paper shows some interesting results in DPIMP including its connections to the Lottery ticket hypothesis (we would love to hear your feedback on these sets of experiments since DPIMP provides better performance than DPKD). We believe that zero-shot initialization strategies for student models in DPKD is pretty surprising in its effectiveness in closing the gap. Given the importance of this problem, we think our work gives a substantial baseline for more work to follow. \n\nFinally, we believe that identifying a right and important problem, bringing the attention of the community towards solving it is in itself a worthy goal. 
\n\nWe would very much appreciate the reviewer considering increasing their rating in case they find our responses compelling.\n\nX1: https://www.census.gov/programs-surveys/decennial-census/decade/2020/planning-management/process/disclosure-avoidance.html\n\nX2: Survey https://differentialprivacy.org/dp-fine-tuning/\n",
" We thank you for a detailed review and questions and many positive comments. Before we will address specific questions you asked, we would like to point out that we have a separate comment for all reviewers describing new experiments we performed, the problem we considered in this paper and its comparison to ensemble learning and PATE. That comment should also clarify some of the other questions you had regarding privacy guarantees of our algorithms, what data set teacher models control, etc. \n\nHere we clarify specific questions you asked.\n\nQ: What dataset the teacher controls? \n\nA: Thanks for the question. This should be clear from our formal problem statement in the comment above but we briefly describe it again here. In our problem, both large language models (LLMs or teacher models) and compressed models (student) are working on the same private data set D, which in our case is GLUE tasks: SST2, MNLI, QNLI, QQP. Our LLM models are pretrained models such as BERT, but we have no access to the pretraining dataset. The differential privacy needs to be guaranteed only by the compressed model on the dataset D. The teacher models can use the private dataset D in arbitrary ways as long as published student models are DP. This is exactly the set up considered in the previous works on DP-NLP [34, 74, 76] but with additional constraints on the size of the models.\n\nQ: I am not entirely sure I understand section 2.1: what is the meaning of the 'sophisticate argument(s)'?\n\nA: Sorry that what we wrote was not clear. Let us recall the line where we use the phrase sophisticated arguments. \n\n“To get the tightest privacy parameters, however, one needs more sophisticated arguments such as the Moments Accountant method [1] or numerical composition algorithms [21]”.\n\nIn the above sentence, the phrase sophisticated arguments refer to the Moments Accountant method and numerical composition algorithms. In lines 91-83, we give a high-level description of analysis of DPSGD based on subsampling and strong composition theorem. However, this does not give the tightest bound on privacy. Moments Accountant method and numerical composition algorithms are mathematical techniques of obtaining the tightest composition theorems. These two are main technical contributions of the respective papers, and describing how those two techniques work is out of scope of this paper.\n\nQ: It was unclear to me why in section 3.2 DP-SGD was insufficient?\n\nA: Thanks for the question. We assume that you are asking why training only student models with DPSGD is not sufficient. Before we answer this question, let us recall the basics of analysis of DPSGD given in lines 87-91. In DPSGD, we add noise to clipped per-sample gradients and hence every iterate of DPSGD is private. In other words, after each iteration t of DPSGD, model weights W_t can be assumed to be public information. Now, in iteration t+1, gradients of samples are computed with respect to W_t, and hence privacy only depends on gradients belonging to the samples in a single batch. This is crucial for applying amplification by subsampling theorems in privacy analysis. \nNow consider a framework where the teacher models are trained using SGD on the dataset D and the student models are trained with DPSGD while minimizing the Equation 1 on dataset D. Such an algorithm does not output a differentially private compressed student model. This is due to the distillation loss term H(y_true, P_S) in Equation 1. 
These predictions are a function of the entire dataset, as the teacher was not trained with DP. Therefore, gradients of samples are now functions of the entire dataset D, which prevents us from applying subsampling theorems in the privacy analysis. Our solution to circumvent this was to make the teacher's predictions DP as well by training the teacher model with DP on dataset D. \n\nWe hope this explanation helps. This is a subtle but important aspect of our DPKD algorithm.",
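To make the clipping-plus-noise recipe above concrete, one DP-SGD step can be sketched as follows. This is an illustration only — real experiments should use a library such as Opacus for vectorized per-sample gradients and correct accounting; clip_norm, noise_mult, and lr are illustrative values.

```python
import torch

def dpsgd_step(model, loss_fn, batch, clip_norm=1.0, noise_mult=1.0, lr=1e-3):
    # Clip each per-sample gradient to clip_norm, sum them, add Gaussian noise,
    # then take a gradient step -- so every iterate W_t is itself private.
    # batch is assumed to be an iterable of (x, y) tensor pairs.
    summed = [torch.zeros_like(p) for p in model.parameters()]
    for x, y in batch:
        model.zero_grad()
        loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
        grads = [p.grad.detach().clone() for p in model.parameters()]
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = min(1.0, clip_norm / (norm.item() + 1e-12))
        for s, g in zip(summed, grads):
            s += g * scale
    with torch.no_grad():
        for p, s in zip(model.parameters(), summed):
            noise = torch.normal(0.0, noise_mult * clip_norm, size=s.shape)
            p -= lr * (s + noise) / len(batch)
```

Because the per-sample gradients above depend only on the public iterate W_t and each sample itself, subsampling amplification applies; once the loss also depends on a non-private teacher's predictions, that structure is lost, which is the failure mode described in the reply.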
" We thank you for a detailed review of our paper, giving us positive feedback, and fair comments. Below we will address specific questions you asked.\n\nEpsilon Value. We agree that epsilon around 4 used in our work may seem to offer low privacy protection at the first glance. However, it is important to keep in mind that this is a worst-case guarantee, where it is assumed that the adversary knows all data except the one we are protecting the privacy of. So actual protection on real datasets can be significantly higher. Indeed, many works have shown that even with epsilon = 8, one enjoys significant protection against membership inference attacks; for example [76] show that membership inference attacks on BERT models trained with epsilon = 8 is no better than random guessing (50% success rate). Moreover, almost all previous works in deep learning space have used epsilon values in this range; see the SOTA papers in NLP and images recognition tasks published last year [34, 74, 12, 42]; most of those papers use epsilon in the range of [4-8]. Moreover, the US census bureau, which is the largest deployment of DP used epsilon > 18 [X1]. Considering everything, we believe that our choice of epsilon offers more privacy protection.\n\nExperiments for epsilon value: Based on the reviewer’s suggestion, we repeated all of our experiments with a strict privacy budget epsilon = 1 and presented the results in Appendix B in the revised version. The observations from the original experiments continue to hold in this regime, and in particular relative drop in accuracy between large models and compressed models remains roughly the same.\n\nDistillBERT: Thanks for drawing attention to the performance of DistillBERT. The advantages/disadvantages of our proposed methods over distillBERT is a worthy point to discuss more. As you can see from our experiments, only unstructured IMP beats the performance of distillBERT, whereas structured IMP and zero-shot initialization strategies come close. Understanding this gap was precisely our goal: quantify how much performance one loses if we do not have access to pre-trained small student models such as distillBERT and how we can mitigate it. But as you also alluded, access to pretrained student models is not always an option. For example, suppose an application wants 1/15th of BERT; where would we find such a pretrained model? One option is to repeat the pretraining algorithm of distillBERT to produce 1/15-BERT. However, there are plenty of scenarios where this is not an option either due to infrastructure issues or due to cost or both. Our goal was to provide algorithms for model compression in such scenarios for fine-tuning stages that can match the performance of distilBERT. One way to interpret our results is to say that, if one can pretrain student models, then it could lead to better performance; if not, one could use the strategies described in our work and get an estimate of how much performance is left on the table. We believe this is a valuable information as pretraining costs for LLMs (even 1/10 of BERT has millions of parameters) is prohibitively large.\n \nCompression Factor: The above discussion also reveals why we chose the compression ratio of 50% for our experiments. We wanted to compare against distillBERT, which is a widely available public pretrained compressed model. 
However, given a chance, we would conduct experiments to compare against pretrained models with lower compression ratios.\n\nStructured-vs-Unstructured Compression: This is a great point, and we have already acknowledged that sparsity alone is not an accurate measure of inference latency. However, sparsity is one of the widely accepted measures of model compression, starting from the pioneering work of LeCun et al. [32] to the recent award-winning work on the Lottery Ticket Hypothesis [20, 11]. \n\nExperiments on Image classification. While our algorithms are not specific to a particular domain, our focus was on NLP applications. We hope that you agree that this is an important domain and worthy of full attention. We expect that our work will trigger more research in other domains including image classification, multimodal models, etc.\n\nPATE framework for model compression. Please see our general comment for all reviewers for a detailed discussion on this point.\n\nWe would very much appreciate the reviewer considering increasing their rating in case they find our responses compelling.\n\nX1: https://www.census.gov/programs-surveys/decennial-census/decade/2020/planning-management/process/disclosure-avoidance.html",
" This paper focus on a important scenario to consider differential privacy and model compression together. It considers KD and (structured and unstructed) pruning. The paper is generally well-written. Strengths:\n- it is interesting to consider DP and compression together.\n- consider both KD and (structured and unstructed) pruning.\n- the paper is well-written\n\nWeaknesses:\n- some statements are slightly over-claimed\n- the methods seems not that novel. For example, DPKD is like a pipline to bring many existed methods together.\n- Sec. 3.4 and 3.5 do not introduce anything beneficial, but something that has nearly idential results -- strategies to make better students/teachers.\n\n In the abstract, the authors state \"we initiate the study of differentially private model compression and propose frameworks for achieving 50% sparsity levels while **maintaining nearly full performance**\". From Tab. 2/3/4/5, it seems it does not achieve as well as the authors claim. A relatively better result is achieved by unstructured DPIMP, which has 83.5 vs. 85.2 (raw BERT) in AVG (there is still a tiny gap); however, unstructured pruning with 50% sparsity usually has a limited speed-up during inferencing. The above statement seems slightly overclaimed.\n\nThe paper seems to report relatively weak results in GLUE benchmark, not only for the proposed method but also for existing work. By checking other papers (e.g. DistilBERT https://arxiv.org/pdf/1910.01108.pdf or even a stronger one TinyBERT https://arxiv.org/abs/1909.10351), we could observe that the reported results for BERT and DistilBERT are lower than original papers. It is unclear whether the results are evaluated in DEV or TEST set. Could you please provide more details? Frankly speaking, with half the size of BERT-base parameters, such achieved performance might not be competitive with some models without pretraining (which might be much faster). none",
" - The authors focused on compressing LLM in a differentially private way. Previous works mostly focused on differentially private fine-tuning but not the compression parts.\n- The proposed method achieves very similar performance with 50% sparse model compression and guaranteeing differential privacy.\n- The authors showed the limitation of straightforward ways of applying DPSGD to Knowledge Distillation and proposed the improved version of it to reduce the accuracy losses. Strength:\n- The authors tackled an important but under-explored problem: DP for model compression in LLM.\n- The paper is well-written and easy to follow.\n\nWeakness:\n- The experiments are limited. It would be good if the authors provide more diverse experiments with varying compression rates.\n- Epsilon value is set to be too high. With epsilon < 4.25, I am not sure whether the achieved model is private enough.\n- The advantages from DistBERT is not convincing. Comparing with unstructured DPIMP is not fair. 1. Compression rate\n- In the paper, the authors only show the experimental results with 50% sparsity.\n- However, in many applications, we need much higher compression rates to increase the latency and reduce the costs.\n- It would be great if the authors can provide more results with a diverse range of compression rates.\n\n2. Epsilon value\n- Based on the differential privacy definition (Section 2), epsilon determines the maximum differences between two probabilities with and without a single sample.\n- If we select the epsilon value as 4 (which is < 4.25), the difference ratio is exp(4) = 55.\n- In other words, the maximum probability differences would be significantly large.\n- In this paper, most results are based on the epsilon value < 4.25. But I think this would be a too loose threshold. \n- It would be good if the authors provide tighter privacy thresholds and the corresponding performances.\n\n3. DistBERT\n- Even though DistBERT is computationally expensive, we only need one time for training the DistBERT (because it is KD on public data). \n- Then, we can use the DistBERT for various downstream tasks.\n- In that case, I think the computational complexity of the DistBERT does not matter a lot.\n- At the end, the objective is to achieve the differentially private compressed language models and the objectives seem the same between DistBERT and the proposed method.\n- In that point of view, the advantages of the proposed method in comparison to DistBERT are limited.\n\n4. Structured DPIMP vs Unstructured DPIMP\n- It would be good to show the quantitative inference latency comparison between these two methods.\n- Also, directly comparing between unstructured DPIMP and DistBERT is not the fair way.\n- Actually, based on Table 4, DistBERT seems consistently better than Structured DPIMP. - As the authors said in the introduction, the proposed method is not limited to the NLP.\n- In that case, it would be good to show how well those compression methods (with DP) can be applicable to other domains in the appendix.\n- Also, the authors do not consider the PATE framework for model compression. However, in some applications, PATE works better than DP-SGD. To show the generalization of the proposed method, it would be better to discuss and consider PATE as well.",
" A commonly used technique in practical NLP systems is that of model distillation (or compression or sparsification). This technique converts a Large Language Model (LLM) into a more lightweight/small model so as to make the prediction faster and memory efficient.\n\nOn the other hand, Differentially Private (DP) training has recently started to become a standard notion of safety against various adversarial model attacks. \n\nThis work lies at the intersection of the above two concepts, in that it introduces methods to obtain compressed student models, from large teacher models, that have differential privacy guarantees. To that end, the paper investigates the following approaches:\n\n1) Obtaining student models via a distillation loss function. Here DP is introduced possibly in the fine-tuning steps of teacher and students models. \n\n2) Transforming teacher models into student models via Iterative Magnitude Pruning (IMP). Here DP is introduced in the fine-tuning steps. Strengths:\n\n1) The setup and motivation is practically quite relevant. This is because both distillation and differential privacy are critical in large-scale NLP systems.\n\n2) It studies two distinct compression methods: one based on the distillation loss function and the other based on weight pruning. \n\nWeaknesses:\n\n1) The DP guarantees for pruning based methods are entirely missing. For instance, the output of algorithm 2 is said to be a \"private student model\" but one doesn't know what the epsilon-privacy guarantee is. This makes the algorithm practically useless since the user doesn't have a calibration of privacy gains. \n\n2) The experimental results are limited only to BERT and 1/2-BERT. It would be interesting to see if their results generalize to other architectures. \n\n3) The technical contribution to introduce DP to model compression is a fairly straightforward application of existing tools and doesn't involve significant novelty. 1) A main motivation for introducing DPKD is mentioned in lines 118-121. While that explanation with regards to the KD loss function makes intuitive sense, is it possible to formalize it properly? The text in those lines is used to justify the introduction of DPKD but it is quite hand-wavy and doesn't cite any previous work for support as well.\n\n2) In Algorithm 1, line 3, what does it mean to initialize the student model with a privacy budget? How can just an initialization step have a privacy guarantee? This is also not clear from the experiments as there only the teacher and student model training steps involve DP.\n\n3) What is the motivation for choosing privacy parameters, epsilon and delta, as shown in line 144? Calibration of these quantities is a problem in current literature as we don't know which values are good, so the current choice seems arbitrary. \n\n4) As mentioned in the weaknesses, the DP guarantees for DPIMP methods are missing. Can we provide epsilon, delta values for the models resulting from those algorithms?\n\n5) Related to the PATE works, there is also very recent work on introducing DP to LLMs at the prediction stage (see recent works such as https://arxiv.org/abs/2201.00971 and https://arxiv.org/abs/2205.13621). It will be helpful to add a few lines regarding these to improve the quality of presentation. \n\nMinor typos:\n\n1) Phrase \"for initialization compressed models\" in line 54.\n\n2) \"show\" should be replaced with \"shows\" in line 55.\n\n3) Phrase \"opens a whole in direction\" in line 295. Yes.",
" This paper explores the links between differential privacy and model compression. Authors propose a framework for training NLP models with high utility by leveraging DP-SGD and knowledge distillation.\nThis is an experimental paper. The work is rather easy to follow, with clear motivations of why this technique should be explored, how the scientific community can benefit from private compressed models etc. The method that authors propose seems sound and can prove useful for a broad scientific community. The way the manuscript is structured really aids its interpretability: the motivation is followed by a naive implementation, followed by the comments on why such method does not perform well, followed by another iteration with limitations addressed etc. This in my view makes the paper much friendlier to larger audience.\nAuthors provide a number of evaluations of their technique in comparison to its non-private counterpart. What I find really commendable is the fact that authors report the results which were not entirely positive and demonstrated their understanding of the limitations of their method. The overall results of compressed models being just slightly behind the non-private counterparts look rather promising.\n\nHowever, this work contains a number of shortcomings that I think that authors need to address. \nFirstly, authors reject the idea of using PATE in their evaluation as DP-SGD has better performance. I am not entirely sure A) why is this a valid metric to reject the work that is the closest DP student-teacher implementation to this manuscript and B) I am not even sure that this statement is necessarily true [4,5]. It seems to me that PATE was avoided by the authors without a strong reason behind it, making me question some parts of the evaluation of the proposed method. \nSecondly, I am not convinced that a combination of DP and KD is a particularly novel method. In addition to the aforementioned PATE, I also found a number of works such as [1,2,3] (in fact [1] even shares the name of the framework with this study), which seem to have addressed this issue before in detail (some of which were peer-reviewed previously). So what exactly is the novel contribution of this work which was not previously proposed? Because on a high level, while the compression seems like a fairly useful trait (and a promising result), combining DP-SGD with KD is fairly trivial and has been done before.\nOne other comment I have is the argument on the 'better models': for DP training a 'bigger' model does not necessarily outperform a smaller one, so I am not sure such argument holds [6]. \nMinor: inconsistencies with DP-SGD (the original version) and DPSGD.\n\n[1] Lyu, Lingjuan, and Chi-Hua Chen. \"Differentially private knowledge distillation for mobile analytics.\" Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. 2020.\n[2] Sun, Lichao, and Lingjuan Lyu. \"Federated model distillation with noise-free differential privacy.\" arXiv preprint arXiv:2009.05537 (2020).\n[3 ]Wang, Ji, et al. \"Private model compression via knowledge distillation.\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 33. No. 01. 2019.\n[4] Papernot, Nicolas, et al. \"Scalable private learning with pate.\" arXiv preprint arXiv:1802.08908 (2018).\n[5] Uniyal, Archit, et al. \"DP-SGD vs PATE: Which Has Less Disparate Impact on Model Accuracy?.\" arXiv preprint arXiv:2106.12576 (2021).\n[6] Klause, Helena, et al. 
\"Differentially private training of residual networks with scale normalisation.\" arXiv preprint arXiv:2203.00324 (2022). A number of questions arose during my evaluation of this work. I would like the authors to either point me to the section where I can find the necessary details or to provide an answer outside of the manuscript space:\n- What dataset does a teacher control? There was a discussion about PATE needing a disjoint dataset, which may be unrealistic, but I did not find any mention of what are the data assumptions on the teacher model in this work? Moreover, could the authors elaborate why is a disjoint setup considered unrealistic in this setting?\n- I am not entirely sure I understand section 2.1: what is the meaning of the 'sophisticate argument(s)'? It strikes me as if the authors included a list of words associated with DP without really explaining them or even linking them together.\n- It was unclear to me why in section 3.2 DP-SGD was insufficient? I suspect this links to my qn on data assumptions for the teacher mode and I am unsure whether a teacher model is required to be private or not, so would you kindly clarify?\n- In the experiments an epsilon of < 4.25 was chosen: what is the motivation behind it? Why this specific value and how does it relate to prior literature? If I overlooked it, then please point me to a section/reference. In addition to the limitations outlined by authors, my comments are largely explained above and include: Similar prior work, constraints on the data of the teacher model etc.\nWhile overall, the work shows some interesting results and is relatively easy to read, I cannot recommend acceptance until points above are addressed by the authors (particularly wrt novelty and comparison with prior works)."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5,
4
] | [
"76WUyEhFgZC",
"ZWGZAEa9xcF",
"4IsA7kinovg",
"W3sSdPAmGJ",
"AUJlrJ2HuP",
"yF8J6SFbPTl",
"nJAkJ_m5l9w",
"H1vv3AagDt",
"e3fUoSTkCwL",
"JIXOX-cBW_is",
"L7u1VHSTEqO",
"nips_2022_68EuccCtO5i",
"nips_2022_68EuccCtO5i",
"jUYHYJk5N9L",
"4J6vRMl4iV",
"QePr2azemVc",
"SaGpF500-N8",
"30wS-tc1Ic6",
"opxcBLLLhC3",
"nips_2022_68EuccCtO5i",
"nips_2022_68EuccCtO5i",
"nips_2022_68EuccCtO5i",
"nips_2022_68EuccCtO5i"
] |
nips_2022_wwyiEyK-G5D | REVIVE: Regional Visual Representation Matters in Knowledge-Based Visual Question Answering | This paper revisits visual representation in knowledge-based visual question answering (VQA) and demonstrates that using regional information in a better way can significantly improve the performance. While visual representation is extensively studied in traditional VQA, it is under-explored in knowledge-based VQA even though these two tasks share the common spirit, i.e., rely on visual input to answer the question. Specifically, we observe in most state-of-the-art knowledge-based VQA methods: 1) visual features are extracted either from the whole image or in a sliding window manner for retrieving knowledge, and the important relationship within/among object regions is neglected; 2) visual features are not well utilized in the final answering model, which is counter-intuitive to some extent. Based on these observations, we propose a new knowledge-based VQA method REVIVE, which tries to utilize the explicit information of object regions not only in the knowledge retrieval stage but also in the answering model. The key motivation is that object regions and their inherent relationships are important for knowledge-based VQA. We perform extensive experiments on the standard OK-VQA dataset and achieve new state-of-the-art performance, i.e., 58.0 accuracy, surpassing the previous state-of-the-art method by a large margin (+3.6%). We also conduct detailed analysis and show the necessity of regional information in different framework components for knowledge-based VQA. Code is publicly available at https://github.com/yzleroy/REVIVE. | Accept | The paper incorporates regional features to better retrieve relevant knowledge and makes direct use of the visual signal in answer prediction, whereas the previous SOTA methods simply rely on the retrieved knowledge for the final prediction. The proposed method outperforms SOTA on OK-VQA by a large margin, effectively showing the efficacy of the direct use of visual information in the answer prediction. I agree with reviewer 5EMT that showing that the information contained in the image is important for answering knowledge-based visual questions is an important contribution to the field, as most of the attention has been put on the language and knowledge signals.
The author rebuttal also resolved most of the reviewers’ concerns and questions, and led the reviewers to a consensus towards acceptance. | train | [
"fg6uLQgwIc",
"rTwYoamIfPv",
"COosSKOeia8",
"lhbnKiYqRp",
"SCiG3BBEKWz",
"QrBXFdWHl0l",
"w7wPOxtyf1K",
"6Tth2lzVXmk",
"n922P0utmR_",
"9TpWi19Sg8h",
"3gtcjzqkPSt",
"ZzZfeN79eul",
"duA_wKEoHWU",
"84ibeUA0RBb",
"HfmTnvDhPdST",
"8IHy-eXRQ39",
"H7lKXbcBSh_",
"TScHQ1ueEom",
"D3bIQfmZAjK"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer 5EMT, thanks for your effort again! We are happy that our rebuttal well addressed your concerns!\n\n",
" Dear reviewer uNM2, thanks for your effort again! We are happy that our rebuttal well addressed your concerns!",
" Thank you for the detailed author response. After reading all the reviews and the discussion, I am happy to support this paper for acceptance.",
" Thank you, authors, for your elaborate rebuttal. I have read the other reviewer's comments and the author's rebuttal, which addresses most of my concerns. Therefore, I would like to raise my score to 5.",
" Dear Reviewer 3J5a, we really appreciate for your time and efforts in improving our work. Your concerns and suggestions are very valuable for improving the rigorousness, clarity and readability of our paper. We're so pleased to receive such helpful suggestions and we've revised our paper by them. Since the deadline of reviewer-authors discussion is approaching, if you have any concerns or suggestions about our work, please let we know. We are so happy to address your concerns and revise our paper accordingly.",
" Dear Reviewer 5EMT, we would like to thank you again for your precious efforts, time and suggestions. Your constructive suggestions (e.g., figure, ablation study, wording and etc.) have significantly improved the quality of our paper. Considering the deadline of reviewer-authors discussion is approaching, if you have any concerns or suggestions about our work, please let we know. We are happy to address your concerns and revise our paper accordingly.",
" Dear Reviewer uNM2, the deadline of reviewer-authors discussion is Aug 09 '22 01:00 PM PDT, which means the time left for discussion is only about one day. If you have any further concerns or questions, could you please be explicit about it? We will try our best to address it. \n\nWe really appreciate your help in improving our work!",
" Our implicit knowledge is different from what is originally proposed in the KAT paper [1] – we add regional descriptions in the prompts to better retrieve the knowledge that corresponds to the regional concepts. In our original paper, we showed the result **52.4%** with title *implicit knowledge*, which is actually the result of our regional version of implicit knowledge retrieval (and this is implicit knowledge only, i.e., no other modules added). \n\nYet the score **52.4%** does not show how the regional descriptions in the implicit knowledge affect the final performance. Therefore, we remove the regional descriptions in the implicit knowledge retrieval, and do retrieval in an identical way to what is proposed in KAT. This leads to the score **51.2%**.\n\nIn the updated paper, we modified Table 6 to include the new ablation study. The first two lines, i.e., *Imp.* and *R-Imp.* corresponds to the two experiments, suggesting adopting regional descriptions in the implicit knowledge can lead to **1.2%** improvement. We hope this answers your question. If you have any further questions or concerns, please do not hesitate to let us know.\n\nThank you for your time!\n\n[1] Liangke Gui, Borui Wang, Qiuyuan Huang, Alex Hauptmann, Yonatan Bisk, and Jianfeng Gao. Kat: A knowledge augmented transformer for vision-and-language",
" I am confused about this experiment setting, can you explain more on this?\nAre the results corresponding to some lines in table 6?\nWhat do you mean by \"use the implicit knowledge with or without the regional descriptions for final answering model in this ablation study\"?\n\nREVIVE with only implicit knowledge achieves 52.4 based on the table 6, is it right?\n\n",
" Dear Reviewer uNM2, we would like to thank you again for the efforts and suggestions. We have provided the detailed responses to all your concerns. Could you take a look at our response? Feel free to raise any more questions, we are happy to answer them further.\n\n ",
" Thank you for acknowledging the strong performance and extensive experiments of our work! And also thanks for your valuable feedback! We've revised the manuscript to improve its clarity and reader friendliness. The following are our answers to specific questions:\n\n----\n### Q1: The technical novelty in this paper is too weak ...... hackish tricks such as changing the input cues of some modules in their model.\n\n**Response:** Our unified motivation is to make full use of the visual features (*i.e.*, regional features) in knowledge-based VQA tasks, which are neglected by existing works. Specifically, we use visual features in both knowledge retrieval and answer prediction:\n\n+ Local visual features are important in retrieving external knowledge, as the retrieved knowledge should also correspond to individual concepts in the images, in addition to the global semantics. Therefore, we use extracted regional features to retrieve external knowledge and regional descriptions to obtain implicit knowledge. \n+ The final prediction model answers the question based on the retrieved knowledge, which should be given the opportunity to look at the image thoroughly. Therefore, we extend the language encoder-decoder model, *i.e.*, T5 [1], to incorporate the regional features and region coordinates. \n\n\n\nRegional descriptions incorporated in each step above can improve the scores significantly and are critical for achieving state-of-art performance, as shown in the table below. One component is removed for each score.\n\n| Model | Accuracy (%) | \n|:-| :-: |\n| REVIVE | 56.6 |\n| Replace regional features with sliding window features [2] in explicit knowledge retrieval | 55.8 |\n| Remove regional descriptions in implicit knowledge retrieval |55.6 |\n| Remove object-centric region features in final answering model | 55.0 |\n| Remove regional descriptions in final answering model | 55.9 |\n \nBuilding upon the implicit knowledge module and explicit module from previous works, our work successfully fills in the gap between the existing methods and the value of precise visual modeling. \n\nWe have revised the methodology section to highlight our contributions. \n\n[1] Raffel et al. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. JMLR 2020\n\n[2] Gui et al. KAT: A knowledge augmented transformer for vision-and-language. NAACL 2022\n\n----\n\n### Q2: In the ablation study in Table 6, the most effective components that give the largest performance gain are explicit knowledge (which is already introduced in the KAT method) and ensembling (which is a well-known trick to boost the performance). The other components such as changing or adding input cues show almost negligible performance gain which is less than 1%.\n\n**Response:** This is not true. As shown in Table 6, using the visual representations of object regions can improve the performance from **54.0%** to **55.4%** (**+1.4%**). \n\nFurthermore, the implicit/explicit knowledge module in Table 6 is different from what is proposed in KAT [1], we adopt our proposed regional descriptions and regional features for implicit and explicit knowledge retrieval. To illustrate this, we performed a further ablation experiment comparing the performances of using KAT's and the proposed REVIVE's method for retrieving explicit knowledge, which can be referred to the above table in Q1. \n\n\nWe have also modified Table 6 to make it clearer. 
Using the regional descriptions can improve the performance of implicit knowledge by **1.2%**, while adopting the regional features can boost the performance of explicit knowledge retrieval by **1.1%**.\n\n[1] Gui et al. KAT: A knowledge augmented transformer for vision-and-language. NAACL 2022\n",
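To make "adding regional descriptions to the prompt" concrete, a GPT-3 prompt for implicit knowledge retrieval might be assembled roughly as follows. This is a hypothetical template for illustration — the paper's exact prompt format may differ, and the sample strings in the comment are made up.

```python
def build_prompt(question, caption, region_tags, qa_examples):
    # Append detected object tags (the regional descriptions) to the global
    # caption so the language model sees local concepts as well as global
    # semantics; qa_examples are in-context demonstration pairs.
    context = f"Context: {caption} Objects: {', '.join(region_tags)}."
    shots = "\n".join(f"Q: {q} A: {a}" for q, a in qa_examples)
    return f"{shots}\n{context}\nQ: {question} A:"

# Hypothetical usage, echoing the sunlight/sun example discussed below:
# build_prompt("What gives light here?", "A bedroom with a window.",
#              ["sunlight", "sun", "curtain"], [("What is this?", "A bed.")])
```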
" We sincerely thank all the reviewers for their previous time and efforts in reading and reviewing our paper. We are so glad that all reviewers recognize our strong performance and extensive experiments. Your great suggestions have greatly improved the quality of our paper. Please kindly refer to the individual responses below for our response to each question. We have also updated our submission to reflect reviewers’ opinions in details.\n\n+ Section 4.3 and Table 6: We update the ablation study to better show how each of our proposed component improve the performance. In Table 6, we clearly show the performance improvement from implicit knowledge retrieval with the proposed regional descriptions.\n \n+ Section 5: We update the limitations and broader impact, in which we make a more in-depth discussion about bias (both distributional [1] and societal [2]) in knowledge-based VQA.\n \n+ We update all the figures so that their texts are clear, and replace the left example in Figure 3 with a more representative one. \n \n+ Supplementary Materials Section D: We add more visualized VQA examples (*i.e.*, Figure 1, 4 and 5). Especially, in Figure 1 we illustrate the difference of the retrieved implicit knowledge without/with the proposed regional descriptions.\n \n+ Supplementary Materials Section C: We add a new ablation study on the effect of using different detectors to compare the performances by using different detectors (*e.g.*, Faster R-CNN [3] and GLIP [4]).\n \n+ L36: We adjust the wording to make it clearer.\n \n+ Section 2: We include more discussions on works that also incorporate visual embeddings and captions.\n\n+ Section 3.4: We revise the relationship to existing works to better clarify the difference between our method and existing methods on knowledge-based VQA tasks.\n\n \n[1] Agrawal et al. Don't just assume; look and answer: Overcoming priors for visual question answering. CVPR 2018.\n\n[2] Hirota et al. Gender and Racial Bias in Visual Question Answering Datasets. ACM FAccT 2022.\n\n[3] Ren et al. Faster R-CNN: Towards real-time object detection with region proposal networks. NeurIPS 2015\n\n[4] Li et al. GLIP: Grounded Language-Image Pre-training. CVPR 2022",
" ### Q3: It would be better to further emphasize the necessity of each of the proposed components in the methodology section. What are the technical reasons why each of the components is required for better training and performance improvement? \n\n**Response:** Thanks for the great suggestion! We have revised our methodology section to highlight our contribution in systematically incorporating visual signals. \n\nWe summarize the motivation of each component as follows.\n\n (a) **Implicit knowledge with regional descriptions.** GPT-3 [1] is a powerful language model with question-answering capability. Yet it only accepts language input. So we convert the question-image pairs into textual formats. In addition to questions and captions, we further introduce the regional descriptions, which can provide more regional information. The ablation study on introducing the regional descriptions/tags into the prompt is in the first table above and Q3 of reviewer 3J5a.\n\n(b) **Explicit knowledge with regional features.** KAT [2] uses a sliding window on the image to retrieve explicit knowledge, which hurt the performance by unavoidably introducing much irrelevant background information. Instead, we propose to use the regional features obtained by an object detector to retrieve explicit knowledge. The ablation study on using the explicit knowledge retrieved by our proposed region-based manner against KAT is shown in the first table above.\n\n(c) **Object-centric Representations.** Instead of only using language clues like PICa [3] and KAT [2], we further integrate the visual information into the final answering model, and we find the positional information of the objects matters, thus we encode the regional features and the positional coordinates with a visual encoder as the object-centric representation, the ablation can also be referred to the first table above.\n\nThe performance improvement of each proposed component can be observed in the table in Q1 and Table 6 in our latest submitted paper.\n\n[1] Brown et al. Language models are few-shot learners. NeurIPS 2020\n\n[2] Gui et al. KAT: A knowledge augmented transformer for vision-and-language. NAACL 2022\n\n[3] Yang et al. An empirical study of GPT-3 for few-shot knowledge-based VQA. AAAI 2021\n\n\n\n----\n\n\nThanks again for your supportive comments. We hope that our explanations have successfully cleared your concerns",
" Thanks for appreciating our motivation and method! And also thanks for your constructive feedback and suggestions for our work, we've revised the manuscript to improve its clarity and reader friendliness. The following are our answers to specific questions:\n\n----- \n### Q1: Although previous work did not make full use of the image information on knowledge-based VQA, there are previously proposed methods that incorporate the information in the prediction module (as opposed to L36 “only use visual information for knowledge retrieval but ignore it in the final answering model”). ...... but it would be better to accurately discuss it in the paper.\n\n**Response:** Thank you for the great suggestion! We have revised the language in L36 to be more accurate, and also add separate discussions in Section 2 (related works), *i.e.* L76-L78. \n\nIn fact, in the previous version of our paper, the L36 *\"only use visual information for knowledge retrieval but ignores it in the final answer model\"* especially refers to recent SOTA works (*i.e.*, PICa [6] and KAT [7]) for knowledge-based VQA task, they ignore the visual information in the final answering model.\n\nEven though the aforementioned works [1][2][3][4][5] use the visual embeddings or captions in the final prediction module, they haven't used the regional representations to retrieve different types of knowledge and incorporated the object-centric representations into the final prediction model, and they are not directly applicable to improve the encoder-decoder module in knowledge-based VQA task.\n\n\n[1] Narasimhan et al. Out of the Box: Reasoning with Graph Convolution Nets for Factual Visual Question Answering. NeurIPS 2018 \n\n[2] Narasimhan et al. Straight to the facts: Learning knowledge base retrieval for factual visual question answering. ECCV 2018 \n\n[3] Shah et al. KVQA: Knowledge-Aware Visual Question Answering. AAAI 2019 \n\n[4] Garcia et al. KnowIT VQA: Answering Knowledge-Based Questions about Videos. AAAI 2020. \n\n[5] Garcia et al. Knowledge-Based Video Question Answering with Unsupervised Scene Descriptions. ECCV 2020\n\n[6] Yang et al. An empirical study of GPT-3 for few-shot knowledge-based VQA. AAAI 2021\n\n[7] Gui et al. KAT: A knowledge augmented transformer for vision-and-language. NAACL 2022\n\n----- \n\n### Q2: Text in the figures looks blurred. The resolution should be improved.\n\n**Response:** Thank you for pointing this out! We have updated all the figures in our paper so that the texts in the figures are clearer. \n\n----- \n### Q3: Did you compare GLIP and Faster R-CNN object detectors and how they affect the final performance?\n\n**Response:** Yes, we have performed this experiment, so that we can better figure out the influence of using different object detectors on the model's final performance. The results of using the GLIP [1] and Faster R-CNN [2] as the object detectors are reported in the following table. \n\n| Detector | Accuracy (%) | \n| :-: | :-: |\n| Faster R-CNN (R50)| 55.3 | \n| Faster R-CNN (R101) | 55.6 |\n| GLIP | **56.6** |\n\nAs shown in the table, we can see that Faster R-CNN with ResNet-50 [3] and ResNet-101 [3] as the backbones can achieve **55.3%** and **55.6%** accuracy, respectively, and using the GLIP as the object detector can achieve better performance (*i.e.*, **56.6%**). 
These results demonstrate that the accuracy of detecting object regions plays an important role in the final performance.\n\nDue to the limit of space, we added this ablation study to the supplementary materials of our latest submission (*i.e.*, L34-L39 and Table 3).\n\n[1] Li et al. GLIP: Grounded Language-Image Pre-training. CVPR 2022\n\n[2] Ren et al. Faster R-CNN: Towards real-time object detection with region proposal networks. NeurIPS 2015\n\n[3] He et al. Deep residual learning for image recognition. CVPR 2016\n\n-----\n\n### Q4: Will the code and models be released?\n\n**Response:** Yes, we will make the code and models publicly available upon the acceptance of the paper. \n\n-----\n\n### Q5: A more in-depth discussion about bias (both distributional and societal) in knowledge-based VQA and how it affects the proposed model would be interesting.\n\n**Response:** We appreciate this helpful suggestion. We have revised the limitations accordingly and added a more in-depth discussion about bias in our latest submission (*i.e.*, L311-L315).\n\n-----\n\nThanks again for your supportive comments. We hope that our explanations have successfully addressed your concerns.",
" Thanks for appreciating our motivation and method! And also thanks for your constructive feedback and suggestions for our work, we've revised the manuscript to improve its clarity and reader friendliness. The following are our answers to specific questions:\n\n----- \n### Q1: The experiment part is not clear in that we do not understand how the regional features benefit the performance. Concretely, as the regional descriptions are injected into different stages of the framework, how do these descriptions influence the performance of implicit knowledge retrieval?\n\n**Response:** Thanks for the great suggestion! We have revised our experiment section, especially Table 6, to better highlight how the regional features benefit the performance for different components. \n\nSpecifically, for implicit knowledge retrieval, we compare the case with and without regional descriptions, and show that adding the regional descriptions can improve the performance by **1.2%**. Similarly, for explicit knowledge, we consider retrieving it with and without regional features, and show that adopting the regional features can boost the performance by **1.1%**.\n\nFurthermore, the object-centric region features can achieve **1.4%** points improvement, feeding context-aware questions into the answer generative model attain **0.5%** points gain, further introducing regional descriptions (*i.e.*., regional tags) into contexts has **0.7%** points improvement. These results can explain how the regional features in each component benefit the performance.\n\nIn order to illustrate how regional information affects the knowledge retrieval process, we add the Figure 1 in the latest submitted supplementary materials due to the space limit of main text. Taking the top example of Figure 1 for explanation, without introducing the informative regional descriptions (e.g., *\"sunlight\"* and *\"sun\"*), we cannot generate the correct implicit knowledge candidate *\"Sun\"*, since the *\"Lamp\"* is also reasonable when given the question and context, which can demonstrate the effectiveness of using the regional descriptions for implicit knowledge retrieval.\n\nTo better figure out the influence of using regional descriptions for implicit knowledge retrieval, we also conduct the ablation study and you can refer it in Q3.\n\n----- \n### Q2: The left qualitative example in Figure 3 confuses me as none of the knowledge retrieved mentions the right answer \"battery'' and the regional tags introduce additional misleading objects say the desktop.\n\n**Response:** Sorry for the confusion. We retrieve 5 implicit knowledge candidates, while we only show the first retrieved implicit knowledge candidate in Figure 3 and Figure 4. The correct answer *\"battery\"* is also retrieved as the candidate in the implicit knowledge, but it has been omitted by the ellipsis in the left example of Figure 3 due to the limit of space. \n\nThe goal of the regional descriptions is to provide a detailed and comprehensive description of the image. Therefore, it inevitably includes irrelevant concepts. Large language models with strong QA capability can naturally pick the relevant concepts from the contexts. \n\nSince this example is not representative enough, we replaced it by another example in the left example of Figure 3. In the new example, the implicit knowledge retrieval is based on the regional description/tag *\"sandwich''*, and the correct answer *\"cheese''* is successfully retrieved by GPT-3 model [1]. \n\n[1] Brown et al. 
Language models are few-shot learners. NeurIPS 2020\n",
" ### Q3: The content of the explicit knowledge may not be relevant to the question, it is also good to do the ablation that only encodes the entities from CLIP without the knowledge sentences to prove the contribution of the regional description.\n\n**Response:** We appreciate this constructive suggestion, we have performed the ablation study experiment that only encodes the entities from CLIP [2] without the knowledge sentences from the explicit knowledge. The results are shown in the following table, we only use the implicit knowledge with or without the regional descriptions for final answering model in this ablation study. \n\n| Regional Descriptions | Accuracy (%) | \n| :-: | :-: |\n| ✗ | 51.2 | \n| ✓ | **52.4** |\n\nIt has been shown that when further introducing the regional descriptions into the textual prompt for implicit knowledge retrieval, the final performance can be improved from **51.2%** to **52.4%** accuracy, *i.e.*, **1.2%** accuracy improvement. The results can prove that using the regional descriptions can more accurately retrieve implicit knowledge, this is reasonable since the regional descriptions can provide more object-centric textual clues for GPT-3 model [1].\n\nWe've already added this ablation study experiment into our latest submission (*i.e.*, Table 6).\n\n\n[1] Brown et al. Language models are few-shot learners. NeurIPS 2020\n\n[2] Radford et al. Learning Transferable Visual Models From Natural Language Supervision. ICML 2021\n\n----- \n\nThanks again for your supportive comments. We hope that our explanations have successfully cleared your concerns",
" The authors observe that in most state-of-the-art knowledge-based VQA methods: \n1) visual features are extracted either from the whole image or in a sliding window manner for retrieving knowledge, and the important relationship within/among object regions is neglected; \n2) visual features are not well utilized in the final answering model, which is counter-intuitive to some extent. \nBased on these observations, they propose a new knowledge-based VQA method REVIVE, which tries to utilize the explicit information of object regions not only in the knowledge retrieval stage but also in the answering model.\nThe authors perform several experiments on the standard OK-VQA dataset and achieve new state-of-the-art performance. \n Strength\n\n- The proposed method shows favorable performance compared to the recent baselines.\n\n- The authors provide various analyses on the design choices including the hyper-parameters that affect the performance improvements.\n\nWeakness\n\n- The technical novelty in this paper is too weak. This paper is an engineering paper that combines several engineering tricks without a unified motivation or a theoretical background. Most of the components of the proposed method are hackish tricks such as changing the input cues of some modules in their model.\n\n- In the ablation study in Table 6, the most effective components that give the largest performance gain are explicit knowledge (which is already introduced in the KAT method) and ensembling (which is a well-known trick to boost the performance). The other components such as changing or adding input cues show almost negligible performance gain which is less than 1%.\n\n\n\n========== ------- Comments after the rebuttal ------========\nI have read the other reviewer's comments and the author's rebuttal, which addresses most of my concerns. Therefore, I would like to raise my score. It would be better to further emphasize the necessity of each of the proposed components in the methodology section. What are the technical reasons why each of the components is required for better training and performance improvement? The authors addressed the limitations of the proposed method and the potential negative social impact of their work.",
" This paper addresses the task of knowledge-based visual question answering (VQA). Given a question and an image, the aim is to answer by using external knowledge bases. The proposed model is built on top of large pre-trained models (GLIP, CLIP, GPT3, Vinyl, etc.) to extract and encode visual features, generate captions, retrieve external knowledge, encode implicit knowledge and predict an answer. The main contribution of the paper is to make better use of the visual information than in previous work, showing in the experiments that visual features can contribute to improving performance on the OK-VQA dataset. Strengths\n\n- The paper and the proposed model are well-motivated. Previous work on knowledge-based VQA did not make full use of the image signal for answering prediction, relying mostly on external knowledge.\n\n- Showing that the information contained in the image is important for answering knowledge-based visual questions is an important contribution to the field. Until now, image features were dismissed and most of the attention was put on the language and knowledge signal. The paper shows that more attention should be paid to visual information.\n\n- The proposed method outperforms previous work on the OK-VQA dataset by a large margin, showing the efficacy of incorporating image information in the answer prediction.\n\n- Ablation study is conducted showing results when different parameters of the model are modified. According to the results, all of the components of the proposed model have a positive contribution to the overall performance.\n\nWeaknesses\n\n- Although previous work did not make full use of the image information on knowledge-based VQA, there are previously proposed methods that incorporate the information in the prediction module (as opposed to L36 “only use visual information for knowledge retrieval but ignore it in the final answering model”). A few examples: [1][2][3] (and others). Also, some work on video knowledge-based VQA also uses captions and object relationships in the prediction module [4][5]. I don’t think this changes the main point of the paper, as these models could not make the visual features improve significantly the final performance, but it would be better to accurately discuss it in the paper.\n\n- Text in the figures looks blurred. The resolution should be improved.\n\nReferences\n- [1] Narasimhan et al. Out of the Box: Reasoning with Graph Convolution Nets for Factual Visual Question Answering. NeurIPS 2018\n- [2] Narasimhan et al. Straight to the facts: Learning knowledge base retrieval for factual visual question answering. ECCV 2018\n- [3] Shah et al. KVQA: Knowledge-Aware Visual Question Answering. AAAI 2019\n- [4] Garcia et al. KnowIT VQA: Answering Knowledge-Based Questions about Videos. AAAI 2020.\n- [5] Garcia et al. Knowledge-Based Video Question Answering with Unsupervised Scene Descriptions. ECCV 2020 - Did you compare GLIP and Faster R-CNN object detectors and how they affect the final performance?\n- Will the code and models be made publicly available? A more in-depth discussion about bias (both distributional [6] and societal [7]) in knowledge-based VQA and how it affects the proposed model would be interesting, but not necessary. \n- [6] Agrawal et al. Don't just assume; look and answer: Overcoming priors for visual question answering. CVPR 2018.\n- [7] Hirota et al. Gender and Racial Bias in Visual Question Answering Datasets. ACM FAccT 2022.",
" This paper augments the KAT model with regional visual representations for Outside-knowledge Visual Question Answering (OK-VQA) and achieves new state-of-the-art results. Precisely, the regional tags are used for retrieving better implicit knowledge using GPT-3. Regional features are used to retrieve explicit features from WikiData and serve as an additional hint to the FiD answer generator. Strengths:\nThis paper presents a critical weakness of the existing SOTA approach that the regional features are missing. The authors address this issue by introducing the regional descriptions in different formats for different stages of the REVIVE framework including implicit knowledge retrieval and the answer generation stage. The experiments verify the value of introducing the regional features.\n\nWeakness: \nThe experiement part is not clear in that we do not understand how the regional features benefit the performance. Concretely, as the regional descriptions are injected into different stages of the framework, how do these descriptions influence the performance of implicit knowledge retrieval? Also, the left qualitative example in Figure 3 confuses me as none of the knowledge retrieved mentions the right answer ``battery'' and the regional tags introduce additional misleading objects say the desktop. \n\n\nOriginality: The paper is original.\nQuality: The paper presents an interesting idea of using regional features to augment the prompts for better OK-VQA results.\nClarity: The paper is well written.\nSignificance: Ok, See weakness. Also, the content of the explicit knowledge may not be relevant to the question, it is also good to do the ablation that only encodes the entities from CLIP without the knowledge sentences to prove the contribution of the regional description. The authors have adequately addressed the limitations and potential negative social impact of their work."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5
] | [
"COosSKOeia8",
"lhbnKiYqRp",
"QrBXFdWHl0l",
"duA_wKEoHWU",
"6Tth2lzVXmk",
"84ibeUA0RBb",
"9TpWi19Sg8h",
"n922P0utmR_",
"8IHy-eXRQ39",
"duA_wKEoHWU",
"H7lKXbcBSh_",
"nips_2022_wwyiEyK-G5D",
"H7lKXbcBSh_",
"TScHQ1ueEom",
"D3bIQfmZAjK",
"D3bIQfmZAjK",
"nips_2022_wwyiEyK-G5D",
"nips_2022_wwyiEyK-G5D",
"nips_2022_wwyiEyK-G5D"
] |
nips_2022__keb_XuP5oI | Generative Neural Articulated Radiance Fields | Unsupervised learning of 3D-aware generative adversarial networks (GANs) using only collections of single-view 2D photographs has very recently made much progress. These 3D GANs, however, have not been demonstrated for human bodies, and the generated radiance fields of existing frameworks are not directly editable, limiting their applicability in downstream tasks. We propose a solution to these challenges by developing a 3D GAN framework that learns to generate radiance fields of human bodies or faces in a canonical pose and warp them using an explicit deformation field into a desired body pose or facial expression. Using our framework, we demonstrate the first high-quality radiance field generation results for human bodies. Moreover, we show that our deformation-aware training procedure significantly improves the quality of generated bodies or faces when editing their poses or facial expressions compared to a 3D GAN that is not trained with explicit deformations. | Accept | The reviewers all recognize the quality of the work, particularly its technical soundness and the quality of the experimental setting, and there is a clear consensus for acceptance. I ask the authors to address the reviewers' concerns, particularly clearing up any confusion in the manuscript and providing a better analysis of the synthesis results.
| train | [
"bsC3t18VNcF",
"Jbrbh3nbrXH",
"lQQqgVkbJx6",
"COSGoRk_ohr",
"2H-r48VbKV_",
"-e5OmiEMLHf"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank Reviewer UgRe for their time spent reviewing and commenting on our work. We appreciate the note that integrating advanced radiance field implementation architectures, generative models, and articulation is not trivial and is an important contribution for future applications.\n\n**Lack of technical contributions**\n\nWhile it is true that 3D-aware generative models and articulated radiance fields, and their implementation details such as the tri-plane architecture, have been explored in the past, no other work attempts to combine them and enable our application: generative, controllable 3D models of objects. We demonstrate that this applies to not only human faces, but also bodies, which have not been previously explored in the context of 3D-aware generative models with radiance fields. Moreover, we show that the trivial combination of these components, the EG3D + re-warping baseline, does not perform nearly as well as the proposed GNARF architecture, demonstrating that the combined architecture is an important contribution in of itself.\n\nWhile it is possible that individual components of this combination can be improved, such architectural improvements would be complementary to our work. These improvements are also orthogonal to our core technical contribution: enabling articulated generative modeling by factoring the generator into canonical pose generation and deformation applied to neural radiance fields.\n\n**HeadNeRF comparison**\n\nWe note that HeadNeRF is similar in input and application in that it generates 3D representations of heads conditioned on both identity and expressions (along with other attributes, such as albedo and illumination, which GNARF does not factorize), and is able to generate very high-quality results. Since GNARF does not factorize lighting and albedo, it cannot render the same identity under varying lighting conditions, which we will note as a limitation in our paper.\n\nHowever, for training data, HeadNeRF requires training images which hold identity constant while varying expression and other parameters, which is not required by GNARF. This allows GNARF to be trained on single-view datasets.\n\nBoth GNARF and HeadNeRF are in fact able to control identity and expression independently (disentangled). In GNARF, the identity variation is modeled by the latent code, which generates the body or face in a canonical pose. The target SMPL mesh and deformation method model the expression (or, in the case of bodies, pose) variation. This explicit separation during training encourages the model to disentangle these two parameters. This is supported in our experiments, where walking through the latent space does not affect the body pose, and similarly, editing the body pose does not affect the body identity (see supplemental video). HeadNeRF is able to do the same, along with other important conditioning information, but is limited by the requirement on input data. We will be sure to edit the paper to make this comparison more clear, highlighting the differences and similarities between the methods, and the difference in requirements and what applications they enable. \n",
" We thank Reviewer H6JG for reviewing and providing a thoughtful and extensive analysis of our work. We appreciate the positive comments recognizing the originality of extending 3D-aware generative models to controllable bodies and clarity of presentation. We also appreciate that the work is viewed as an important step forward in addressing the significant problem of integrating articulation into 3D-aware generative models.\n\n**Using a parametric mesh to drive surface deformations**\n\nWe acknowledge that the use of the parametric mesh surface to control a generated radiance volume may not be optimal in every context. We also mention in the future work that further improvements in deformation could be leveraged in this same overall pipeline, such as learning the deformation method. We will make this more clear in the manuscript. However, we believe that the mesh-based modeling and deformation is intuitive and enables interpretable control, and naturally maps to traditional computer graphics and mesh manipulation.\n\n**Limitations of the two-legged SMPL model**\n\nAlthough the deformation method is guided by the two-legged SMPL model, the surface-field deformation is still defined over the entire volume, by warping 3D points with their closest surface point. This allows the deformation method to have robustness to cases where the object modeled by the radiance field does not exactly fit the template, for example the hat on the person in the second row of figure 1. Explicitly modeling the deformation of more complex clothes or accessories which may not move intuitively with the surface of the SMPL mesh could be similarly addressed in the future with more complex deformation functions integrating clothes simulation models from computer graphics.\n\n**Limitation for re-posing EG3D models**\n\nAs mentioned the EG3D generated images are of a similar quality as those generated by GNARF. However, estimating the SMPL mesh from any image is not a solved problem. We use the state of the art method, SPIN [31], for estimating SMPL parameters from images rendered from the generated EG3D models. However, this method still has bias based on the data that it was trained on: less diverse pose distribution than the synthetic or AIST++ dataset since in-the-wild poses are often biased towards a neutral position, and more complex backgrounds in in-the-wild images. We view these limitations of the estimation of these parameters as a drawback of using the EG3D baseline, as an additional step must be injected into the system, possibly resulting in more error, while the GNARF generator automatically generates objects in a canonical pose. Additionally, incorrect geometry that sometimes exists in standard EG3D (floating but indiscernible content in the background) could turn into foreground occluders as a result of a naive post-process deformation. Generating these geometric artifacts is not penalized in the EG3D training pipeline, but is penalized in our end-to-end training procedure.\n",
" We thank reviewer qMy8 for spending time on reviewing and providing insightful comments regarding our paper. We appreciate the acknowledgment of the novelty and significance of the method in approaching a problem which is rarely explored in literature but challenging and impactful. \n\n**Quality of the Results**\n\nWe acknowledge in the limitations section that there is room for improvement on the fine details of the body and believe this to be an important direction for future work. We emphasize that this is the first work which leverages neural radiance fields as a 3D representation in a generative framework for bodies, and that our work outperforms logical extensions of prior work and concurrent baselines in both generation quality and control. Additionally, existing datasets which provide a distribution of SMPL pose parameters associated with images are limited in diversity (AIST++ only has 16 subjects) and fine detail quality (SURREAL is synthetically generated at low resolution). With improved datasets, we expect that our method can generate higher quality and more diverse human body results.\n\n**Comparison of EG3D and GNARF (Q1)**\n\nIt is correct that each batch samples the target SMPL model along with the latent code. Both our method and the baseline (EG3D + re-warping) sample the latent code and target SMPL pose, and perform a generation followed by warping operation in order to generate a 3D model in the target pose. These two methods receive the same input information, and are compared via the same metrics: FID, which evaluates the diversity and realism of the distribution of generated images, and PCKh@0.5, which evaluates the accuracy of the generated image to the target pose. This comparison fairly analyzes the quality of generated images and accuracy to the target pose.\n\nThe base EG3D only takes in a latent code without target pose parameters. As such, we do not compare EG3D to our method in terms of PCKh@0.5, as we don’t expect the generated EG3D bodies to match the target pose and this would not be a fair comparison. We only compare GNARF to EG3D in FID. \n\nGNARF explicitly factorizes the conditioning into a latent code controlling identity and SMPL parameters controlling pose, while EG3D combines both identity and pose into the learned latent space. However, both attempt to model the same distribution - the identities and poses of humans in the dataset. FID compares how GNARF and EG3D model this distribution, thus ensuring that this is a fair comparison, since it assesses the generated diversity and quality of rendered images of the 3D models rather than the ability to match a specific target pose given as input.\n\n**Background modeling (Q2)**\n\nStyleNeRF, EG3D, and other GAN works are able to model backgrounds, but these methods do not attempt to generate a radiance field which can be animated. The addition of animation makes the problem more challenging because the deformation method must apply logically to both the object and the background independently. In the single scene overfitting experiments (supplementary sect. 2.1), we introduce a separate tri-plane representation to model the background to ensure consistency across the animated frames. However, our main contribution for the generative model is to generate realistic animatable 3D representations of bodies and faces. 
We will note this in the limitations and future directions section of our manuscript.\n\n**Latent code and generator architecture (Q3)**\n\nThe latent code is injected into the tri-plane feature generator, as the latent code controls the identity of the generated object but not the pose. The pose animation is controlled by the SMPL parameters, which are not input in any way to the generator, since the generator is only tasked with generating the canonical pose. \n\nChoosing where to input the latent code (StyleGAN2 generator or the shallow decoding MLP) is an architectural change to the generator which does not change the overall input or interpretation. While all identities are generated in the same (canonical) pose, the appearance variation due to identity is enough that only adding the conditioning into the shallow MLP may not provide enough capacity to model the dataset diversity. We will edit the paper to be more clear about this design decision, and in general about the various parameters and their interpretation.\n\n**Dependence on template meshes (Limitation)**\n\nGNARF is not highly dependent on the accuracy of the template mesh, as our template meshes have been reduced from the full FLAME and SMPL model significantly without a decrease in quality. However, using a template mesh to drive pose animation provides an interpretable articulation method, and thus we apply our method to classes where this template exists. When extending this method to classes which do not have a template mesh, such as the LSUN cats, the interpretability of the pose control is not clear as it’s not known which joints cats move their faces around.\n",
" This paper proposes Generative Neural Articulated Radiance Fields, named GNARF, serving as the 3D representation of 3D-aware generator to handle the datasets with more deformation, such as human-body datasets. The non-rigid motion is represented by the surface deformation derived from the source and target meshes, which helps the tri-plane focus on learning the canonical feature volume. The experiments give a comprehensive study of GNARF on several human datasets, including ATISS++, SURREAL, as well as FFHQ. The experimental results demonstrate the GNARF can achieve better image quality and editability compared with several baselines. This paper aims to generate 3D humans from a posed 2D image collection, which is rarely explored in the literature. Compared to prior arts of 3D-GANs, the paper pays more attention to the non-rigid human bodies instead of the rigid objects like faces and cars. It is valuable to solve this challenging problem. Although this task is very difficult, the synthesized results are not very good. The details of the human body, such as face and clothing, are of a low quality. Besides, I also have some confusion about the proposed techniques, which will be presented in the following section. 1. As shown in this paper, SF is derived from the target and canonical template meshes. Does it mean that each batch also needs to sample the target SMPL models besides the latent code? If I am right, it is not fair to compare with the EG-3D baseline w.o. any 3D mesh information. \n2. It is not convincing that the authors remove the background to stabilize the training process. Why not use a separate background model? StyleNeRF has demonstrated that it can work even on the CompCars dataset with large background variations.\n3. Since the tri-plane is required to learn the canonical feature volume, why is the latent code still injected into the tri-plane generator network? Why not only add the stochastics to the shallow MLP because the canonical pose across different instances is similar. My concerns about this paper is the synthesis quality. Although it has improved on the baseline, it still has a large gap to real-world applications like avatar animation. \nBesides, the mouth of the synthesized faces in Fig.6 and the demo video are not realistic. I guess that the Flame model does not model the teeth. Does it mean that the GNARF is very dependent on the template meshes? When transferred to a new dataset like LSUN cats which does not have any ground truth template meshes, how does GNARF perform? \nI think this paper is interesting, and I tend to accept this paper (raise the scores) if the authors can address my concerns.",
" This paper proposes a technique called GNARF for controllable 3D aware generation of human bodies and faces with different articulations and expressions. The authors build on the prior EG3D generative model and extend it along two dimensions -- to entire 3D bodies and for more explicit control and disentangling of base identity faces and their expressions. The authors' main idea is to introduce an explicit surface deformation module into the generative model, which deforms a canonical triplane representation. The authors parametrize this surface deformation via parametric mesh SMPL or FLAME models. They compare their surface deformation approach against MVC and blend-skinning and also their generative models for body and face against SOTA existing generative models and show improvements in quality. - Originality: The work is original along many dimensions. It extends 3D-aware generative models to body poses, which has not been considered in prior work (other than in [115], which is concurrent work). It is also novel in introducing the explicit surface deformation model into the EG3D framework, making it more controllable. However, it builds heavily on EG3D. Nevertheless, in my opinion this work introduces sufficient novelty over the existing works to warrant publication.\n\n- Quality: The work is technically sound; the method appropriate and its steps logical; and most claims (a few exceptions noted below) are well supported via experiments. To the authors' credit they have also gone above and beyond the submission requirements and compared against several unpublished concurrent works [115, 127] and shown the superior performance of their approach. The authors have discussed the limitations of their method honestly and in much detail.\n\n- Clarity: The material is clearly presented and easily understandable. Many implementation details are included in the supplementary material and the authors have promised to release their code upon acceptance.\n\n- Significance: \n- The dominant approach for modeling humans (bodies and faces) and animals has been via parametric mesh models for several decades now. The advent of 3D aware GANS is heralding in a new era of GAN-based generative models, which have the promise to provide much higher images quality and photo-realism, while not being limited only to the surfaces (face and body) modeled by the mesh models. 3D aware-GAN models can also model hair, eyeglasses and clothing details beyond meshes. However, the question of how best to model body articulation/surface deformation caused by expressions/body articulation into the context of 3D-aware GANS is an important open one. This work provides an important step towards addressing this latter problem. That said the proposed solution's falling back on parametric mesh models to model the surface deformation is perhaps not the most elegant solution, as the authors acknowledge themselves as well.\n- The authors' comparisons of the different types of 3D mesh-based deformations (MVC, blend-skinning and surface-based) is also quite interesting and insightful. From Table 2, it is evident that the baseline EG3D (no warping) approach has FID scores nearly as good as or better than GNARF, meaning that it is able to generate highly realistic posed images. However in line 244 the authors claim that for EG3D (no warping) poor results for reposed images are produced \"since it is difficult to accurately estimate the SMPL mesh from the generated images\". 
If the image quality of the posed images is good as per the FID scores, what limits good SMPL mesh fitting, and reposing of the EG3D generated images?\n\nHow do the authors propose to handle cases such as subject wearing long skirts or dresses, which are likely to not fit well to the two-legged tight-fit model of SMPL? The authors have adequately and honestly discussed the limitations of their proposed method.",
" This paper proposes a method for generative neural articulated radiance fields. The main strategy is to generate radiance fields in a canonical pose and warp them using an explicit deformation field into a desired body pose or facial expression. The main technical contribution of this work is to combine the recently proposed tri-plane feature volume representation with an explicit feature volume deformation which is guided by a template shape. High-quality results on human body and faces are demonstrated in the paper. This paper proposes a practical 3D-aware GAN framework for the generation of editable radiance fields of human bodies. The main strength of this paper is to combine several existing strategies like tri-plane feature volume representation, StyleGAN2 generator, feature volume deformation, neural volume rendering, image super-resolution, etc. Although each component has been proposed before, integration them together to generate satisfying results is not easy and could be used for related applications.\n\nOn the other side, the weakness is the lack of technical contribution as each component is borrowed from other papers. Meanwhile, some component can be replaced with more suitable strategy. For example, based on our implementation and the experimental results reported in the paper, tri-plane representation still has limited representation ability, which can be replaced by other representations to achieve better performance. I don't have questions. As mentioned in the related work, HeadNeRF is related to this work, and the authors point out that HeadNeRF needs to acquire training images of the same person performing various expressions in different lighting conditions. However, HeadNeRF can explicitly control properties like identity and expressions. To my knowledge, the proposed method can not achieve disentangled representation."
] | [
-1,
-1,
-1,
6,
7,
7
] | [
-1,
-1,
-1,
5,
5,
4
] | [
"-e5OmiEMLHf",
"2H-r48VbKV_",
"COSGoRk_ohr",
"nips_2022__keb_XuP5oI",
"nips_2022__keb_XuP5oI",
"nips_2022__keb_XuP5oI"
] |
nips_2022_nYrFghNHzz | Learning Individualized Treatment Rules with Many Treatments: A Supervised Clustering Approach Using Adaptive Fusion | Learning an optimal Individualized Treatment Rule (ITR) is a very important problem in precision medicine. This paper is concerned with the challenge when the number of treatment arms is large, and some groups of treatments in the large treatment space may work similarly for the patients. Motivated by the recent development of supervised clustering, we propose a novel adaptive fusion-based method to cluster the treatments with similar treatment effects together and estimate the optimal ITR simultaneously through a single convex optimization. The problem is formulated as balancing \textit{loss}$+$\textit{penalty} terms with a tuning parameter, which allows the entire solution path of the treatment clustering process to be clearly visualized hierarchically. For computation, we propose an efficient algorithm based on accelerated proximal gradient and further develop a novel group-lasso-based algorithm for variable selection to boost the performance. Moreover, we demonstrate the theoretical guarantee of recovering the underlying true clustering structure of the treatments for our method. Finally, we demonstrate the superior performance of our method via both simulations and a real data application on cancer treatment, which may assist the decision-making process for doctors. | Accept | This paper proposes a method for learning the optimal individualized treatment rule (ITR). The proposed approach uses a fusion penalty term that encourages clustering between treatments. A dendrogram of the treatments is generated by running the proposed algorithm using different tuning parameters as a solution path. The effectiveness of the proposed approach is empirically validated on synthetic and real data. The paper is well written and technically sound. A thorough analysis/interpretation of the resulting model/results will further improve the paper. | train | [
"gcCAE29yv7U",
"3K07sqDpaP",
"3FsS3ZCMUk",
"JAbYHmVFiIj",
"pjtNrbvRTFX",
"MTfm1qNn6IL",
"4cv__hfF139",
"Avcd5ZRyI4J",
"h0tDYJjkh6V",
"vGgzTaa1pZi",
"P09UwO0zaG9",
"6PRfNw-4rRG"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We appreciate your response and acknowledgement of our clarifications for the paper. Thanks for your further suggestions about the group lasso step. As you suggested, to better clarify the group lasso step, we will add some results in the supplements.",
" Thank you for your comments clarifying the PDX data and group lasso application. The additional group lasso results is a welcome addition to the supplement and also feel it does not need to be in the main manuscript.",
" We appreciate your prompt response and acknowledgement of our responses. We are grateful for the increased score which is very encouraging. Following your suggestion, we will add more clarifications about the point 3 in our paper. As you pointed out, a possible treatment effect term $U(A)$ that only contributes from the treatments is often considered. This treatment-specific term can be combined into the intercept term in $T(Z, A)$ in our model. Therefore, our proposed model is flexible to deal with this term. ",
" Thank you for your comprehensive responses that helped me to better understand this work, as well as the proposed clarifications for the paper and the supplementary materials. Perhaps point 3 could be very briefly clarified also in the paper?",
" Thank you for the comprehensive summary and constructive comments. Please see the following response to your concerns. \n\n1. Reply to your comment \"The model is designed for continuous outputs whereas discrete ones may be observed for several treatments/pathologies.\": \n\nAlthough our paper focuses on the continuous outcome case, we can extend our method to deal with discrete outcome (use generalized linear models) or survival outcome. We just need to replace the loss term accordingly and still keep the fusing penalty to achieve treatment clustering. \n\n2. Reply to your comment \"The model backbone is a linear regression and may underfit the data in various situations.\" and \"Have the authors considered investigating more flexible models by using for instance a kernelized version of linear regression?\":\n\nOur paper mainly deals with linear regression. The more flexible regression function can be generalized by adding the polynomial terms and implementing kernel regression as you suggested. We can still use fusing penalty in that case. We will explore more in the future. \n\n3. Reply to your comment \"I think that the interpretation of random variable $A$ should be clarified at the beginning of Section 2. Indeed, should it be understood as the identity of the best treatment, or is it the result of some sampling distribution (e.g. uniform) from which the data is obtained? Because it is difficult to imagine obtaining data from the former, I suspect the correct interpretation is the latter. The action sampling process in the first experiment of Section 5 should be documented.\":\n\nFor the clarification about $A$ we used in the paper, just as you indicated, $A$ is denoted as the assigned treatments from some sampling distribution. The recommended treatment is described by $D(X)$. We will further clarity this at the beginning of Section 2.\n\n4. Reply to your comment \"How is the term $M_0(X)$ estimated in the real data experiment in Section 5?\":\n\nFor estimating the main effect $M_0(X)$, please refer to Section A.1 in the supplementary materials. We include detailed description for both parametric and non-parametric estimation methods. \n\n5. Reply to your comment \"For better readability, I suggest not to use bold letters for scalar variables such as $\\zeta_{a,k}$.\":\n\nThanks for your suggestion about the terminology. We will revise the paper accordingly to improve the presentation. \n",
" Thanks for your summary of the paper and constructive comments. Please see our following clarifications below. \n\n1. For your concerns about dividing the problem into two convex optimization problems, using group lasso to identify $X$ and $V$ from $Z$ (first optimization problem) is helpful but is not necessary. Without this step, we can still implement our proposed fusion penalty (second optimization problem) to cluster the treatments. The group-lasso step is helpful to save some computational time (fuse a lower dimensional vector) in the second step and may improve the performance of estimated ITR. Our methodology and theoretical contributions mainly focus on the fusion step that solves the ITR problem. \n\n2. Thanks for your interesting comments on the PDX data analysis. When including all the treatments, recommending BYL+BYN, which has the largest treatment effect for $\\textbf{most}$ patients, to $\\textbf{all}$ patients gives a mean value of 0.121, compared with a mean value of 0.125 (larger the better) if individualized treatment recommendation is implemented. We will add this analysis in the paper. Thus, individualization is still helpful though not significant. Furthermore, our estimated treatment structure shows that the combination treatment BYL+BYN forms a group itself (see the left panel of Figure 5), which provides informative and consistent knowledge about the superior performance of BYL+BYN. However, BYL+BYN is a combination treatment of two medications, which could induce high cost and possible complicate side effects. That’s why we also exclude the combination treatments and implement our method for the single treatment set (see in the right panel of Figure 5) to find the “second” optimal individualized treatment recommendation. In this case, without the dominant performance of BYL+BYN, the individualized recommendation has significant better performance than any other non-personalized treatment rule. \n\n3. For your comments about the possible $U(A)$ term in Equation (1), this treatment specific effect can be indeed combined into the intercept term in $T(Z, A)$. Our paper deals with this term exactly as you indicated. \n\n4. Here is a clarification about your second minor comments. As long as the variable contributes to the interaction term $T$ (can contribute to both $T$ and $M$, or only contribute to $T$), it is considered as the heterogeneous variables $X$. If the variable only shows up in the main effect $M$, it belongs to the homogeneous variables $V$. However, as mentioned in our clarification shown in 1, the prior knowledge or the group lasso step about $X$ and $V$ is helpful but not required. In practice, we may exclude some sensitive characteristics from $X$ due to the consideration of fairness. \n\n5. The terminology of rewards and value function are commonly used in the individualized decision-making literature and are not restricted in the reinforcement learning area. Please refer more details in the following key references in the ITR literature: (1) Qian and Murphy. (2012), Performance guarantees for individualized treatment rules, Annals of Statistics; (2) Zhao et al. (2012) Estimating individualized treatment rules using outcome weighted learning, Journal of the American Statistical Association; (3) Chen et al. (2020) Representation learning for integrating multi-domain outcomes to optimize individualized treatments, Neurips 2020. \n\n6. 
For the evaluation metric we utilized in line 311, it is a typical evaluation criterion that is commonly used in the individualized decision-making area (see the above papers in 5 for more details about the interpretation). Compared to the possible misspecification and overfitting issue from predicting the mean effect of treatments, this metric is a nonparametric, unbiased and robust estimator of the value function. \n\n7. Sorry for the confusing description about the “Response Scaled” in Figures 3 and 4. The response is defined in “Rashid et al. (2021), High-dimensional precision medicine from patient-derived xenografts, Journal of the American Statistical Association”. The defined response is specific in PDX analysis. It is related to the size of tumor. We just follow this definition and words in the above paper. We will clarify this in the supplements. \n\n8. Thanks for the suggestion about the clear demonstration for the comparison methods. We will add this in the supplements. \n\n9. We will try more real data and comprehensively validate our methods in the future. \n\n10. For the social impact, if a certain patient group in the training data is under representative, we can reweight the samples to alleviate the fairness issue. We will add this in our discussion section.\n\n",
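For readers unfamiliar with the criterion referenced in point 6, the standard inverse-propensity-weighted value estimate used in this literature (written here schematically; the paper's exact expression may differ) takes the form

```latex
\widehat{V}(d) \;=\;
\frac{\sum_{i=1}^{n} Y_i \,\mathbb{1}\{A_i = d(X_i)\} \,/\, \pi(A_i \mid X_i)}
     {\sum_{i=1}^{n} \mathbb{1}\{A_i = d(X_i)\} \,/\, \pi(A_i \mid X_i)},
```

where $d$ is the estimated rule, $\pi(a \mid x)$ is the treatment-assignment probability, and larger values indicate a better rule.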
" Thanks for your nice summary of the paper and constructive comments. Please see our clarifications below. \n\n1. Thanks for pointing out the confounding issue in the ITR problem. Indeed, the confounding issue is very important in observational studies. The data generalization and the real data (PDX study) we considered in the paper mainly focus on the clinical trial setting. Thus, the confounding issue is not a major concern. For observational studies, our method can be further generalized to deal with confounding under certain assumptions, i.e., using propensity scores. It is an interesting future direction to explore. We will add some brief discussion in the paper.\n\n2. For the first step of our algorithm (use group lasso to identify heterogeneous variables), it is helpful but is not required. Without this step, we can still implement our proposed fusion penalty to cluster the treatments. The group-lasso step is helpful to save computational time (fuse a lower dimensional vector) in the second step and may improve the performance of estimated ITR. Our methodology and theoretical contributions mainly focus on the fusion step. \n\n3. The group lasso step is implemented in our empirical study and the tuning parameters are selected using cross validation. It was shown that it can boost the performance of our estimated ITR. We did not add the comparison results in the main paper due to the page limitation. We will further add that in the supplement. \n\n4. Thanks for the comments about the detailed description of the PDX data. More introduction and background about PDX study can be seen in “Gao et al. (2015), High-Throughput Screening Using Patient-Derived Tumor Xenografts to Predict Clinical Trial Drug Response, Nature Medicine”. We will add this reference in our main paper. \n\n5. We did not use the separate data, and we followed the same preprocessing steps shown in “Rashid et al. (2021), High-dimensional precision medicine from patient-derived xenografts, Journal of the American Statistical Association”. The preprocessing steps are shown to have satisfactory performance in this paper. We will try other preprocessing steps to explore whether we can improve the results. \n\n6. For your question about the alignment between the targets/targeted pathways and the inferred grouping structure, our results shown in the left panel (with all treatments) of Figure 5 is consistent with the biological results in Rashid et al. (2021) mentioned above. The right panel (without combination treatments) of Figure 5 is our new discovery about the PDX data. We will look through further references to explore the biological interpretations for single treatments.\n",
" Thanks for your nice summary of the paper and constructive comments. For your questions, please see the following clarifications. \n\n1. For your comment about the hierarchical structure of the treatments, our algorithm does not use some two-step procedures. Instead, we automatically generate the dendrogram of the treatments by running our algorithm using different tuning parameters as a $\\textbf{solution path}$. Our dendrogram is different from the standard understanding of the dendrogram generated from some specific hierarchical clustering algorithms. In contrast, in our Figures 2 and 5, the $y$ axis corresponds to the tuning parameter $\\lambda$ rather than an explicit measure of “closeness” among treatments. Recall that the $\\lambda$ showing up in the penalty term from Equations (2), (4) and (7) would encourage the treatments with similar treatment effects to merge into treatment groups. Thus, as $\\lambda$ increases (corresponds to our dendrogram generating from bottom to top), the treatment structure will change from “no structure” (each treatment themselves is a treatment group, $\\lambda = 0$), to the structure that all treatments are merged into one group ($\\lambda \\to \\infty$). For each fixed $\\lambda$, the treatment structure can be directly recovered by the group structure of the estimated parameter $\\widehat{\\mathbf{\\beta}}$ in Equation (7). Therefore, our dendrogram can be better interpreted as the $\\textbf{solution path}$ of the treatment clustering process (indicate which treatments are combined into one group when the turning parameter changes). More specifically, the whole solution path (dendrogram) can be automatically drawn using the solutions of Equation (7) as $\\lambda$ increases. We will make this point more clear in the revision.\n\n2. Thanks for pointing out the time/space complexity issue. Due to the usage of proximal gradient descent algorithm, the time and space complexities are both $\\mathcal{O}(n^2)$, where $n$ is the training sample size. We will add this in the revision. \n\n3. The comparison methods do not consider the group structure of the treatments. Hence, they cannot provide a dendrogram that demonstrates the solution path of the clustering process. Please refer to the details in our clarifications about your first concern above. \n\n4. Yes. if the vector $\\widehat{\\mathbf{\\xi}}_k = 0$, then $Z_k$ is classified as $V$. \n\n5. Finally, we would like to discuss the issue of bias you mentioned in the limitations. (a) For the possible under representative issue in the training data, we can further improve our algorithm to protect fairness, i.e., recover the full representation for the target population, by incorporating some weights. (b) For the possible bias of the recommended treatment, recall that we merge the treatments into the same treatment groups because they have similar treatment effects and, hence they should be close to each other within the same group. As a result, the treatment effect bias should be small.\n",
" This paper proposed a method for learning the optimal individualized treatment rule (ITR). It focuses on a situation of having extensive treatment options but limited observations for only a small number of specific treatments. This work designs an algorithm to merge similar treatments and provide optimal ITR in a reduced treatment space. The model is trained based on a designed convex optimization problem with an adaptive proximal gradient algorithm. The empirical study reveals its ability to detect the structure of the treatment space and merge similar ones to enable the optimal ITR from many treatment options. A theoretical analysis is also provided. The algorithm focuses on a more practical situation of optimal ITR in precision medicine. The author models the problem as a convex minimization problem and solves it with an accelerated gradient method. This algorithm can provide a hierarchical structure of the relationship between treatments. By merging similar treatments, users can give optimal ITR on a reduced treatment space which is more feasible in practice.\n\nI have the following concerns:\n1. It looks like the final result automatically generates the hierarchical structure of the treatments. However, I didn't understand how such a hierarchical structure could happen by just solving the convex minimization problem. Your algorithm seems can only provide the proximity/dissimilarity matrix on treatments. Did you later apply any linkage-based hierarchical clustering algorithms on the proximity/dissimilarity matrix to generate the structure? Please specify.\n2. Is there an analysis of your algorithm's time/space complexity?\n3. Did you compare with other methods that can also provide a dendrogram? \n4. In lines 170-171, should it be if \\hat{\\xi}_k = 0, then Z_k is classified as V? \n\n Referred to Strengths and Weaknesses Will such a treatment algorithm introduce any bias to the patients?",
" The authors consider the problem of finding optimal individualised treatment\nrules – an important problem in precision medicine – that assigns treatments based\non covariates which may change over time. The particular setting of interest\nis one where many treatments exist and where treatments have a structure induced\nby shared mechanisms of action (e.g., different drugs that target the same pathway).\n\nThe proposed approach uses a fusion penalty term that encourages clustering\nbetween treatments. This incorporates the structure assumption in a tunable way,\nand allows sharing of information between infrequently observed treatments and\nmore frequently observed ones that are related.\n - The literature is well covered in their citations, with the exception of\n confounding. There are works considering ITRs in the context of confounding\n through the use of instrumental variables. This is not strictly a problem\n since the authors aren't considering confounding.\n\n- The work builds on existing with the novelty being the fusion term.\n\n\n- The formulation is straightforward and easy to understand, as is the writing\n and presentation.\n\n- There are two parts to the method, first there's the group lasso based\n classification into homogeneous and heterogenous variables, and second is the\n optimisation of the main objective function with the fusion penalty term. Most\n of the work concentrates on the second part: the theoretical properties assume\n the hetrogeneous variables are given and the experiments do not specifically\n investigate the identification of homo/hetrogenous variables. There is a short\n note on lines 150–152 stating it has been empirically observed to be more\n effective, but these are not shown anywhere. Furthermore, it's unclear if the\n group lasso classification was used in experiments or not, and if it was how\n the hyperparameter was chosen.\n\n- The theoretical results are relevant and interesting but are limited to the\n fusion model only.\n\n- The PDX study is not sufficiently described, it is unclear exactly which\n dataset is used as no reference is provided. There is a citation to another\n methods paper, cited as the authors have used their preprocessing steps, but\n no citation to the main data reference.\n - The PDX study employes a number of preprocessing steps, some of which are\n supervised. How was bias avoided with the supervised preprocessing? Was\n separate data used?\n\n- The treatments employed are surely employing drugs with known targets. Do the\n targets/targeted pathways align with the inferred grouping structure?\n - Given the strong patient focus in the intro, the confounding limitation should\n be discussed. The PDX study used in experiments is free of confounding, but\n typical observational patient data is not.\n",
" The work proposes a machine learning approach for precision medicine applications, where one needs to choose from a set of suitable treatments the best one for a given patient. Further, the proposed approach allows clustering similarly behaving treatments together, and the authors give theoretical guarantees that the underlying clustering will be recovered.\n\nThe proposed approach combines together a number of different approaches including:\n- Basic regression model, where the outcome is modelled as a sum of a main effect M(z), and interaction effect T(z,a) \n- After solving a standard regression problem to recover M(z), group Lasso is used to find the subset of features that may contribute to the interaction effect\n- Proposed adaptive proximal gradient algorithm is used to predict the remaining residual with T(z,a), and a pairwise regularizer is used to divide the treatments into clusters\n\nExperiments on simulated data show that the approach works on finding the true underlying structure on simple simulated problems, and evaluation on a cancer treatment data set shows both competitive performance against basic ML regression approaches, as well as the treatment structure recovered by the method.\n\n The paper addresses an important problem in precision medicine, and provides ideas that appear to be novel about how to, in addition to predicting, simultaneously cluster together similar treatments. The writing is fairly clear, the work and contributions well motivated, and I found the mathematical presentation to be rigorous yet accessible.\n\nThe proposed learning approach seems reasonable, though to me some of the choices made to make divide this into a sequence of solvable (convex) optimization problems feel a bit like not so well justified \"hacks\" (solving for M(z) and T(z,a) separately, the assumption that Z decomposes nicely to X and V that can be recovered with group Lasso...). Theoretical analysis seems convincing, but I cannot verify that all the details are correct.\n\nThe experimental analysis on the PDX study is interesting, though from results on a single data set it is difficult to ascertain yet how well the method can be in general expected to perform. The one thing that was left a bit unclear to me was does the proposed approach perform more accurate predictions, than the simple approach of doing non-personalized predictions (i.e. always predicting the one treatment that works best on training data, on the benchmark data most likely usually the BYL+BIN treatment). Based on zooming on Figures 3 and 4 this might be the case, but this could be more clearly discussed in the text. \n\nMinor comments:\n\nEquation (1): I was a bit surprised there was not in addition to M(z) and T(z,a) a U(a) type of term that would depend only on the treatment, since in addition to personalized effects one would except some of the treatments to be overall superior to others. E.g. the BYL+BIN treatment in the PDX study. Though I suppose such effect can be encoded into T(z,a) with constant covariate for z, and identity encoding covariate for a...\n\n\"In practice, one may assume that only certain elements of Z...[divide into X and V]\" - this is not intuitively obvious to me, as in many cases one could imagine same covariates contributing both to M and Z? 
Any plausible real-world examples of this?\n\nI did wonder if some of the reinforcement learning type of terminology used in the beginning (rewards, value functions) were necessary, since in the end the considered setup is regression where training data has been gathered in advance, and gathering feedback and considering exploration/exploitation types of tradeoffs is not considered. The presentation could be simpler by leaving these connections out, but perhaps there is also value in making these connections.\n - How does the proposed method compare on real data to the baseline that always chooses the treatment that worked best on average on training data, and predicts the mean effect for that treatment? I find it a bit difficult to interpret from the metrics provided, how much is actually learned from the data compared to just predicting the mean.\n- What does the \"response scaled\" -metric mean, what units, scaled how? Maybe something to add to the supplementary?\n- Regarding comparison to the baseline methods, it would be good to tell in the supplementary materials the used parameter grids, and also what kind of feature representations these methods were supplied with. For example, if the goal is to learn both M(z) and T(z,a), in what form was the identity of the treatment a supplied to the baseline methods? The code is provided, but it would be good to be able to tell these details already from reading paper + supplementaries, in order to understand what the comparison means. Limitations and negative societal impact not considered. The one possible limitation that comes to my mind, is that the method should be much more comprehensively validated on more realistic data sets and settings, before it could be ever considered for making treatment decisions for real-world patients.",
" This submission deals with treatment selection among a large collection of candidate ones for a specific individual or a group of individuals which share common response to the treatment. The authors propose to adopt a treatment clustering strategy by gathering treatments with similar effects. \n\nThe proposed approach follows the line of thought of some prior arts in which a statistical regression model is defined for the reward viewed as a function comprising a general and common effect incurred from all treatments plus some treatment specific effects and an additive noise. The common and treatment specific parts of the reward are functions of the patient whose information is embedded in a fixed-size vector. The goal of the authors is to learn the regression coefficients based on supervision coming from data in the form triplets (patient, treatment, reward). They leverage a group promoting penalty to encourage treatments with similar effects to be clustered together. \nTo rule out some of the covariates inside the vectors of patient features, the authors first use a group lasso (on the same reward regression) as a preliminary step. Finally, the main regression problem has properties allowing it to be solved by an accelerated proximal gradient algorithm. \n\nThe paper is technically sound and quite well written. A theorem proves the consistency of the estimator under technical (but not unusual) assumptions. \nPros : \n- the paper incorporates a novel aspect (treatment grouping) in a standard model used for personalized treatment selection\n- the proposed method has a theoretical group-consistency property\n- the efficiency of the approach is experimentally validated on synthetic and real data\n\nCons : \n- the model is designed for continuous outputs whereas discrete ones may be observed for several treatments/pathologies\n- the model backbone is a linear regression and may underfit the data in various situations Major : \n\nHave the authors considered investigating more flexible models by using for instance a kernelized version of linear regression ? \n\nI think that the interpretation of random variable $A$ should be clarified at the beginning of section 2. Indeed, should it be understood as the identity of the best treatment, or is it the result of some sampling distribution (e.g. uniform) from which the data is obtained ? Because it is difficult to imagine obtaining data from the former, I suspect the correct interpretation is the latter. The action sampling process in the first experiment of section 5 should be documented. \n\nHow is the term $M_0(.)$ estimated in the real data experiment in section 5 ?\n\nMinor : \n\nFor better readability, I suggest not to use bold letters for scalar variables such as $\\zeta_{a,k}$. Not adapted to discrete rewards and limited learning capacity."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
4
] | [
"3K07sqDpaP",
"4cv__hfF139",
"JAbYHmVFiIj",
"MTfm1qNn6IL",
"6PRfNw-4rRG",
"P09UwO0zaG9",
"vGgzTaa1pZi",
"h0tDYJjkh6V",
"nips_2022_nYrFghNHzz",
"nips_2022_nYrFghNHzz",
"nips_2022_nYrFghNHzz",
"nips_2022_nYrFghNHzz"
] |
nips_2022_sipwrPCrIS | Self-Consistent Dynamical Field Theory of Kernel Evolution in Wide Neural Networks | We analyze feature learning in infinite-width neural networks trained with gradient flow through a self-consistent dynamical field theory. We construct a collection of deterministic dynamical order parameters which are inner-product kernels for hidden unit activations and gradients in each layer at pairs of time points, providing a reduced description of network activity through training. These kernel order parameters collectively define the hidden layer activation distribution, the evolution of the neural tangent kernel, and consequently output predictions. We show that the field theory derivation recovers the recursive stochastic process of infinite-width feature learning networks obtained from Yang & Hu with Tensor Programs. For deep linear networks, these kernels satisfy a set of algebraic matrix equations. For nonlinear networks, we provide an alternating sampling procedure to self-consistently solve for the kernel order parameters. We provide comparisons of the self-consistent solution to various approximation schemes including the static NTK approximation, gradient independence assumption, and leading order perturbation theory, showing that each of these approximations can break down in regimes where general self-consistent solutions still provide an accurate description. Lastly, we provide experiments in more realistic settings which demonstrate that the loss and kernel dynamics of CNNs at fixed feature learning strength is preserved across different widths on a CIFAR classification task. | Accept | This paper analyzes a dynamical mean field theory that describes feature learning via gradient flow for certain infinite-width neural networks. Self-consistent equations for the order parameters characterizing the dynamics are presented and methods for approximate numerical evaluation are discussed. Overall, this is a solid paper that advances the theory and understanding of feature learning for neural networks of large width and the reviewers and I unanimously support acceptance.
| train | [
"CqgRDH3II06",
"ZytGDh-hIIQ",
"nmoQp2HdAf3",
"-tJyXXATcHW",
"4F7SlhCmcAV",
"fN2B_AKwdnQ8",
"9C89tFueSO6l",
"SyKL5dbsmT9D",
"eZzX_uTZ2n2",
"hfK6alENLU7i",
"VOnvByZFV5T",
"kt3ed3BDw8C",
"ge5iRWS860y",
"PFDy_5LUVp",
"NFRhnntgIlH"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your detailed responses. I appreciate that the authors make an effect to further address my concerns. Thus I decide to keep my score and recommend acceptance.",
" I'm grateful to the authors for their detailed response to all my questions. I still feel confident that the paper should be accepted and will keep my current score.",
" I appreciate the authors' efforts in making the following changes: \n- Including deviation of the simple $L=1$ case. \n- A more comprehensive comparison with Yang&Hu and follow-up work. \n- An acknowledgement that the derived formula is the same as in Yang&Hu. Note that the techniques used here are very different, which, in my opinion, is very a valuable contribution to the community. \n- Addressing several points raised in the review, e.g., the non-rigorousness nature of the approach. \n\nOverall, this is a very strong submission and I raise my score accordingly. Congrats to the authors! \n",
" \nWe also want to highlight some novel aspects of our work which expand on the original analysis of Yang & Hu. Concretely, these novel aspects are\n\n1. Giving a novel derivation of the infinite width limiting stochastic process behavior in the mean field regime using techniques from statistical physics.\n2. Provide a polynomial time/space numerical algorithm to solve the saddle point equations.\n3. Allowing for any feature learning strength by including a richness parameter $\\gamma_0$ and studying the accuracy of DMFT and various approximations across $\\gamma_0$.\n4. Performing a cursory analysis of finite size $N<\\infty$ effects in Appendix P.7. Our DMFT action makes such computations technically straightforward.\n5. Exploring the role of regularization (Appendix J) and Langevin noise (Appendix K), momentum (Appendix L). \n6. Giving an exact one-dimensional dynamics of linear networks trained on whitened data for arbitrary $\\gamma_0$ (Section 4.1 and Appendix F.1.1). ",
" \n\n#### Interpretability of the Theory & Learned Features\nWe agree that at this stage the feature evolution equations are complicated nonlinear coupled integral equations which are not immediately interpretable.\n\nTo improve interpretability of our DMFT stochastic process notation in Equation 10, we eliminated the step functions $\\Theta(t-s)$ and instead now just integrate both terms for $s\\in(0,t)$. This is legitimate since the response functions $A, B$ are causal. \n\nWe think that developing more in depth interpretation of these equations, perhaps in special limits, could be useful in follow up works. However, we do want to defend the following insights which we think give some interpretation of our result:\n1. Each neuron's activation and gradient signal is an iid draw from a distribution defined by $\\mathcal Z^\\ell$. \n2. The updates to the pre-activations and pre-gradients are $O(\\gamma_0)$. The $\\gamma_0 \\to 0$ limit is just a Gaussian, which recovers the static NTK picture where $\\Phi$, $G$ can be computed at init and treated as constants through time.\n3. The feature learning updates are recursive nonlinear compositions of Gaussian random variables $u^\\ell,r^\\ell$.\n4. The preactivation $h^\\ell$ update depends on the history of $\\Phi^{\\ell-1}$ while the pre-gradient update depends on the history of $G^{\\ell+1}$ which intuitively shows that the $\\Phi^\\ell$ kernels accumulate corrections from first layer to last while $G$ kernels accumulate update from last layer to first. This is also visible from perturbation theory (see Appendix P)\n5. The $A$ and $B$ kernels quantify the sensitivity of feedforward signals to the feedback fields and vice versa. \n6. All dyamical updates depend on $\\Delta_\\mu = - \\frac{\\partial \\mathcal L}{\\partial f_\\mu}$. Network predictions evolve according to the dynamical NTK $K = \\sum_\\ell G^{\\ell+1} \\Phi^\\ell$. Empirically we see that the feature kernels tend to align to the target function $yy^\\top$ which provides accelerates learning.\n\n#### Connection to Prior Works including Yang & Hu 2021\n\nWe appreciate the reviewer's interest in the connection between our work and the work on the $\\mu P$ limit of Yang & Hu. After more carefully reviewing Yang & Hu's derived stochastic process for fields, we find that the field equations we derived with our DMFT at $\\gamma_0=1$ and discrete time agree with those derived by Yang & Hu with Tensor programs. We added several comments crediting Yang & Hu's work in this new draft of our manuscript. \n\n1. In our abstract we now write \n\"We show that the field theory derivation recovers the recursive stochastic process of infinite-width feature learning networks obtained from Yang \\& Hu with Tensor Programs \\cite{yang2021tensor}.\"\n2. In the introduction we write \n\"Using the Tensor Programs framework, Yang \\& Hu identified a stochastic process that describes the evolution of preactivation features in infinite-width $\\mu P$ NNs \\cite{yang2021tensor}. In this work, we study an equivalent parameterization to $\\mu P$ with self-consistent dynamical mean field theory (DMFT) and recover the stochastic process description of infinite NNs using this alternative technique. In the same large width scaling, we include a scalar parameter $\\gamma_0$ that allows smooth interpolation between lazy and rich behavior \\cite{chizat2019lazy}. We provide a new computational procedure to sample this stochastic process and demonstrate its predictive power for wide NNs.\"\n3. 
In the Related Works section we added the following paragraph:\n\n\"Our results are most closely related to a set of recent works which studied infinite-width NNs trained with gradient descent (GD) using the Tensor Programs (TP) framework \\cite{yang2021tensor}. We show that our discrete time field theory at unit feature learning strength $\\gamma_0 = 1$ recovers the stochastic process which was derived from TP. The stochastic process derived from TP has provided insights into practical issues in NN training such as hyper-parameter search \\cite{yang2021tuning}. Computing the exact infinite-width limit of GD has exponential time requirements \\cite{yang2021tensor}, which we show can be circumvented with an alternating sampling procedure. A projected variant of GD training has provided an infinite-width theory that could be scaled to realistic datasets like CIFAR-10 \\cite{yang2022efficient}. Inspired by Chizat and Bach's work on mechanisms of lazy and rich training \\cite{chizat2019lazy}, our theory interpolates between lazy and rich behavior in the mean field limit for varying $\\gamma_0$ and allows comparison of DMFT to perturbative analysis near small $\\gamma_0$. Further, our derivation of a DMFT action allows the possibility of pursuing finite width effects.\"
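\n\nAs an aside, insight 6 above can be written as an equation. Under gradient flow on the loss $\mathcal L$, the network outputs obey the standard kernel relation (our paraphrase, in the notation of this reply):\n$$\frac{d}{dt} f_\mu(t) = \sum_{\nu} K_{\mu\nu}(t)\, \Delta_\nu(t), \qquad \Delta_\nu(t) = -\frac{\partial \mathcal L}{\partial f_\nu(t)}, \qquad K = \sum_\ell G^{\ell+1} \Phi^\ell,$$\nwith the difference from the lazy regime being that $K$ here evolves in time through the feature kernels.\n",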
" \n### Strengths\n\n1. *Generating several previous DFMT related works to multiple layer networks setting*\n2. *Technical contribution in deriving self-consistent equations*\n3. *Proposed framework captures several related existing works, e.g. finite-size correction [26, 27] in the paper.*\n4. *Good agreement between theory and (small scale) simulation.*\n\nWe thank the reviewer for their careful reading and review and for their appreciation of these aspects of our paper. \n\n### Weaknesses\n\n#### Lack of Rigor and Conditions Under Which Theory Holds \n*The calculations are very far from rigorous. It is totally unclear that under what assumptions the results of the paper are correct... The non-rigorousness nature of the approach should also be discussed*\n\nWe do not have a rigorous proof which starts from a collection of sufficient conditions and proceeds to prove the asymptotic validity of our derived DMFT equations. Rather our derivation relies on heuristics (a saddle point technique), which is commonly employed in statistical physics. Necessarily our method requires that the activation functions have a well defined second weak derivative for $L\\geq 2$ (so that the $A,B$ order parameters are well-defined). Further our theory will be valid in a regime where $P,T \\sim \\mathcal{O}_N(1)$. Phenomena where the number of timesteps or samples scales with $N$ are currently inaccessible within the DMFT equations and require alternative techniques. It is possible, for instance, that effects where $T \\sim \\log N$ such as in this work (https://arxiv.org/abs/2202.04509), are not detectable in the DMFT limit. We add this stipulation on $T,P$ at the beginning of Section 3 \n\"Next, we derive our self-consistent DMFT in a limit where $t, P = \\mathcal{O}_N(1)$\"\n\n\nIn our discussion, we add information about the limitations of this assumption and the lack of rigor in our approach:\n\"Though our DMFT is quite general in regards to the data and architecture, the technique is not entirely rigorous and relies on heuristic physics techniques. Our theory holds in the $T,P = \\mathcal{O}_N(1)$ and may break down otherwise. Other asymptotic regimes (such as $P/N, T/\\log(N)=\\mathcal{O}_N(1)$, etc) may exhibit phenomena relevant to deep learning practice.\"\n \n\nFinite size $N$ effects at fixed $P,T$ as well as other asymptotic regimes $P/ N = O_N(1)$ are worthy of future investigation. The rate of convergence with $N$ of the kernels to the DMFT predictions is also worth future investigation. \n\n\n#### Making a Simpler 2-Layer Derivation \n*The current presentation is not very friendly to readers without DMFT background. E.g. Section 3 (main theoretical contribution) is very hard to follow and hard to extract key ideas and insights behind the equations and the techniques for deriving them. Walking the readers through the deviation in the simplest possible setting (e.g. 2-layer linear networks) will be much appreciated.*\n\nWe thank the reviewer for this great suggestion. We added a step-by-step derivation for the two layer case in the new Appendix D.2. We mention this new Appendix early in section 3.1: \"For a simplified analysis of the $L=1$ case, see Appendix D.2\"\n",
" \n#### Computational Obstacle for Linear Networks\n*For linear networks, is the main obstacle to scaling the experiments to larger datasets in solving the linear system of Equation (12) or are there other challenges?*\n\n* Yes, solving equation 12 is the only required computational step to analyze deep linear networks. If the data is whitened in deep linear networks, one can reduce the equations to a system on $T \\times T$ matrices, since evolution of the features only occurs in a single direction in sample space (see last paragraph of 4.1 and the new Figure 7 for 2 layer example). The new Appendix F.2 discusses whitened data in the deep linear ($L\\geq 2$) case.\n\n### Typos\n*Citation on line 198.\n\"numerically method\" on line 249.\n\"Which\" and \"that\" are mixed up several times.\nSome compound adjectives are missing hyphens, e.g. \"infinite width neural networks\" -> \"infinite-width neural networks\"\n\"Figures 5 and Figure 6\" on line 165*\n\n\n* We thank the reviewer for pointing out these typos. They have been addressed.",
" \n\n#### Interpretation of the Features Learned in this Limit\n*There is little interpretation of the features that are learned in this limit.*\n\nWe agree that at this stage the feature evolution equations are complicated nonlinear coupled integral equations which are not immediately interpretable.\n\nTo improve interpretability of our DMFT stochastic process notation in Equation 10, we eliminated the step functions $\\Theta(t-s)$ and instead now just integrate both terms for $s\\in(0,t)$. This is legitimate since the response functions $A, B$ are causal. \n\nWe think that developing more in depth interpretation of these equations, perhaps in special limits, could be useful in follow up works. However, we do want to defend the following insights which we think give some interpretation of our result:\n1. Each neuron's activation and gradient signal is an iid draw from a distribution defined by $\\mathcal Z^\\ell$. \n2. The updates to the pre-activations and pre-gradients are $O(\\gamma_0)$. The $\\gamma_0 \\to 0$ limit is just a Gaussian, which recovers the static NTK picture where $\\Phi$, $G$ can be computed at init and treated as constants through time.\n3. The feature learning updates are recursive nonlinear compositions of Gaussian random variables $u^\\ell,r^\\ell$.\n4. The preactivation $h^\\ell$ update depends on the history of $\\Phi^{\\ell-1}$ while the pre-gradient update depends on the history of $G^{\\ell+1}$ which intuitively shows that the $\\Phi^\\ell$ kernels accumulate corrections from first layer to last while $G$ kenels accumulate update from last layer to first. This is also visible from perturbation theory (see Appendix P)\n5. The $A$ and $B$ kernels quantify the sensitivity of feedforward signals to the feedback fields and vice versa. \n6. All dyamical updates depend on $\\Delta_\\mu = - \\frac{\\partial \\mathcal L}{\\partial f_\\mu}$. Network predictions evolve according to the dynamical NTK $K = \\sum_\\ell G^{\\ell+1} \\Phi^\\ell$. Empirically we see that the feature kernels tend to align to the target function $yy^\\top$ which provides accelerates learning. \n\n### Questions\n\n#### Finite vs Infinite Network Behavior/Performance\n*Do the authors have any sense how much of the performance gap between finite-width neural networks and infinite-width neural networks is closed by the feature learning in this limit?*\n\nThis is a very important question which we currently do not know the answer to. Our DMFT holds in a regime where $N \\gg T, P$ and where each hidden layer is sufficiently wide for the kernels to become initialization-independent quantities. Realistic networks may not be in this regime. \n\nHowever, we attempted to provide some empirics which give insight into how well finite width $N$ networks are well approximated by our theory. \n\n1. In Figure 1 (f) and (i) we show the cosine similarity between kernels predicted by DMFT and the empirical kernels of a width $N$ network for varying $N$. For this small problem in Figure 1, the kernel converges nearly perfectly to DMFT behavior after $N \\sim 100$. \n\n2. Further, in cases where the learning problem is too big for us to simulate our theory, we can still attempt to compare the behavior of the network at different $N$ for fixed $\\gamma_0$. In Figure 4, we attempted to show that for $N \\in \\{250,500\\}$ networks trained on 2 classes of CIFAR-10 the behavior of the network at different $N$ is almost identical if $\\gamma_0$ is the same. 
A likely explanation is that, even at these modest widths, the network is already close to its DMFT behavior. We are currently working on larger experimental sweeps over $N$ of this kind to visualize the convergence behavior of the kernel and loss dynamics, but have not yet finished these larger experiments. We will include them in the final version.\n\n3. We are also trying to make progress on this question on the theory front. In the new Appendix P.7, we attempt to analyze leading order $\mathcal{O}( \frac{1}{N} )$ corrections to the dynamical kernels. We find that the leading order finite size effects are variance-inducing, i.e. lead to fluctuations in the kernels over $\theta_0$ (over the distribution of inits). We added a computation of the relevant components of the inverse correlation matrix for these fluctuations in Appendix P.7.1. If one expects that, at fixed $\gamma_0$ and for $N \gg 1$, the learned predictor can be modeled as $f = f_{N =\infty} + \delta f_N$ where $\delta f_N$ is a mean-zero stochastic process uncorrelated with the target function, then by the bias-variance decomposition the expected generalization MSE would decompose as $\left< \mathcal{L}_N \right> \approx \left< \mathcal{L}_{\infty} \right> + \left< (\delta f_N)^2 \right>$, providing a prediction that finite size effects would increase the expected test loss of the model. This needs to be tested.
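\n\nFor concreteness, the decomposition invoked here is the usual one-line computation (assuming $\delta f_N$ is mean-zero and uncorrelated with the residual $y - f_{N=\infty}$):\n$$\left< \mathcal{L}_N \right> = \left< (y - f_{N=\infty} - \delta f_N)^2 \right> = \left< (y - f_{N=\infty})^2 \right> + \left< (\delta f_N)^2 \right> = \left< \mathcal{L}_{\infty} \right> + \left< (\delta f_N)^2 \right>,$$\nsince the cross term $-2\left< (y - f_{N=\infty})\,\delta f_N \right>$ vanishes under these assumptions.\n",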
" \n#### Difference between NNs in this limit and practical finite width networks\n\n*We know that NTK methods cannot fully explain the full capabilities of neural networks. What makes me curious is, what is the difference between the method in this work and the finite width neural network in practical performance, especially in the feature learning area?*\n\nThis is a very important question which we currently do not know the answer to. Our DMFT holds in a regime where $N \\gg T, P$ and where each hidden layer is sufficiently wide for the kernels to become initialization-independent quantities. Realistic networks may not be in this regime. \n\nHowever, we attempted to provide some empirics which give insight into how well finite width $N$ networks are well approximated by our theory. \n\n1. In Figure 1 (f) and (i) we show the cosine similarity between kernels predicted by DMFT and the empirical kernels of a width $N$ network for varying $N$. For this small problem in Figure 1, the kernel converges nearly perfectly to DMFT behavior after $N \\sim 100$. \n\n2. Further, in cases where the learning problem is too big for us to simulate our theory, we can still attempt to compare the behavior of the network at different $N$ for fixed $\\gamma_0$. In Figure 4, we attempted to show that for $N \\in \\{250,500\\}$ networks trained on 2 classes of CIFAR-10 the behavior of the network at different $N$ is almost identical if $\\gamma_0$ is the same. A likely explanation is that, even at these modest widths, the network is already close to its DMFT behavior. We are currently working on larger experimental sweeps over $N$ of this kind to visualize the convergence behavior of the kernel and loss dynamics, but have not yet finished these larger experiments. We will include them in the final version.\n\n3. We also are trying to make progress on this question on the theory front. In the new Appendix P.7, we attempt to analyze leading order $\\mathcal{O}( \\frac{1}{N} )$ corrections to the dynamical kernels. We find that the leading order finite size effects are variance inducing, ie lead to fluctuations in the kernels over $\\theta_0$ (over the distribution of inits). We added a computation of the relevant components of the inverse correlation matrix for these fluctuations in Appendix P.7.1. If one expects that, at fixed $\\gamma_0$, for $N \\gg 1$ that the learned predictor can be modeled as $f = f_{N =\\infty} + \\delta f_N$ where $\\delta f_N$ is a mean-zero stochastic process uncorrelated with the target function, then by the bias-variance decomposition the expected generalization MSE would decompose as $\\left< \\mathcal{L}_N \\right> \\approx \\left< \\mathcal{L}_{\\infty} \\right> + \\left< (\\delta f_N)^2 \\right>$ , providing a prediction that finite size effects would increase the expected test loss of the model. This needs to be tested.\n",
" \n### Strengths \n\n*Overall, the work is of high quality. First of all, in terms of writing, this article has a clear structure and is relatively easy to read. From a method perspective, it is a novel idea to use dynamical field theory to simulate the dynamics of infinitely wide neural networks, especially the feature learning area. From the simulation results, compared with other basic methods, the method proposed by the author can effectively capture the dynamics of the neural network.*\n\nWe thank the reviewer for their support.\n\n\n### Weaknesses\n\n#### Computational Efficiency\n*On the other hand, the method proposed in this work has a relatively large limitation in computational efficiency, and there is still a certain gap from the actual neural network application scenario. However, I also agree that this limitation can be addressed in the next step.*\n\n\nThank you for pointing this out. As we mention in our discussion, the cubic dependence on the samples and timesteps (solution requires $O(P^3 T^3)$ steps) imposes a strict limitation on the applicability of our solution method to realistic scale problems. We mention below some possible ways of improving this scaling in special cases.\n\n1. Our theory can scale to arbitrary sample size $P$ for linear networks when the training data is whitened. For such networks and data, the DMFT can be solved entirely in terms of $T\\times T$ matrices in $O(T^3)$ time. We provided the two layer ($L=1$) example in our original submission. In response to the reviewer comments, we provided the equations for deep $L \\geq 2$ linear networks in the new Appendix F.2. \n2. One can compute the *final* kernels and predictions in $O(P^3)$ time for regularized mean field training with Langevin noise (Gaussian white noise added to weights during training), using the equilibrium distribution. We added an analysis of Langevin training for mean field networks in Appendix K. The equilibrium analysis in Appendix K.3 gives a collection of equations for the kernels and final predictions which close. \n3. Alternatives to exact gradient descent, including the projected $\\pi$ gradient descent of Yang et al 2022 https://openreview.net/forum?id=tUMr0Iox8XW, have been shown to admit more efficient computations of the exact infinite width behavior. It would be interesting for future work to explore a DMFT derivation of these alternative efficient algorithms and to explore other possible alternatives to exact GD which give more efficient infinite width computations.\n\nThere may be other ways we have not yet conceived of yet which give more practical computations of the infinite width feature learning setting.\n\n",
" \n### Strengths\n\n\n1. *Understanding feature learning is arguably the most important open theoretical problem for neural networks and is certainly of interest and significance to the NeurIPS community.*\n2. *Obviously, the work builds on previous techniques and analyses but the paper makes a strong original contribution with interesting results.*\n3. *The discussion of connections to previous work are helpful for the reader to build intuition.*\n\nWe thank the reviewer for their careful reading and for appreciation of our motivation, results, and discussion.\n\n### Weaknesses\n\n#### Calculations in Supplement\n*Most of the calculations are relegated to the supplement. This is probably expected given their complexity.*\n\n* Yes, most of our derivation is placed in the supplement due to space limitations. While this is not ideal, we wanted to focus the main text on the setup and figures displaying results. \n\n#### Rigorous vs Formal Results\n*It is not completely clear what is proved rigorously and what results are formal calculations.*\n\n\nThank you for this comment. We do not have a rigorous proof which starts from a collection of sufficient conditions and proceeds to prove the asymptotic validity of our derived DMFT equations. Rather our derivation relies on heuristics (a saddle point technique), which is commonly employed in statistical physics. Necessarily our method requires that the activation functions have a well defined second weak derivative for $L\\geq 2$ (so that the $A,B$ order parameters are well-defined). Further our theory will be valid in a regime where $P,T \\sim \\mathcal{O}_N(1)$. Phenomena where the number of timesteps or samples scales with $N$ are currently inaccessible within the DMFT equations and require alternative techniques. It is possible, for instance, that effects where $T \\sim \\log N$ such as in this work (https://arxiv.org/abs/2202.04509), are not detectable in the DMFT limit. We add this stipulation on $T,P$ at the beginning of Section 3 \n\"Next, we derive our self-consistent DMFT in a limit where $t, P = \\mathcal{O}_N(1)$\"\n\n\nIn our discussion, we add information about the limitations of this assumption and the lack of rigor in our approach:\n\"Though our DMFT is quite general in regards to the data and architecture, the technique is not entirely rigorous and relies on heuristic physics techniques. Our theory holds in the $T,P = \\mathcal{O}_N(1)$ and may break down otherwise. Other asymptotic regimes (such as $P/N, T/\\log(N)=\\mathcal{O}_N(1)$, etc) may exhibit phenomena relevant to deep learning practice.\"\n \n\nFinite size $N$ effects at fixed $P,T$ as well as other asymptotic regimes $P/ N = O_N(1)$ are worthy of future investigation. The rate of convergence with $N$ of the kernels to the DMFT predictions is also worth future investigation. \n\n\n#### Small Datasets\n*The experiments are on extremely small datasets due to the computational limitations discussed in Section 7.*\n\n\nThank you for bringing this up. As we mention in our discussion, the cubic dependence on the samples and timesteps (solution requires $O(P^3 T^3)$ steps) imposes a strict limitation on the applicability of our solution method to realistic scale problems. We mention below some possible ways of improving this scaling in special cases.\n\n1. Our theory can scale to arbitrary sample size $P$ for linear networks when the training data is whitened. For such networks and data, the DMFT can be solved entirely in terms of $T\\times T$ matrices in $O(T^3)$ time. 
We provided the two-layer ($L=1$) example in our original submission. In response to the reviewer comments, we provided the equations for deep $L \geq 2$ linear networks in the new Appendix F.2. \n2. One can compute the *final* kernels and predictions in $O(P^3)$ time for regularized mean field training with Langevin noise (Gaussian white noise added to weights during training), using the equilibrium distribution. We added an analysis of Langevin training for mean field networks in Appendix K. The equilibrium analysis in Appendix K.3 gives a collection of equations for the kernels and final predictions which close. \n3. Alternatives to exact gradient descent, including the projected $\pi$ gradient descent of Yang et al. 2022 https://openreview.net/forum?id=tUMr0Iox8XW, have been shown to admit more efficient computations of the exact infinite width behavior. It would be interesting for future work to explore a DMFT derivation of these alternative efficient algorithms and to explore other possible alternatives to exact GD which give more efficient infinite width computations.\n\nThere may be other ways we have not yet conceived of which give more practical computations of the infinite width feature learning setting.\n\n",
" We thank the reviewers for their careful reading, appreciation of our paper's strengths, and useful criticism about its weaknesses. Below we provide a list of the updates we made to the paper in response. \n\n### Summary of Changes/Additions to Paper Since Last Draft\n\n1. We added a two layer ($L=1$) warm up derivation in the new Appendix D.2 to provide a friendlier introduction to the method for non-DMFT experts. The two layer case is especially friendly since the only kernels needed are $\\Phi,G$. Further the $\\chi,\\xi$ fields are static and Gaussian for $L=1$, which is not the case for $L\\geq 2$.\n2. We modified our discussion to mention, as an additional limitation, the lack of rigor of our approach and the restriction to non-extensive sample size and time: \"Though our DMFT is quite general in regards to the data and architecture, the technique is not entirely rigorous and relies on heuristic physics techniques. Our theory holds in the $T,P = \\mathcal{O}_N(1)$; other asymptotic regimes (such as $P/N, T/\\log(N)=\\mathcal{O}_N(1)$, etc) may exhibit phenomena relevant to deep learning practice.\" This last stipulation that neither $P$ nor $t$ be extensive in $N$ is now also mentioned at the beginning of section 3.\n3. To improve interpretability of our DMFT stochastic process notation in Equation 10, we eliminated the step functions $\\Theta(t-s)$ and instead now just integrate both terms for $s\\in(0,t)$. This is legitimate since the response functions $A, B$ are causal. \n4. We provided more detailed comparison between our work and the prior work on infinite width feature learning of Yang & Hu. We show that our DMFT technique gives the same stochastic process if we write our evolution equations in discrete time and take $\\gamma_0=1$. This is interesting since two different techniques (DMFT and Tensor Programs) both give the same description of network training. We mention this connection in our new abstract, introduction, related works, and discussion, as well as the final sentence in Section 3.2. We also mention the difference between our polynomial-time algorithm for solving the self-consistent equations and the exact but exponential time algorithm of Yang & Hu in the new introduction contribution list, item 3, as well as in the discussion. The Appendix N shows that the parameterization we consider is equivalent to their $\\mu P$ parameterization and the Appendix N.6 shows the equivalence of the feature evolution equations and provides a dictionary between our notation and theirs for the interested reader. We also mention Yang et al's follow up work $\\pi$ limit, an alternative projected version of gradient descent which has an efficiently computable infinite width limit, in the related works and the discussion.\n5. We have also added several extensions of our results to show the wide range practicability of the DMFT to commonly studied training methods\n \na. We provided theoretical expressions for the leading finite-width $N$ fluctuations in the kernels in the new Appendix P.7.1 by computing components of the Hessian of the DMFT action. This shows additional potential utility of the DMFT formalism, which can, in principle, access finite size effects to start closing the gap between infinite and realistic finite size networks.\n \nb. We added new extensions of our results about deep linear networks on whitened data (Appendix F.2), showing that solution only requires solving for $T \\times T$ matrices. 
We added Figure 7 which verifies that solving the one-dimensional system reproduces accurate loss and kernel alignment dynamics in linear networks. \n\nc. We provided an analysis of networks trained with weight decay (L2 regularization) in Appendix J. For homogeneous networks, the final learned function is a kernel regression solution with the *final* NTK, which can be determined from the field equations. The new Figure 8 shows that this theory is accurate and that the network asymptotes to non-zero train loss at large time when $\lambda > 0$. We make connections to the prior work of [Lewkowycz & Gur-Ari](https://arxiv.org/abs/2006.08643). \n\nd. We analyzed Langevin-trained NNs in the mean field limit (Appendix K) with both dynamical (Appendix K.1-2) and equilibrium (Appendix K.3) analyses. The dynamical analysis at large time (which is like studying $\lim_{t\to\infty} \lim_{N\to\infty} f(N,t)$ for observable $f$) allows all kernels to be written in terms of absolute time differences $\tau = |t-s|$, which could reduce the computational overhead. Further, the equilibrium analysis allows one to solve for the *final* kernels with only $\mathcal{O}(P^3)$ time complexity and has a [Bayesian interpretation](https://arxiv.org/abs/2108.13097). This limit is like studying the behavior of $\lim_{N\to\infty } \lim_{t \to \infty} f(N,t)$. We think that the in-depth similarities and differences between these two limits, as well as the fluctuation-dissipation relationships in DMFT, are worthy of future study. \n",
" This paper analyzes feature learning in infinite-width neural networks trained with gradient descent. In this limit, the distribution over hidden unit activations and gradients in each layer becomes i.i.d. and can be characterized with some order parameters using self-consistent dynamical field theory. These order parameters are inner-product kernels that are determined self-consistently from the aforementioned distributions. These self-consistent equations can be solved exactly for linear networks, as they reduce to matrix equations in this case, or numerically for nonlinear networks. The authors demonstrate very good agreement between their theory and simulations in finite-width neural networks. Finally, limitations of the theory are discussed, showing where the theory breaks down, and a comparison to other analyses of feature learning via perturbations of the NTK is made.\n Strengths\n- Understanding feature learning is arguably the most important open theoretical problem for neural networks and is certainly of interest and significance to the NeurIPS community.\n- Obviously, the work builds on previous techniques and analyses but the paper makes a strong original contribution with interesting results.\n- The discussion of connections to previous work are helpful for the reader to build intuition.\n\nWeaknesses\n- Most of the calculations are relegated to the supplement. This is probably expected given their complexity.\n- It is not completely clear what is proved rigorously and what results are formal calculations.\n- The experiments are on extremely small datasets due to the computational limitations discussed in Section 7.\n- There is little interpretation of the features that are learned in this limit.\n\nMinor typos\n- Citation on line 198.\n- \"numerically method\" on line 249.\n- \"Which\" and \"that\" are mixed up several times.\n- Some compound adjectives are missing hyphens, e.g. \"infinite width neural networks\" -> \"infinite-width neural networks\"\n- \"Figures 5 and Figure 6\" on line 165 - Do the authors have any sense how much of the performance gap between finite-width neural networks and infinite-width neural networks is closed by the feature learning in this limit?\n- For linear networks, is the main obstacle to scaling the experiments to larger datasets in solving the linear system of Equation (12) or are there other challenges? No concerns about negative societal impact.\n\n",
" This work proposes to simulate the dynamics of infinitely-wide neural networks under gradient descent through a self-consistent dynamical field theory. The key idea is to adopt techniques from dynamical kernels, which can govern the evolution of a neural network. The authors provide an analytic result for the linear network while providing a sampling method to approximate the dynamics. Compared to various approximation sketches, the proposed method is shown to obtain consistent solutions across different regime. Lastly, the authors provide experiments in more realistic settings which demonstrate that the method is still valid. Overall, the work is of high quality. First of all, in terms of writing, this article has a clear structure and is relatively easy to read. From a method perspective, it is a novel idea to use dynamicsl field theory to simulate the dynamics of infinitely wide neural networks, especially the feature learning area. From the simulation results, compared with other basic methods, the method proposed by the author can effectively capture the dynamics of the neural network.\n\nOn the other hand, the method proposed in this work has a relatively large limitation in computational efficiency, and there is still a certain gap from the actual neural network application scenario. However, I also agree that this limitation can be addressed in the next step.\n We know that NTK methods cannot fully explain the full capabilities of neural networks. What makes me curious is, what is the difference between the method in this work and the finite width neural network in practical performance, especially in the feature learning area? I appreciate that the authors acknowledge the limitation of expensive computation. I encourage the authors to try to further improve the computational efficiency of the algorithm in future research.",
" In this paper, the authors continues the line of research that uses dynamical mean field theory (DMFT) to study neural networks. \nUsing statistical physics heuristics, they derive self-consistent equations that governs the evolutions of the kernels, the (internal) representations and the gradients of the networks in the infinite-width feature learning regime for deep neural networks. Although these equations are too expensive to solve in the general setting, the authors propose a sampling-based approach to solve the equations numerically. They show good agreement between theory prediction and simulation. Finally, the authors have a very nice discussion explaining that their framework capture several recently developed perturbation-based approaches to understand neural networks. \n\nOverall, I find the paper interesting, insightful and have valuable contribution to the NeurIPS community. ## Strengths \n\n- Generating several previous DFMT related works to multiple layer networks setting\n- Technical contribution in deriving self-consistent equations \n- Proposed framework captures several related existing works, e.g. finite-size correction [26, 27] in the paper. \n- Good agreement between theory and (small scale) simulation. \n\n\n## Weaknesses \n- The calculations are very far from rigorous. It is totally unclear that under what assumptions the results of the paper are correct. \n- The current presentation is not very friendly to readers without DMFT background. E.g. Section 3 (main theoretical contribution) is very hard to follow and hard to extract key ideas and insights behind the equations and the techniques for deriving them. Walking the readers through the deviation in the simplest possible setting (e.g. 2-layer linear networks) will be much appreciated. \n- The self-consistent equations are not very interpretable (at least at the current form). It is not clear, at least to me, what extra insights regarding feature learning (beyond linear models) can we get from those equations.\n- Computationally, these equations are much much more expensive to solve than just training the original networks and unscalable (cubic dependence on both training steps and training samples) - Is it possible to formulate the assumptions under which the results are theoretically sound? \n\n- Yang&Hu (Feature Learning in Infinite-Width Neural Networks) also derives similar (recursive) equations that depend on the distribution of of the (hidden) representations and their derivatives. Followup work (EFFICIENT COMPUTATION OF DEEP NONLINEAR ∞-\nWIDTH NEURAL NETWORKS THAT LEARN FEATURES) also derives efficient approximation scheme to solve the recursive equations, which is scalable to full CIFAR10. Can you add more detailed discussion about these two papers? What are the advantages and disadvantages? Is it also possible to scale up your algorithm to full CIFAR10?\n\n The paper is purely theoretical. No potential negative societal impact as I can tell. \n\nThe authors discussed the limitation regarding computation. The non-rigorousness nature of the approach should also be discussed. "
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"eZzX_uTZ2n2",
"9C89tFueSO6l",
"NFRhnntgIlH",
"4F7SlhCmcAV",
"fN2B_AKwdnQ8",
"NFRhnntgIlH",
"SyKL5dbsmT9D",
"VOnvByZFV5T",
"hfK6alENLU7i",
"PFDy_5LUVp",
"ge5iRWS860y",
"nips_2022_sipwrPCrIS",
"nips_2022_sipwrPCrIS",
"nips_2022_sipwrPCrIS",
"nips_2022_sipwrPCrIS"
] |
nips_2022_GFgjnk2Q-ju | Parametrically Retargetable Decision-Makers Tend To Seek Power | If capable AI agents are generally incentivized to seek power in service of the objectives we specify for them, then these systems will pose enormous risks, in addition to enormous benefits. In fully observable environments, most reward functions have an optimal policy which seeks power by keeping options open and staying alive. However, the real world is neither fully observable, nor must trained agents be even approximately reward-optimal. We consider a range of models of AI decision-making, from optimal, to random, to choices informed by learning and interacting with an environment. We discover that many decision-making functions are retargetable, and that retargetability is sufficient to cause power-seeking tendencies. Our functional criterion is simple and broad. We show that a range of qualitatively dissimilar decision-making procedures incentivize agents to seek power. We demonstrate the flexibility of our results by reasoning about learned policy incentives in Montezuma's Revenge. These results suggest a safety risk: Eventually, retargetable training procedures may train real-world agents which seek power over humans. | Accept | The paper studies an alignment problem - that of agents seeking power - and extends previous work (Turner, 2021), which showed that optimal policies seek power, to demonstrate more generally that parametrically retargetable policies (policies whose 'target' can be changed by a simple change of the agent's hyperparameters) also tend to seek power. The problem is interesting and under-studied, and all reviewers agreed that the work was 'original, non-trivial and significant'. Most concerns were regarding presentation, which could be at times vague and imprecise (in the mathematical parts) or unintuitive (in the informal parts). The authors presented a plan to significantly improve the paper's clarity, which alleviated many of the reviewers' concerns. Please do ensure that the final version includes these improvements. | val | [
"G05hans_6VP",
"slJUEDpWMy",
"mdC2TNY4jTb",
"90k3DpvIN6t",
"77SNZSYqtom",
"DFCtkICsWWY",
"N9QKPkIEPhX",
"snEWbolGK4T",
"eu86zhKrGkw",
"uRhBaDdOQaf"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for gathering some of the feedback on clarity. Here are some details concerning our current plan:\n> \"Overall, the writing is a bit light on “scaffolding”\" \n\nIn the beginning of each section, we will add signposting and scaffolding. For example, at the beginning of section 3, we will write: \n\n\"Section 2 informally illustrated parametric retargetability in the context of swapping which utilities are assigned to which outcomes in the Pac-Man video game. Swapping the utility assignments also swapped the agent's final decisions. For example, if death is anti-rational, and then death's utility is swapped with the cherry utility, then now the cherry is anti-rational. In this section, we formalize the notion of parametric retargetability and of ``most'' parameter inputs producing a given result. In section 4, we will use these formal notions to reason about the behavior of RL-trained policies in the Montezuma's Revenge video game.\"\n\n> Explaining retargetability earlier. Perhaps with a better example that more clearly connects retargtability to power-seeking than the cards example does. \n\nInstead of the cards example, we will build off of Turner et al.'s Pac-Man example: The agent can choose between dying immediately, or finishing in a terminal state where it has collected a cherry, and finishing in a terminal state where it has collected an apple. We will explain the example as follows:\n\n\"Turner et al. consider the Pac-Man video game, in which an agent consumes pellets, navigates a maze, and avoids deadly ghosts. Instead of the usual score function, Turner et al. consider optimal action across a range of state-based reward functions. They show that most reward functions have an (average-)optimal policy which avoids immediate death in order to navigate to a future terminal state. \n\nOur results show that optimality is not required. Instead, if the agent's decision-making is \\emph{parametrically retargetable} from death to other outcomes, Pac-Man avoids the ghost under most decision-making parameter inputs. To build intuition about these notions, consider three outcomes: Immediate death to a nearby ghost, consuming a cherry, and consuming an apple. \n\nWe begin with the optimality argument, following Turner et al. Suppose that the agent has a utility function $\\mathbf{u}$ assigning a real number to each outcome. For example, death could have utility 10, the cherry 5, and the apple 0. Then an agent which maximizes $\\mathbf{u}$ would die to the ghost. However, most ``variants\" of $\\mathbf{u}$...\"\n\nWe will continue this example appropriately.\n\n> Clarifying the connection between orbit-level tendencies and power-seeking\n\nAt the end of section 2, we will add the following paragraph:\n\n\"The larger set of outcomes {cherry, apple} can only be induced if Pac-Man stays alive. Intuitively, navigating to this larger set is power-seeking, because the agent retains more optionality (i.e. the agent can't do anything when dead). Furthermore, for most parameter settings, retargetable decision-makers induce an element of the larger set of outcomes. Therefore, we say that \\emph{parametrically retargetable agents tend to seek power}.\"\n\n> Clarify meaning of 'parameters' and 'decision maker' earlier\n\nIn our reply to Reviewer Dyfy, we included an explanation of these concepts. 
We will include this explanation early in the paper.\n\n> Illustrating core concepts more; possibly while cutting parts of section 4 and the dialogue for space.\n\nIn addition to the above modifications, we will cut the dialogue and move section 4.3 to Appendix C.3.\n\n> Concretize / justify speculative parts in section 4.4 and 5\n\nConcerning DQN being unable to explore given any item-featurized reward signal, we will include the explanation which we provided to MMUx. \n\nIn our reply to Reviewer Dyfy, we also provided a \"related work\" paragraph which we will include in the camera-ready.",
" Although all reviewers seem to think the work is original, non-trivial, and significant, multiple reviewers have said that the paper would have more impact with better presentation / clarity. \n\nTo recommend this paper without hesitation, I would expect that during the discussion period, the authors lay out a fairly detailed (and clear) plan for how they'll improve clarity. (For example, by listing how they'll address each feedback point.)\n\nSome feedback on clarity from other reviewers and myself that I'd highlight (non-exhaustive): \n\n- \"Overall, the writing is a bit light on “scaffolding”\" (see chap. 3-6 [here](https://sites.duke.edu/niou/files/2014/07/WilliamsJosephM1990StyleTowardClarityandGrace.pdf) for guidance)\n- Clarifying the connection between orbit-level tendencies and power-seeking\n- Explaining retargetability earlier. Perhaps with a better example that more clearly connects retargtability to power-seeking than the cards example does. \n- Clarify meaning of 'parameters' and 'decision maker' earlier\n- Illustrating core concepts more; possibly while cutting parts of section 4 and the dialogue for space.\n- Concretize / justify speculative parts in section 4.4 and 5\n\nI'd also recommend to get another round of feedback from colleagues after making these changes, as the content is evidently complex to communicate. \n",
" Thank you for your comments. \n\n**Notation.** In the RL setting, $d$ is the size of the state space, but this is not required in general. Retargetability is a property of the policy training process, and power-seeking is a property of the trajectories chosen by a trained policy. More precisely, the policy training process takes as input a parameterization 𝜃 and outputs a probability distribution over policies. For each trained policy drawn from this distribution, the environment, starting state, and the drawn policy jointly specify a probability distribution over trajectories. Therefore, the training process associates each parameterization 𝜃 with the mixture distribution $P$ over trajectories (with the mixture taken over the distribution of trained policies). \n\nA policy training process can be _simply retargeted from one trajectory set $A$ to another trajectory set $B$_ when there exists a permutation $\\phi\\in S_d$ such that, for every 𝜃 for which $P(A\\mid 𝜃)>P(B\\mid 𝜃)$, we have $P(A\\mid \\phi\\cdot 𝜃)<P(B\\mid \\phi \\cdot 𝜃)$. For this work, we echo Turner et al. (2021) in saying that a trained policy $\\pi$ _seeks power_ when $\\pi$ actions which navigate to states with high optimal value for a wide range of reward functions. Generally, high-power states are able to reach a wide range of other states, and so allow bigger option sets $B$ (compared to the options $A$ available without seeking power). \n\nFor the “box” issue in section 3, see footnote 1 on page 4; we will rewrite that portion to be more immediately clear as to what A and B are. “Highly-retargetable decision-makers” indeed means multiply-retargetable functions with high $n$. \n\nWe do not assume a finite environment per se, although we currently don’t see how to apply our results to non-trivial infinite environments. Theorem 3.6 is intentionally abstract and noncommittal in terms of defining even the structure of A and B, so as to enable e.g. future work proving optimal policy tendencies in finite POMDPs, to our analysis of bandit situations in Appendix C.1.\n\n**Related work.** We will add the following: “In this work, we do not motivate the risks from AI power-seeking. We refer the reader to (Omohundro 2008) and (Carlsmith 2021). Turner et al. (2021) show that, given certain environmental symmetries in an MDP, the optimal-policy-producing algorithm $f_\\gamma$(state visitation distribution set, state-based reward function) is 1-retargetable via the reward function, from smaller to larger sets of environmental options. Appendix A shows that optimality is not required, and instead a wide-range of decision-making procedures satisfy the retargetability criterion. Furthermore, we generalize from 1-retargetability to $n$-fold-retargetability whenever option set $B$ contains “$n$ copies” of set $A$ (definition A.6 in the appendix).”\n\nQuantilization is referenced in Appendix A, but we do not consider it important enough to mention in the main text. If you have a strong opinion here, we are open to further discussion.\n",
" Thank you for your feedback. \n\n> The imagined dialogue within Section 2 is too lengthy and somewhat confusing. It's not clear why there is a leap from choosing the boxes with cards to training an RL agent to play Pac-Man.\n\nSeveral other reviewers agreed. We will cut the dialogue.\n\n> The comment on lines 116-117 of placing these results in the context of the Turner et al. (2021) paper only appears relevant if looking at the two papers side-by-side. Connecting these two papers, if this is truly important for the exposition here, could be done in a dedicated subsection for related work.\n\nWe will add a subsection discussing the notion of power in Turner et al. (2021) and its relationship to the current paper. We agree that it is needed to make the paper self-contained. \n\n > The connection to RL in general needs to be clearer. For example, why do the agents need to be \"trained via reinforcement learning\" (line 301)? Looking at different RL algorithms in the context of this framework ...\n\nWe mentioned reinforcement learning because it is currently the most widely used approach to train AI agents. As you correctly implied, our framework applies equally well for other retargetable functions such as MDP planning agents and agents that learn by imitation. We will include a wider variety of planning agents to make the discussion more general.\n\n> Theoretical results that seem to be core to the results (specifically, the power-seeking results alluded to in Section 5.1) are not stated or presented until the Supplemental Material.\n\nWe extended Turner et al.’s power-seeking results. While these results are significant, the main paper does not allow space for explaining and stating those results. We chose to present the key retargetability result via Theorem 3.6, as most of our other important results follow as corollaries. As mentioned to reviewer a6si, we will contextualize and explain how retargetability relates to power-seeking:\n\n“Our submission implicitly assumes background provided by [Turner et al. 2021]. Insofar as their results show that optimal policies tend to seek power (in certain kinds of Markov decision processes), our results show the same for retargetable decision-makers (because we relax their optimality requirement in appendix A). We will justify and explain the power-seeking claims in the camera-ready.”\n\n> The different classes of RL algorithms are insufficiently explored here. For example, the point about DQN in lines 254-258 not being \"good enough at exploring\" is interesting but speculative. In particular, the statement on lines 257-258 that \"[t]here isn't a single featurized reward function for which DQN visits other rooms\" needs to be more rigorously defended.\n\nWe can infer this is true from _Massively parallel methods for deep reinforcement learning_ (2015), which points out how vanilla DQN gets zero score in Montezuma's Revenge. Thus, DQN never even gets the first key. Thus, DQN only experiences state-action-state transitions which didn't involve acquiring an item. In our analysis, we considered a reward function which is featurized over item acquisition. Therefore, for all pre-key-acquisition state-action-state transitions, the featurized reward function returns exactly the same reward signals as those returned in training during the published experiments (namely, zero, because DQN can never even get to the key in order to receive a reward signal). 
That is, since DQN only experiences state-action-state transitions which didn't involve acquiring an item, and the featurized reward functions only reward acquiring an item, it doesn't matter what reward values are provided during item acquisition—DQN's trained behavior will be the same. \n\nThus, a DQN agent trained on any featurized reward function will not explore outside of the first room. We will add this point to the paper to clarify for readers.",
" Thank you for your detailed feedback. We agree that the presentation needs work, and think that your comments will substantially improve the paper.\n\n> How can retargetability imply power-seeking if the random policy is retargetable but intuitively doesn't seek power? \n\nThe random-action policy is generally not retargetable in a sequential decision-making setting. Consider a situation where most possible outcomes require the agent to survive the first timestep, and suppose that only 1 in 10 available actions will allow the agent to survive. If an agent uniformly randomly selects an outcome and then implements a plan which realizes that outcome, then with extremely high probability (for most possible outcomes), the agent will stay alive at the first timestep. The randomness is over outcomes, not over actions. In contrast, the random policy only chooses the survival action with probability 1/10. (Lines 201–206 were meant to communicate this point. Apparently, that paragraph needs to be rewritten.)\n\n> The paper concludes that the algorithm then 'seeks power' (in some undefined sense). \n\nWe made a pedagogical mistake here. Our submission implicitly assumes background provided by [Turner et al. 2021]. Insofar as their results show that optimal policies tend to seek power (in certain kinds of Markov decision processes), our results show the same for retargetable decision-makers (because we relax their optimality requirement in appendix A). We will justify and explain the power-seeking claims in the camera-ready.\n\n> An algorithm can be retargetable from a specific target A to target B but the paper sometimes talks about retargetable algorithms without refererring to specific targets. What does retargetable mean here? \n\nConsider the claim “GO-EXPLORE is a relatively retargetable algorithm.” We speculated that GO-EXPLORE is relatively retargetable within Montezuma’s Revenge, in the sense of most item-featurized reward functions training policies which leave the first room. We expect that even in a range of environments beyond Montezuma’s Revenge, GO-EXPLORE is better at exploring, and therefore more able to explore distant high-reward states, and therefore its trained policies account for distant high-reward states. GO-EXPLORE will therefore tend to have a relatively high retargetability towards a range of states, relative to eg DQN.",
" We are excited that you consider the retargetability insight to be original, and thank you for your insightful advice and remarks. \n\n> Why does Definition 3.2 require 2 functions? Is it appropriate to talk about this in terms of the “simpler special case” as in my summary?\n\nWe originally intended backwards compatibility with the notation of Turner et al. (2021). However, we could also just create a single function $f$ by \"uncurrying\": $f(A\\mid \\theta) := f_A(\\theta), f(B\\mid \\theta) := f_B(\\theta)$. We will change the definition to this for clarity.\n\nThe simpler special case is worth considering, but neglects probabilistic outcomes (e.g. a 30% chance of X given $\\theta$, and a 70% chance of X given $\\phi \\cdot \\theta$).\n\n> Do you agree with the first weakness I listed?\n\nYes. However, we view your point as a limitation rather than a weakness of the paper. \n\n> Why the change from $f_B(\\theta)$ to $f(B,\\theta)$?\n\nJust a formatting inconsistency. Thanks for pointing it out.\n\n> But how specific are the results to RL? Are there useful take-aways for supervised learning? Can you be more explicit about the scope?\n\nWe are uncertain about e.g. implications for supervised learning. We do not presently see how to show results beyond the planning and RL contexts. We will note this prospect in the paper. \n\n> One thing in particular that might help build intuition is to show how it could (hypothetically) fail to hold, e.g. a concrete example where cosets are not pairwise disjoint.\n\nThere’s a trivial example where $A=B$, $n=2$, and $\\phi_1=\\phi_2$, such that the cosets are identical (and thus not pairwise disjoint), in which case 2-retargetability doesn’t hold. More helpfully, we do have a nontrivial example. It’s Table 3 on page 18 in Appendix C. However, the example is quite arcane and probably not worth the trouble to explain, and we do not see an obvious fix to that problem. We will keep considering whether there are better ways to give more intuition for the counting argument.\n",
" This paper extends “optimal policies tend to seek power” [Turner et al. 2021], arriving at similar results using only a novel notion of “retargetability”. This suggests that a wide range of “parametrically retargetable decision-makers” will seek power, and more competent learners are more likely to. The “Parametrically retargetable” here does *not* refer to the parameters of the model, but rather things like hyperparameters, or parameters of a reward function, which influence the outcome of learning. \n\nThe phrase “tends to” refers to Definition 3.2; per this definition, a decision-maker “tends to” do X if, for *any* setting of these parameters, most permutations of them result in it doing X. This requires assuming the parameters are acted on by a permutation group; in the examples provided, the parameters are elements of $\\mathbb{R}^n$ and this action works by permuting dimensions. This can be viewed as a way to talk about what fraction of a space of infinite volume parameter space results in the decision-maker doing X, but I’m not sure how seriously to take that analogy (which is also suggested by line 128, but not discussed in any detail). The central result is that “retargetability” implies such permutation-orbit-level tendencies. A decision-maker can be retargeted to do X if there is a single fixed permutation $\\phi$ such that: for any $\\theta$ for which it didn’t do X, it *does* do X for $\\phi \\theta$. These definitions are phrased in terms of tending to do X *over Y* and retargetting *from Y* to X, but I’m not sure how important that is; I think what I’ve described is a simpler special case.\n\nThe latter sections of the paper discuss how these results apply to various forms of decision-making, ending with a discussion of reinforcement learning which argues that more advanced RL algorithms will be more retargetable and hence more power-seeking.\n \nStrengths:\n* Power-seeking is one of the most important concepts in AI existential safety.\n* This paper is one of a very few to make a substantive technical contribution to this topic, and I believe it represents meaningful progress on understanding power-seeking.\n* The insight that retargetability is sufficient for power-seeking is significant and highly original.\n\nWeaknesses:\n* This approach to understanding power-seeking doesn’t really address the main mechanisms by which we might expect to avoid building power-seeking agents: inductive biases and targeted feedback. I view this paper as formalizing the argument that “alignment is hard because there are so many outcomes which involve human extinction, and we need to somehow direct the impact of AI systems towards the small fraction of those which do not”. However, this argument doesn’t address the difficulty of alignment *relative to* the tools we have for alignment. \n* The central points and ideas of the paper were not very clear. I believe these are novel and subtle, and thus difficult to communicate effectively, but I think there is significant room for improvement.\n* The phrase “tends to” is not explicitly defined, but should be, given its central role. It should also be explained and justified why the formal definition matches common use (to the extent it does). 
\n* I believe $f_{blah}$ is overloaded in a confusing way, referring to the probability of an outcome, something a bit more generic (footnote 1), or a decision-making process.\n* The connection between the formally established orbit-level tendencies and power-seeking wasn’t clear enough, and relies a bit too much on Turner et al. 2021, I think. I recommend front-loading something like some of the discussion in Section 5, e.g. into the introduction or at the end of Section 3.\n\nOverall, I think there are many opportunities for improving the clarity and exposition of this work. I believe it would benefit from a substantial rewrite (more than would be appropriate for a revision), which might significantly increase its impact. Still, I think the contributions are sound and important enough to (weakly) recommend acceptance. \n Questions:\n* Why does Definition 3.2 require 2 functions? Is it appropriate to talk about this in terms of the “simpler special case” as in my summary? \n* Are lines 259-262 just restating the definition of retargetability? What purpose does that serve?\n* Do you agree with the first weakness I listed? Or do the results do more than formalize that argument, i.e. providing a stronger argument for power-seeking? How does this affect the practical significance of the results? \n* Why the change from $f_B(\\theta)$ to $f(B,\\theta)$? Is this related to footnote 1?\n* I understand RL and similar algorithms/settings to be the motivation for the work. But how specific are the results to RL? Are there useful take-aways for supervised learning? Can you be more explicit about the scope?\n* Can you elaborate on the counting argument (143)? Given the centrality of this result, I think it would be worth walking through it in more detail. One thing in particular that might help build intuition is to show how it could (hypothetically) fail to hold, e.g. a concrete example where cosets are not pairwise disjoint.\n\nSuggestions:\n* I think the dialogue and example in Section 2 are useful, but they aren’t a replacement for directly explaining the intuition of the work and the concepts involved in text. In particular, I think retargetability should be defined informally before the dialogue. \n* Overall, I would suggest a significant rewrite to focus much more on clearly explaining the core concepts, with less space spent on Section 4 (e.g. you could probably cut an entire subsection).\n* The central concepts and definitions should be clearly emphasized; orbit-level tendencies and retargetability might each warrant a subsection including a paragraph of motivation, a paragraph with an informal definition, and a formal definition.\n* Overall, the writing is a bit light on “scaffolding”, i.e. statements that guide the reader through the work, and help them keep track of what role each section/paragraph/etc. is playing in the bigger picture narrative/development. \n* I think “parameters” and “$\\theta$” are potentially confusing here, since they do not refer to parameters of the model, but rather the learning algorithm/process. More generally, you should discuss more what these “parameters” might represent, e.g. beyond reward functions. Should we consider hyperparameters like learning rate part of $\\theta$? These are part of how desired behavior is specified in practice…\n* The terminology “decision-makers” is also a bit off, I think, since it is more like learners that might also be doing decision-making. 
\n* I think $\\sigma$ is more commonly used to denote a permutation, and would be preferable here; this would free up $\\phi$ as a potential replacement for $\\theta$.\n* The argument of lines 300-302 should be emphasized and spelled out. How exactly does the theory yield this prediction? Which results contribute to that prediction and how?\n* I disagree with lines 304-305. We know that reward functions can be misspecified, and they are arguably better viewed as a method of providing a useful training signal for a policy rather than a definition of what optimal behavior would look like.\n* 264: $\\Theta^{++}$ is undefined.\n* 268: clarify that fig 2 is in the Appendix\n* 262-274 is hard to understand without knowing details of Montezuma’s revenge.\n* Cite the earlier version of GO-EXPLORE as well, to preempt misunderstanding that Montezuma’s revenge was only solved in 2021.\n* I think 194 should say “4-retargetable”\n* It should be made explicit that the symmetric group acts via permuting dimensions (and which dimensions…) in the examples given.\n* 17: replace “;” with “and”\n* 21-29 are a bit unclear, probably because they are too terse (e.g. it is not yet clear what is meant by “parameter inputs”).\n* Are orbit tendencies well-characterized as “the fraction of $\\theta$ for which…” (128)? If so, elaborate/explain/defend this way of speaking.\n n/a",
" In this paper, the authors introduce the concept of \"retargetable policies\" and the application of this to models of decision making by artificial agents. Beginning with a motivating example of an agent choosing between two boxes containing playing cards with associated utilities, the paper then formalizes the concept of \"retargetability\". The paper then presents a case study of applying the framework that is presented to the Atari 2600 game Montezuma's revenge, and then discuss how this concept leads to power-seeking tendencies by the agents. Strengths: \n- The paper presents a compelling way of looking at classes of reinforcement learning algorithms, in terms of ways of permuting rewards.\n- The paper is generally written very clearly and the presentation is of high quality.\n\nWeaknesses: \n- There is too much reliance on the Turner et al. (2021) paper throughout. In particular, there is no separate discussion of \"power\" as defined in that paper. Presumably, the authors are using the same definition, but this needs to be clarified for this to be a stand-alone paper.\n- Theoretical results that seem to be core to the results (specifically, the power-seeking results alluded to in Section 5.1) are not stated or presented until the Supplemental Material.\n- The point in lines 278-280 about policies becoming more retargetable over \"impressive outcomes\" is vague and speculative. Since this seems to be important to the connection between this framework and the notion of power, it would be important to be more precise here.\n- The different classes of RL algorithms are insufficiently explored here. For example, the point about DQN in lines 254-258 not being \"good enough at exploring\" is interesting but speculative. In particular, the statement on lines 257-258 that \"[t]here isn't a single featurized reward function for which DQN visits other rooms\" needs to be more rigorously defended.\n Suggestions:\n- The abstract (and paper's title) mention \"retargetable policies\" but these are not defined until later in the paper. At least a brief definition in the abstract would be helpful.\n- The imagined dialogue within Section 2 is too lengthy and somewhat confusing. It's not clear why there is a leap from choosing the boxes with cards to training an RL agent to play Pac-Man.\n- The comment on lines 116-117 of placing these results in the context of the Turner et al. (2021) paper only appears relevant if looking at the two papers side-by-side. Connecting these two papers, if this is truly important for the exposition here, could be done in a dedicated subsection for related work.\n- The connection to RL in general needs to be clearer. For example, why do the agents need to be \"trained via reinforcement learning\" (line 301)?\n- Looking at different RL algorithms in the context of this framework (rather than a cursory discussion of DQN and GO-EXPLORE) would be useful to be convincing about the generality of this framework. The main limitations of this paper are how this relates to other decision-making approaches. This paper makes a logical leap from introducing a framework for evaluating the effect of permuting parameters of decision-making policies to the implied conclusion that this may result in AIs that seek power over humans. While this framework may have the potential for rich applications, drawing this kind of conclusion does not seem to be warranted given the discussion in the paper. ",
" Previous work has shown that power-seeking emerges for many reward functions in optimal policies. This is because power-seeking refers to maximizing the number of options the agent has, and having more options allows the agent to maximize more possible reward functions. \n\nThis paper mathematically shows that a similar result holds for various algorithms that produce _suboptimal_ policies, specifically to algorithms that are \"retargetable\": A suboptimal algorithm may not be able to reach _every_ achievable option needed to maximize its reward function, but as long as we can always retarget the algorithm (for example by changing the reward function) to reach _some_ 'powerful' state which yields many options, it tends to reach such a state under many possible reward functions. Formally, an algorithm is defined as retargetable from target A to target B if, assuming it reaches A, we can change the algorithm's parameters (e.g. its reward function) so that it reaches B instead. Retargetable algorithms include random, greedy, and optimal decision-making among others. \n\nThe paper illustrates its conclusions through a toy example and formal analysis in Montezuma's Revenge. It also argues that RL algorithms become increasingly retargetable as they become more capable, so that we will increasingly observe power-seeking.\n __Strengths__\n\nPower-seeking is a property of intelligent agents that has long been hypothesized and has important consequences for safety. It is good to see that recently some formal progress is made to understand power-seeking (in the sense of preferring actions that lead to more open options). \n\nBut previous work assumes perfectly optimal agents acting in fully-observable environments--a strict assumption that poses the question if power-seeking is merely an artifact of optimal policies. The present paper answers that question in the negative. \n\nIt also introduces a sufficient condition for decision algorithms to seek power: retargetability. Having a sufficient condition should improve our understanding of when algorithms will seek power or not. Establishing this condition is highly mathematically non-trivial and provides tools for future analyses.\n\n__Weaknesses__\n\nAlthough the abstract and introduction are excellently written, the mathematical presentation is often confusing and needs more clarity and reduced ambiguity (see detailed comments). Although the ambiguities mostly resolve after very careful reading, they make the paper hard to read. This is likely to be a major problem for average readers. The authors could get more external feedback to remedy this. \n\nI recommend accepting this paper, although I think it would have much more impact with a clearer presentation, because I think our community should reward authors for tackling important but difficult problems with hard-to-explain solutions in young subfields where readers will lack expertise. Additionally, as long as the conclusions are sound, most readers do not need to read the entire mathematical content of this paper.\n\n\n__-------------------- Detailed comments ------------------------__\n\n__Introduction and abstract__\n\nThese sections were a pleasure to read.\n\n\"A wide range of decision-makers share these power-seeking tendencies—they are not unique to reward maximizers.\" This is a key claim that lacks reference. I think you mean \"We show that a wide range....\".\n\nIn L33, the meaning of parameterizations is unclear. 
(The meaning of 'similar decisions' is also unclear but probably refers to seeking power?) Parameterizations usually refer to different choices for a change of variable. It is also unclear which parameters you refer to (algorithm parameters, not policy parameters).\n\nThe paper's conclusion but not its argument is given in the introduction (and the argument doesn't become very clear later). I've tried to spell out the intuitive arguments in my summary above, in case that is helpful. \n\n__Section 2__\n\n(Comments ordered by priority)\n\nL 49 talks about \"reparameterizing\" the decision rule. This threw me off quite a bit since reparameterization normally refers to a change of variables but you seem to simply mean a change of the parameter theta without changing the parameter space Theta. The confusion is worsened because the text confuses lower case theta and upper case Theta multiple times.\n\nSection 2 often refers to definitions and propositions in later sections. It is unclear if one should skip ahead and read those or not. Most readers will not do so, including myself, and this caused me to not understand the meaning of words like 'retargetable'. \n\nDialogue: Overall, the dialogue added lots of confusion for me without adding insight. If I wasn't reviewing the paper, I would have stopped reading here. I'd recommend deleting it or rewriting it from scratch. I think the rest of section 2 gets the point across on its own. Problems with the dialogue include but are not limited to: 1) The meaning of retargetable is unclear at this point. 2) Bob suggests that the agent could make decisions 'on a whim' and 'ignore the reward signal', but this seems nonsensical at first sight since that is not how RL works. So it was initially unclear why this suggestion is included in the dialogue. Perhaps a clearer formulation would be: \"Alice: ...most parameters nudge the agent to pick a card from B ... Bob: But the parameter need not nudge the agent towards anything. For example, the decision-making rule could be a function that picks actions uniformly at random without using the parameter. \n\nTable 1 \n - The table doesn't clearly communicate its point without needing reference to the accompanying text. The point appears to be that \"most utility function parameters incentivize the agent to draw a card from box B _because box B contains more options than A_. The point of permuting the utility function is also not immediately clear; it appears to be that most permutations induce picking from box B.\n - Consider bolding the best card\n\nConsider renaming 'decision rule' to 'decision algorithm' since that is what the reader should usually have in mind. \nOverall, the card example in section 2 is very simple (this is a good thing) so it should be possible and valuable to make the explanation short.\n\nPicking box B is not a very intuitive conceptualization of power-seeking. Another example may be better. \n\n__Section 3__\n\nThe point of the opening sentence was unclear. \"requires that the parameters θ ∈ Θ be \"modifiable\"...\": what does it mean to 'require' this? You mean you require that the orbit of θ is also in Θ? \n\nDefinition 3.3: \n - you mean \"for all theta^A in Theta\"?\n - A double \"if\" is followed by a single \"then\". This seems grammatically incorrect, which obscures the definition's meaning. \n\n\"a set acted on by Sd, the symmetric group on d elements\": Unclear. How is a set acted on by Sd different from a set not acted on by Sd? 
\n\nConsider renaming symmetric group to permutation group for clarity. Even easier, rename it to 'set of permutations'. \n\nWhy do you consider specifically permutations and nothing else? Are these in some sense highly general and cover all interesting reward functions / parameters? \n\nYou could explain why B is not retargetable to A. (And why this implies the agent seeks power.)\n\nWhat can the permutations refer to in practice other than the specific case of shuffling the rewards of a set of state-action pairs? In general, it is unclear why you need an abstract formalism that goes beyond this specific case.\n\n\"A parameter Θ’s orbit\": did you mean lower-case theta? This problem appears multiple times.\n\n\n__Section 4__\n\nYou could direct the reader to the most important results here, which are currently buried deep in the section. \n\nThe subsection on initial action selection is not specific to MR so could be placed in its own section. Readers who want to read specifically about MR do not need to read this subsection. In fact, your conclusions here can be broad whereas the paper's structure suggests that they only apply to MR.\n\nSection 4.2: the paper is about power-seeking in suboptimal decision-makers, but the example f_max is optimal at reaching the target observation so this example seems less important.\n\nSection 4.4: alpha_key and alpha_sword are not defined. At first it seemed that these were reward functions, not dimensions of one reward function.\n \n__Section 5__\n\nThis section reads a bit speculative since the concrete results (for example the formal results) are only in the appendix and the reader can't verify how useful those results are without reading the appendix.\n\nTypos\n - Table and figure names are usually capitalized (also for clarity you could write \"Table 1 (first row)\" in L55).\n\nCan you comment on how strong a requirement retargetability is? It seems fairly strong because it requires that something holds for every possible theta. \n\n How can retargetability imply power-seeking if the random policy is retargetable but intuitively doesn't seek power? \n\nThe paper shows that if an algorithm can be retargeted from some arbitrary target A to some target B, then most parameter choices (i.e. reward functions) will lead the agent to reach B. The paper concludes that the algorithm then 'seeks power' (in some undefined sense). The latter claim lacks clear justification. The paper does give some examples of such powerful events B (reaching the next room and picking a box with many cards) but a general argument or intuition seems to be missing.\n - For example, the event A could correspond to 'power' in which case the agent 'avoids power'. To say something about power-seeking, the theorems would have to specify which target (A or B) is more powerful.\n\nAn algorithm can be retargetable from a specific target A to target B but the paper sometimes talks about retargetable algorithms without referring to specific targets. What does retargetable mean here? \n Yes",
" This paper defines retargetable algorithms and formally shows that they have power-seeking tendencies (i.e. make choices that leave more options open), building on existing work which showed that optimal policies tend to seek power. Since any algorithms that make decisions based on utility of outcomes are retargetable, this is a significant generalization of the previous work. The authors then apply the retargetability criterion to different algorithms in Montezuma's revenge and show that more generally capable algorithms are more retargetable and therefore are more likely to produce power-seeking policies. This paper makes progress on understanding and formalizing the important problem of power-seeking incentives, which is considered the main mechanism for large-scale harm from advanced AI. It generalizes previous work by Turner et al (2021) that established power-seeking incentives for optimal policies. This paper demonstrates power-seeking tendencies for a broad class of non-optimal algorithms (in principle, any algorithm that chooses between outcomes based on utility), which implies that these incentives are likely to arise in practice for modern ML algorithms. While the previous work assumes a finite MDP setting, the results in this paper apply to more complex environments, e.g. as illustrated on the Montezuma's revenge game in Section 4. These results are novel and significant. \n\nThe main weaknesses of this paper are clarity of presentation and insufficient discussion of related work. I expect it to be difficult for readers who haven't read Turner et al (2021) to follow this paper, and I think it should be more self-contained (e.g. include the definition of power from the previous work and explain why definition 3.2 implies power-seeking). \n\nThe fictional dialogue in section 2 is intended to convey intuition, but I don't think it serves that purpose well, so I would suggest moving it to the appendix for clarity and conciseness. Something that would help build intuition for the concept of retargetability would be to provide other examples of non-retargetable decision makers (besides the trivially non-retargetable f_stubborn). \n\nThe application of the retargetability results to Montezuma's Revenge in Section 4 would more illuminating if it was more specific. For example, it would be great to clarify what are d, A and B in sections 4.2-4.4. Section 3 defines retargetability over action sets, while sections 4.2-4.4 make claims about retargetability over observations without defining what that means. In particular, it's unclear how retargetability over a continuous observation space works since the permutation $\\phi$ is over a finite set of elements.\n It would be helpful to add a notation paragraph defining key terms and variables: \n* It's not entirely clear what a \"decision-maker\" is - I assume that a decision-maker p is a goal-conditioned policy, and instantiating it for a specific parameter theta gives a policy $f=p(\\theta)$? (The term \"algorithm\" in section 4 seems to be used interchangeably with \"decision-maker\".) Is retargetability is property of the algorithm, and power-seeking a property of the policy? \n* Section 3 is intended to give general definitions, but still refers to A and B as \"boxes\" as in the card example in Section 2. I assume A and B are supposed to be action sets?\n* What does d represent in section 3? Is it the size of the action space? 
\n* It's unclear what \"highly retargetable decision-makers\" means in Section 3 - are these multiply-retargetable functions with high n in def 3.5? \n* Clarify the assumptions made about the environment (e.g. finite action space?) \n\nAdding a related work section would really help clarify the relationship with existing work, e.g.:\n* State explicitly how this paper generalizes results from Turner et al (2021).\n* Summarize how this work relates to the papers on quantilization that are cited in the reference section but not mentioned in the main text.\n* Since this paper doesn't focus on making the case for risks from power-seeking incentives, it would be good to refer to works that motivate the importance of this problem, e.g. Basic AI Drives (Omohundro 2008) and Existential Risk from Power-Seeking AI (Carlsmith 2021). \n\nI would argue in favour of accepting this paper conditional on the above issues being addressed, which I expect can be done without major changes to the content of the paper. It would be helpful if the authors can include a notation paragraph and a related work paragraph in the author response if space allows. The paper addresses some limitations in the second paragraph of section 6.1. It would be helpful to label this section as \"Future work and limitations\" to clarify this, since this paragraph is not about future work. "
] | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"slJUEDpWMy",
"nips_2022_GFgjnk2Q-ju",
"uRhBaDdOQaf",
"snEWbolGK4T",
"eu86zhKrGkw",
"N9QKPkIEPhX",
"nips_2022_GFgjnk2Q-ju",
"nips_2022_GFgjnk2Q-ju",
"nips_2022_GFgjnk2Q-ju",
"nips_2022_GFgjnk2Q-ju"
] |
nips_2022_Z6BFQqzwuS4 | Bayesian Persuasion for Algorithmic Recourse | When subjected to automated decision-making, decision subjects may strategically modify their observable features in ways they believe will maximize their chances of receiving a favorable decision. In many practical situations, the underlying assessment rule is deliberately kept secret to avoid gaming and maintain competitive advantage. The resulting opacity forces the decision subjects to rely on incomplete information when making strategic feature modifications. We capture such settings as a game of Bayesian persuasion, in which the decision maker offers a form of recourse to the decision subject by providing them with an action recommendation (or signal) to incentivize them to modify their features in desirable ways. We show that when using persuasion, the decision maker and decision subject are never worse off in expectation, while the decision maker can be significantly better off. While the decision maker’s problem of finding the optimal Bayesian incentive compatible (BIC) signaling policy takes the form of optimization over infinitely many variables, we show that this optimization can be cast as a linear program over finitely-many regions of the space of possible assessment rules. While this reformulation simplifies the problem dramatically, solving the linear program requires reasoning about exponentially-many variables, even in relatively simple cases. Motivated by this observation, we provide a polynomial-time approximation scheme that recovers a near-optimal signaling policy. Finally, our numerical simulations on semi-synthetic data empirically demonstrate the benefits of using persuasion in the algorithmic recourse setting. | Accept | The paper formulates the problem of algorithmic recourse under partial transparency as a Bayesian persuasion game. It is shown that the decision-maker can design an incentive-compatible action signaling strategy with guarantees that both the decision-maker and decision-subjects are not worse off in terms of expected utility. The results provide several insights into the complexity of computing an optimal signaling strategy; moreover, a polynomial-time approximation algorithm is provided to compute a near-optimal signaling strategy. The reviewers acknowledged that the paper considers an important problem setting and provides new technical insights into algorithmic recourse using the framework of Bayesian persuasion. However, the reviewers also raised several concerns and questions in their initial reviews. We want to thank the authors for their detailed responses and for actively engaging with the reviewers during the discussion phase. The reviewers appreciated the responses, which helped in answering their key questions. The reviewers have an overall positive assessment of the paper, and there is a consensus for acceptance. The reviewers have provided detailed feedback in their reviews, and we strongly encourage the authors to incorporate this feedback when preparing the final version of the paper. | train | [
"bdaSvZFxHGz",
"E6upms1jdM_",
"ccG87ebyiv",
"eWL4r6_s-nt",
"sCZLODtXMUD",
"MWV8K2FV7wz",
"sTxT9JKevRp",
"G_z9yxFcHhz",
"T0tHVWXIOT0",
"aQd1HOVxSgW",
"EYagEYKJFUw",
"dk4HwsacppA",
"o76UwI-aDky5",
"5Sz35CQfFMk",
"uRPVNdAD9lu",
"jJ7nPtX0LPO",
"mbqQE1zg0_F",
"YSNUauR2XXy",
"nMuA8WRQ1R_"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I think there is a distinction. In your model, the sender chooses to not disclose full information not because they are not allowed to but because they are better off not doing that. This are no restrictions on how much information the decision-maker can disclose in your model, and the case with such restrictions would require a different model, so I think hiring or college admission may not be good examples. The student/teacher example seems a better fit. But it's better to set up the teacher's goal as somewhat different from the student's - otherwise, why doesn't the teacher just reveal full information unless there are restrictions on how much they can reveal (but again with restrictions you may need to a different model). I'm upgrading my score given this example but strongly encourage the authors to clarify the problem setting and motivations in the revision (preferably, provide better examples).",
" Thank you for your response, and I appreciate your feedback. Following a review of the authors' responses, my rating remains unchanged. Despite making very strong assumptions that may not be realistic, this model utilizes Bayesian Persuasion in a novel manner to address a well-motivated problem. Thus, I consider it a step in the right direction, and I believe it has the potential to influence future research in this area.",
" >I think in your paper (and in the revision) it says that the assessment rule θ is chosen by the decision maker (around line 160).\n\nThanks for pointing this out. We realize that this choice of wording may be confusing to the reader, and we will update this passage in the revision. The setting we study is one in which joint optimization of the decision rule and signaling policy is not possible. If such joint optimization were allowed, we agree that the use of persuasion would be less meaningful.\n\n\nHowever, we disagree that it is “obviously completely up to” the person/entity offering recourse to choose the assessment rule in the examples mentioned in our submission (hiring, college admissions, lending). Oftentimes, the decision maker (e.g., a bank) is not a single person or entity. In reality, different entities within the decision making institution may be responsible for different aspects of the decision making process. In hiring, a recruiter for a company may have knowledge of the factors the company uses to make hiring decisions. While the recruiter may not be allowed to reveal this information (e.g., for fear of lawsuits, see https://smallbusiness.chron.com/companies-give-reasons-didnt-hire-20141.html), they may still wish to offer the candidates they recruit some way of increasing their chances of being hired. In lending, one department of the bank may be in charge of determining the threshold on the credit assessment, while someone else may be in charge of offering recourse. Similar logic applies to the college admissions example, in which someone associated with the university may have the ability to offer advice to applicants, but does not have the ability to unilaterally change the underlying assessment rule. \n\n\nIn the revision, we will clarify how these examples fit within our setting. Additionally, we will include the student/teacher example from our previous reply, in which it may be more immediately apparent how and why the assessment rule is exogenously determined.\n",
" Thanks for your reply. The set up we consider is focused on designing the optimal signaling policy when the decision rule (θ) is exogenously determined. Therefore, the optimal policy is a set of conditional probabilities for any realization of θ, which is always revealed to the decision maker but may or may not differ between decision subjects. As we mention in our previous reply,\n\n>Our results hold in expectation over the distribution on theta. Thus, our results are applicable to both the setting in which the decision maker uses the same decision rule for each decision subject and the setting in which the decision maker uses different decision rules drawn from the same distribution. We note, however, that in some domains, the decision-maker may be bound to apply the same rule to all subjects.\n\nIf this has not sufficiently addressed your question, we would be happy to elaborate further.\n",
" Thank you for your reply. I think in your paper (and in the revision) it says that the assessment rule $\\boldsymbol\\theta$ is chosen by the decision maker (around line 160). The critical issue here is that if the signal sender has control over $\\boldsymbol\\theta$, then they can well just optimize $\\boldsymbol\\theta$ to induce a desirable action of the receiver, and there is no need or benefit to further use a signaling strategy on top of that, or to optimize it jointly with $\\boldsymbol\\theta$. So indeed, optimizing $\\boldsymbol\\theta$ is the more important thing for a decision-maker to do in such circumstances. Optimizing both of them is unnecessary and somewhat meaningless. And optimizing only the signaling strategy while treating $\\boldsymbol\\theta$ as exogenously determined is in some sense a misleading solution as it may be suboptimal compared with optimizing $\\boldsymbol\\theta$. I think this is the case for all the examples mentioned in the paper, in particular the ones in footnote 1, where it is obviously completely up to the decision-maker to choose the assessment rule. So I still don't see a convincing motivation or any benefit to use information about the decision rule to persuade the subject in these examples. Even in the first example in your reply, I think it is still the bank, instead of the credit scoring agency, who selects the decision rule to decide whether to give a loan to the applicant.",
" Thank you for the response. My questions are almost addressed. One follow-up question is as follows.\n\nIn the first part, let me rephrase my question. The optimal policy is a set of conditional probabilities which depends on the parameter $\\theta$. It means that the decision must depend on the parameter. If there is a heterogeneous group of decision subjects, my question is whether their parameters are revealed to the decision maker. If so, I think this assumption is too strong to apply the result.\n\n\n\n",
" Please let us know if our response has sufficiently addressed your concerns regarding the setup of our model and potential negative social impacts. We would be happy to answer any other questions you may have.",
" Thanks for your suggestion. We agree that a real-world running example would be helpful to motivate our model, and we will include such a running example in our revision. \n\nRegarding your first point, our model can indeed capture so-called “gaming” actions (e.g., manipulating records) by assigning non-positive decision maker utilities to them as you suggest.",
" Thanks for taking the time to address the negative points in the review. One last question and one suggestion:\n1. Perhaps I wasn't clear but when I mentioned \"malicious adaptation which is performed to \"trick\" the decision maker\" I wasn't referring to adversarial agents who would try to harm the classifier and, therefore, would have a different utility function. In the context of the utility function presented in the paper, I was wondering how the model captures actions taken by the agents that try to \"game\" the classifier, e.g., loan applicants who manipulate their records in order to receive a loan. In the model, would that correspond to a set of actions with mixed positive (repaying debt) and negative (manipulating records) utilities? Or is it equivalent to the experimental setup in Appendix I.4 where all actions have positive (but different) utilities?\n2. The explanations about the model assumptions and how it fits to the setting of algorithmic recourse were helpful. I think that the paper would benefit significantly by a real-world running example (e.g., on lending or hiring) explaining all the steps in the table of page 2, e.g, where does the uncertainty about $\\theta$ come from, what is the signaling policy and why the decision maker commits to it before training, e.t.c.",
" Reply to Question 2:\n\n>Why is the decision maker's utility assumed to be a function of the action a and not a function of x0+Δx(a)?\n\nWhile out of the scope of this work, we believe that allowing for more general models of decision maker utility is an interesting direction for future research. In the context of our running example, the reviewer is correct that a bank may care whether or not someone who receives a loan repays it. However, paying off some existing debt would be desirable to the bank regardless of whether or not the applicant receives a loan, if the applicant's debt is to the bank they are applying to. More generally, we believe it is natural for the decision maker to have some preference over the actions taken by the decision subject, and we chose the decision maker utility to reflect this.",
" Thanks for your thorough review and helpful comments. Please find our responses below.\n\n>Although the technical setup is different, there is some relevant work (see [1]) connecting counterfactual explanations (concept almost equivalent to algorithmic recourse) with strategic machine learning and also tries to compute personalized recommendations that maximize the decision maker's utility. I believe it should be cited and discussed in the \"strategic responses to unknown predictive models\" subsection.\n\nThanks for pointing out this omission. We will include a comparison with this work in the revision.\n\n>Finally, the experimental evaluation is satisfying but I think that it could have been a bit more extensive. My main concern related to the experiments is that the authors assume all the available actions give equal utility to the decision maker and they are all more desirable than the null action that doesn't change the decision subject's features.\n\nOur assumption that all available actions give equal utility to the decision subject was made for convenience. We reran the experiments with different utility values for each action and our overall findings were the same: our optimal signaling policy achieves higher average total utility compared to baselines. See section I.4 in the appendix for these additional experimental results.\n\n>a large part of the strategic machine learning literature is studying malicious adaptation which is performed to \"trick\" the decision maker and therefore, in the current setup, would lead to negative utility. As far as I understand, the authors' framework could capture this by considering a set of actions with a mixture of negative and positive utilities. I believe that, in that case, the results of Figure 1 might have been different. For example, I think that \"Full information\" could lead to lower utility than \"No information\" and it would be interesting to see how the method proposed by the authors would perform in comparison.\n\nWe focus on the setting of strategic (not adversarial) decision subjects (e.g., someone who is applying for a loan cares about receiving the loan, not adversarially harming the performance of the deployed model). However, our model would be able to handle this case by suitably modifying the decision subject's utility function (although this may result in a different definition of \"equivalence region\" than the one we currently use). Since the underlying assumptions about decision subject behavior are different, we agree that our empirical findings in such a setting could be different. \n\nReply to Question 1:\n\n>What is the real need to assume that the decision maker has a prior over the model parameters θ? They have the data, they train the model. Where is the uncertainty about θ coming from if the training process is completely under their control?\n\nThe decision maker has uncertainty about the model because they commit to their signaling policy before the model is trained (and its true value is revealed). While we view the concurrent design of the decision rule and signaling policy as an interesting direction for future work, we would like to point out that this is often not possible under many settings for either practical or institutional reasons. 
In the case of our running example on lending, a credit scoring agency may be in charge of determining the assessment rule, not the bank offering the loan.\n\n>More technically, in the problem definition of Section 4, why does the decision maker sample θ from Π?\n\nThe process of training the model itself can be viewed as sampling the model from a distribution. For example, the process of training the model on i.i.d. sampled data will result in a distribution over models.\n\n>Related to that, why would the decision maker commit to a signaling policy before training?\n\nThe decision maker may commit to a signaling policy before training due to various transparency concerns; for example, due to regulation, or a desire to build trust with their decision subjects, even when the model itself cannot be revealed.\n\n>These look to me like assumptions needed to make the whole thing fit to the Bayesian persuasion setting but they slightly disregard how algorithmic recourse would work in real-life applications. I think these assumptions need to be better motivated in the text.\n\nWhile we disagree that these assumptions somewhat disregard how algorithmic recourse would work in real-life applications (and we hope our answers to the previous part of this question have convinced you otherwise), we agree that these assumptions could be better motivated and we plan on expanding upon them in the revision.\n",
" >What are the main insights from the set of empirical evaluations? Can we design an experiment which tests the method under more realistic assumptions? I guess that both positive or negative results will be interesting in this context.\n\nThe main insight from our empirical evaluations is that persuasion is beneficial across a wide range of strategic decision making settings (i.e. our results are not dependent on specific x(a) and c(a) values). We agree that conducting experiments under more realistic settings (e.g., running mechanical turk experiments) would be interesting.\n\n>Is it possible to add components to the mechanism, or describe a setting in which the decision maker is likely to adhere to their commitment, and don't examine any data privately before committing to the signaling scheme?\n\nOur current model of interaction assumes that the decision maker has the power to commit to a signaling scheme, and adhere to their commitment. In practice, this may be enforceable via laws or regulations. However, it would be interesting to relax this assumption in future work.\n\n>Algorithm 1 assumes that θ can be sampled polynomially-many times. Assuming that sampling from Π can be very costly, is it possible to trade off running time for lower sample complexity?\n\nThere is a trade off between accuracy and number of samples, as the number of samples required for an epsilon approximation grows as 1/epsilon^2. Therefore if the accuracy threshold is decreased, the required number of samples from the distribution (and therefore the runtime of the algorithm) will decrease.\n",
" Thanks for your detailed review. Please find our responses below.\n\nWeaknesses:\n\n>Some of the core assumptions made by the model are not realistic. In particular, assuming that a common prior exists for θ seems highly non-trivial - For example, decision makers usually have a resource advantage, and for example are likely to conduct a market survey revealing more details about Π before committing to a signaling scheme - Breaking the common prior assumption.\n\nWhile the common prior assumption is standard within the Bayesian persuasion literature, the reviewer is correct that many real-world settings exist in which this assumption may be unrealistic. We view this as an interesting and important direction for future work, but we would like to point out that if the decision maker has privileged information such that their prior is better informed than the decision subject’s, it is possible for them to use this to induce a common prior (although doing so may or may not be in their best interest). In the context of our running example on lending, if the decision maker/bank has a resource advantage, they could reveal information from market surveys, etc. on their website such that both the decision maker and decision subjects have the same information (thus inducing the same prior). We will further clarify this point in the next revision of our submission.\n\n>Moreover, model training in the real world is often very costly, so it is more realistic to assume that model parameters θ will be reused many times, and obtaining fresh samples θ∼Π (e.g as assumed by the presented algorithm) will be very costly. In addition, in many realistic settings users have the ability to share data and possibly collude.\n\nThe reviewer is correct that in many settings users have the ability to share data and collude. As we mention in the conclusion, we view this as an important direction for future research.\n\n>Not sure whether this model naturally extends beyond linear classification and one-shot settings. As an extreme example - Will it be realistic to assume that both parties have a common prior over the parameters of a modern, large-scale neural network?\n\nWhile a common prior assumption over neural network parameters is indeed unrealistic, a more pressing issue in this example is the lack of interpretability of complex models like neural networks. Appropriately explaining these models in an actionable manner is a separate but active area of research. Because of their inherent lack of interpretability, decision subjects may have difficulty reasoning about different outcomes if decisions are made using deep neural networks, therefore limiting the effectiveness of persuasion (even if a common prior assumption were to hold). However, extending our results to other interpretable ML models (e.g. decision trees) seems possible and would be interesting.\n\n>It seems to me that the empirical evaluation section mainly provides a numerical validation for the theoretical results, and does not explore much beyond them. As the theoretical model relies on non trivial assumptions, the experiments section can be an opportunity to explore limitations and robustness.\n\nWe would like to point out that we do vary several parameters of the model in our experiments (e.g., cost of actions, how much actions affect changes in observable features - see Figure 2), and show that persuasion has an effect across a wide variety of settings. 
However, we agree it would be interesting to further relax these assumptions.\n\nQuestions:\n\n>Realistic model assumptions - One possible way to model the randomness in model parameters θ is assuming that θ is the result of an Empirical Risk Minimization process on a dataset of feature-label pairs (x1,y1),…,(xn,yn) sampled iid from a distribution D. If we assume this, does a common prior assumption on θ entail equivalent assumption in terms of D? In other words - If we assume that θ is obtained using ERM, is the common prior assumption on θ equivalent to assuming that both Principal and Agent have the ability to train the prediction model themselves?\n\nA common prior assumption on the decision rule could indeed be obtained by a common prior assumption on the distribution of data used to train the model, as long as both the principal and agent agree on the specifics of how the model is trained (e.g., number of training samples, model hyperparameters, etc.)\n\n>What happens if the same value of θ is reused across many instances of the persuasion?\n\nSince our results are in terms of expected utility (where the expectation is with respect to the prior over the decision rule), they hold for both the setting in which the decision rule is trained once and reused multiple times, and the setting in which the model is repeatedly retrained using different data.\n",
" Thanks for your review. Please find our replies to your concerns below.\n\n>My major concern is about the setup of the model. In the model, the decision maker selects both the state (i.e. decision rule) and the signaling strategy. This is different from typical Bayesian persuasion models where the decision maker has no control over the state. Crucially, this raises the question whether it is indeed useful to use signaling in such a model. In other words, why doesn't the decision maker just optimize their strategy about how to choose the decision rule (i.e. optimize the distribution of θ) - I think they can well just do this to acheive the same (or even a better) outcome without using signaling.\n\nWe emphasize that the set up we consider is solely focused on designing the signaling policy when the decision rule is exogenously determined. The reviewer raises the concern that this setup is not adequately justified or well-motivated in the context of algorithmic decision making. While we view the joint optimization of the decision rule and signaling policy as an interesting technical direction for future work, we would like to point out that this is often not possible in many real-world decision making settings, for either practical or institutional reasons. In the context of our running example on lending, a credit scoring agency, not the bank offering the loan, may be in charge of determining the assessment rule used to evaluate decision subjects. Another interaction commonly discussed when making decisions in the presence of strategic agents is one in which a teacher (principal) interacts with a student (strategic agent). See, e.g. [1,2] for papers which consider this setting. Under such a setting, the teacher may not be in charge of designing the evaluation of the student, but may still have knowledge of the exam. For example, the evaluation may be designed by some government agency, as is the case with many standardized tests in the United States, and the teacher may be given access to the exam in advance, or have knowledge of it from previous years of teaching. Under such a setting, the teacher may still wish to offer the student some way of improving their chances of success. While the teacher cannot directly reveal the evaluation to the student, they can recommend actions (e.g., topics to study) in order to give the student a chance to improve themselves. This setting is also captured by our model. We will make this motivation more clear in the introduction. \n\n>The authors did not seem to discuss any potential nagative social impact. I think there is the possibility that banks or financial institutions use their information advantage to manipulate customers to act in favor of them but not in favor of the social welfare.\n\nWe would like to point out that we do discuss potential negative social impacts in Appendix A (this is mentioned in the checklist). However, we agree with the reviewer that a discussion of potential negative social impacts is important for this type of work. We will move this discussion to the main body of the paper.\n\n[1]: Jon Kleinberg and Manish Raghavan. How do classifiers induce agents to invest effort strategically? ACM Transactions of Economics and Computation (EC), 2019.\n\n[2]: Keegan Harris, Hoda Heidari, and Zhiwei Steven Wu. Stateful Strategic Regression. Neural Information Processing Systems (NeurIPS), 2021.",
" Thanks for your review. Since your mentioned weaknesses of our work are all listed as questions, we only address the questions.\n\n>When there is a heterogeneous group of decision subjects, will the decision maker assign the same threshold parameter θ for all participants or assign different parameters from the same distribution?\n\nOur results hold in expectation over the distribution on θ. Thus, our results are applicable to both the setting in which the decision maker uses the same decision rule for each decision subject and the setting in which the decision maker uses different decision rules drawn from the same distribution. We note, however, that in some domains, the decision-maker may be bound to apply the same rule to all subjects.\n\n>I wonder whether the utility function, actions, and related rewards and consumptions of the decision subject are public information available to the decision maker. If it is true, the decision subject cannot make profits based on private knowledge. Thus, this model might not be able to reflect manipulations by the decision subject.\n\nThe reviewer is correct that, in our model, the decision maker knows the decision subject’s set of actions and utility function, while the decision subject does not have any private knowledge. We view addressing the setting where the decision subject has private knowledge as an interesting extension for future work. However, we would like to emphasize that the decision subject having private information is not a prerequisite for having the ability to pick an action which maximizes their utility in expectation.\nFurthermore, we would like to emphasize that the main focus of our work is to study settings in which the decision rule is unknown to the decision subjects, as full information about the decision rule being used is an unrealistic assumption made by many works studying high-stakes decision making (see, e.g., [18,19,20,23,29] our submission). \n\n>For the numerical experiments part, what is the difference between the obtained utility of the decision subject in your model and that in the full information model?\n\nThe “full information” results refer to a particular policy instantiation in which the decision maker completely reveals the assessment rule to the decision subject. We include this policy as a baseline because most standard “strategic learning” work (e.g., [18,19,20,23,29]) assumes the decision subject has complete knowledge of the assessment rule.",
" This paper models a strategic interaction between a decision maker and a decision subject in an incomplete information game using Bayesian persuasion, which can be applied to credit scoring. Specifically, the utilities of both the decision maker and decision subject depend on the action of each other, while their true decision rules are not public information. Moreover, there is an action recommendation system designed by the decision maker to incentivize the decision subject to modify their actions. In their model, the authors show that the decision maker can design an incentive-compatible recommendation system such that the modified action of the decision subject will benefit both participants. Furthermore, they develop an algorithm to approximately build this system within polynomial time. Finally, they also use numerical experiments to illustrate the benefits of their approach. Strengths:\n\n1. This is the first work to use Bayesian persuasion to model the interaction described in the summary part based on the previous literature.\n \n2. In their model, the authors show the existence of an incentive-compatible action recommendation system.\n \n3. The authors develop an approach to construct this system and another time-efficient algorithm to approximately design the desired system in polynomial time.\n\nWeaknesses:\n\n1. The model only contains one decision maker and one single decision subject. Although the authors state that this model can be extended to a heterogeneous case with heterogeneous decision subjects, I still have one question listed in the question part below.\n \n2. The authors may have implicitly assumed that the utility function, actions, and corresponding consequences of the decision subject are known to the decision maker when designing the algorithm. Please see bullet 2 in the question part for more detail. \n 1. I have a question about the extension to multiple decision subjects. When there is a heterogeneous group of decision subjects, will the decision maker assign the same threshold parameter $\\theta$ for all participants or assign different parameters from the same distribution?\n \n2. Another question is about manipulations, which is mentioned in the introduction as follows ''The question we are interested in answering in this work is: how can the decision maker incentivize decision subjects to take such beneficial actions while discouraging manipulations?\". I wonder whether the utility function, actions, and related rewards and consumptions of the decision subject are public information available to the decision maker. If it is true, the decision subject cannot make profits based on private knowledge. Thus, this model might not be able to reflect manipulations by the decision subject.\n \n \n3. For the numerical experiments part, what is the difference between the obtained utility of the decision subject in your model and that in the full information model? I think there is no potential negative social impact of the work.",
" The paper studies an application of Bayesian persuasion to a linear classification model. In this model, a decision maker uses a linear decision rule to make a decision on a subject, which is represented by a feature vector. The subject is uncertain about the specific rule the decision maker uses to reach the decision but only has a prior belief over it. The decision maker can then strategically reveal this information to persuade the subject to take certain actions. The goal of the study is to design an algorithm to compute the optimal persuasion strategy, so that the decision maker's utility is maximized. The model is original. The idea of applying Bayesian persuasion to the linear classification model is interesting. The paper is well-writen and clear overall. The thecnical results look somewhat standard and unsurprising, following mostly the standard approach to solving the private signaling problem. \n\nMy major concern is about the setup of the model. In the model, the decision maker selects both the state (i.e. decision rule) and the signaling strategy. This is different from typical Bayesian persuasion models where the decision maker has no control over the state. Crucially, this raises the question whether it is indeed useful to use signaling in such a model. In other words, why doesn't the decision maker just optimize their strategy about how to choose the decision rule (i.e. optimize the distribution of $\\theta$) - I think they can well just do this to acheive the same (or even a better) outcome without using signaling. (Analogously, in Stackelberg games, the leader does not benifit from signaling when they already have the power to make a commitment.) \n\nThe authors didn't seem to have justified this setup adequately in the paper. I tried to come up with a justification but found it hard to justify the model if the decision maker has control over $\\theta$ but chooses to _not_ optimize the distribution of it. So it seems that this setup is justifiable only in the case where the decision maker cannot choose the decision rule. Nevertheless, this seems to deviate from the motivating examples (eg. bank and a customer applying for a loan): one particular question is if the decision maker does not choose the rule, then who does that in these examples? Notice that this is different than saying that the decision maker cannot control the prior belief of the subject - even when the decision maker cannot control the subject's prior belief, they can still optimize their actual selection of $\\theta$. As described above, it would be great if the authors could explain why the decision maker does not directly optimize the distribution of $\\theta$ but choose to use signaling. The authors did not seem to discuss any potential nagative social impact. I think there is the possibility that banks or financial institutions use their information advantage to manipulate customers to act in favor of them but not in favor of the social welfare.",
" This work explores the Algorithmic Recourse problem through the lens of Bayesian Persuasion. The recourse problem is cast as a persuasion problem, where the Decision Maker/Principal is the operator of machine learning system (represented as a classifier), and the Decision Subject/Agent is the consumer of prediction. Motivating example is online banking, where the decision subject applies for a loan, and the classifier decides if they are eligible.\n\nClassifiers are assumed to be linear $y=\\mathrm{sign}(\\theta^T x)$, where $x$ is the feature vector, and the weights $\\theta$ are assumed to be distributed according to a common prior $\\theta \\sim \\Pi$. For the interaction protocol, the decision maker first commits to a signaling scheme which is the agent action to recommend as a function of model weights $\\theta$. After the signaling scheme is set, an instance $\\theta \\sim \\Pi$ is picked at random and revealed to the decision maker, the signaling scheme is applied, and the decision subject takes the rationally-optimal action according to their posterior belief. In this persuasion setting, actions $a\\in\\mathcal{A}$ are modeled as changes to the feature vector, such that $x’=x+\\Delta(a)$ when action $a$ is taken. Each action is also assumed to entail a cost $c(a)$.\n\nAuthors present three sets of results. The first result attempts to illustrate the importance of taking persuasion considerations into account by presenting an instance of the problem in which the utility discrepancy between naive and strategic decision maker behavior can be arbitrarily close to the maximal discrepancy (Proposition 3.2). The second set of results discusses the computational complexity of the problem, and presents an efficient sampling-based algorithm which calculates an approximately optimal policy in $poly(m,\\frac{1}{\\epsilon})$ time (Theorem 4.3). The algorithm requires a polynomial number of samples of $\\theta \\sim \\Pi$. The third set of results is an empirical evaluation using the HELOC lending dataset. Strengths\n* Problem is well-motivated. From the theoretical perspective, it is interesting to find new domains where the Bayesian Persuasion perspective can be useful. From the practical perspective, strategic response and recourse are becoming increasingly prevalent in real-world systems.\n* Writing is clear and easy to follow, mathematical limitations of results are clearly illustrated.\n* Work brings up interesting directions for discussion and future work.\n\nWeaknesses \n* Some of the core assumptions made by the model are not realistic. In particular, assuming that a common prior exists for $\\theta$ seems highly non-trivial - For example, decision makers usually have a resource advantage, and for example are likely to conduct a market survey revealing more details about $\\Pi$ before committing to a signaling scheme - Breaking the common prior assumption. Moreover, model training in the real world is often very costly, so it is more realistic to assume that model parameters $\\theta$ will be reused many times, and obtaining fresh samples $\\theta \\sim \\Pi$ (e.g as assumed by the presented algorithm) will be very costly. In addition, in many realistic settings users have the ability to share data and possibly collude.\n* Not sure whether this model naturally extends beyond linear classification and one-shot settings. 
As an extreme example - Will it be realistic to assume that both parties have a common prior over the parameters of a modern, large-scale neural network?\n* It seems to me that the empirical evaluation section mainly provides a numerical validation for the theoretical results, and does not explore much beyond them. As the theoretical model relies on non trivial assumptions, the experiments section can be an opportunity to explore limitations and robustness.\n * Realistic model assumptions - One possible way to model the randomness in model parameters $\\theta$ is assuming that $\\theta$ is the result of an Empirical Risk Minimization process on a dataset of feature-label pairs $(x_1,y_1),\\dots,(x_n,y_n)$ sampled iid from a distribution $\\mathcal{D}$. If we assume this, does a common prior assumption on $\\theta$ entail equivalent assumption in terms of $\\mathcal{D}$? In other words - If we assume that $\\theta$ is obtained using ERM, is the common prior assumption on $\\theta$ equivalent to assuming that both Principal and Agent have the ability to train the prediction model themselves?\n* What happens if the same value of $\\theta$ is reused across many instances of the persuasion?\n* What are the main insights from the set of empirical evaluations? Can we design an experiment which tests the method under more realistic assumptions? I guess that both positive or negative results will be interesting in this context.\n* Is it possible to add components to the mechanism, or describe a setting in which the decision maker is likely to adhere to their commitment, and don't examine any data privately before committing to the signaling scheme?\n* Algorithm 1 assumes that $\\theta$ can be sampled polynomially-many times. Assuming that sampling from $\\Pi$ can be very costly, is it possible to trade off running time for lower sample complexity?\n Main limitation of this work from my perspective is the modeling assumptions, which may not hold in many realistic use cases. I think these should be outlined as a basis for further discussion. See details above.\n",
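To make the reviewed model concrete, here is a minimal sketch of its primitives: the common prior over θ, actions as feature shifts Δ(a) with costs c(a), and the agent's best response under a belief. All numbers and the specific utility form are hypothetical, chosen for illustration rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

x0 = np.array([0.2, -0.1])              # decision subject's current features
deltas = np.array([[0.0, 0.0],          # Delta(a): feature change per action
                   [0.5, 0.0],          # (action 0 is the null action)
                   [0.0, 0.5]])
costs = np.array([0.0, 0.3, 0.2])       # c(a): cost of each action
prior = rng.normal(size=(500, 2))       # samples theta ~ Pi (the common prior)

def expected_utility(a, thetas):
    """P(sign(theta^T (x0 + Delta(a))) = +1) under the belief, minus c(a)."""
    scores = (x0 + deltas[a]) @ thetas.T
    return np.mean(scores > 0) - costs[a]

def best_response(posterior):
    """The rationally-optimal action under the agent's posterior belief."""
    return int(np.argmax([expected_utility(a, posterior)
                          for a in range(len(costs))]))

print("best response under the prior:", best_response(prior))
```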
" The paper studies a setting of algorithmic recourse in the classification setting, where the decision subjects present strategic behavior. The authors propose a model, using the framework of Bayesian persuasion, where a decision maker and a decision subject have prior beliefs about the parameters of a predictive model. The decision maker first commits to a form of transparency (signaling policy), trains the predictive model and communicates to a decision subject which action to take in order to change their features in a way that the model gives them a positive prediction. The decision subject, based on the recommendation, adapts their posterior belief about the model parameters and takes an action that maximizes their expected utility. The goal of the paper is to compute a signaling policy that maximizes the decision maker's expected utility under the constraint of \"Bayesian incentive compatibility\", i.e., that it is in the best interest of the decision subject to take the action recommended to them by the decision maker. The authors propose an approximation algorithm for the problem, based on linear programming, and evaluate their methodology using semi-synthetic data. Originality: To the best of my knowledge, this is the first paper framing algorithmic recourse as an instance of Bayesian persuasion. This connection is a novel and conceptually interesting contribution. The paper contains a quite comprehensive related work section discussing work on strategic machine learning, algorithmic recourse and bayesian persuasion. Although the technical setup is different, there is some relevant work (see [1]) connecting counterfactual explanations (concept almost equivalent to algorithmic recourse) with strategic machine learning and also tries to compute personalized recommendations that maximize the decision maker's utility. I believe it should be cited and discussed in the \"strategic responses to unknown predictive models\" subsection.\n\nQuality: I found the paper technically sound. I read part of the proofs in the Appendix and they appear to be correct. The framing of algorithmic recourse as an instance of Bayesian persuasion and the overall formulation is mostly reasonable. However, there are certain assumptions of the model that I am not sure to what extent they reflect the real-life problem that the paper is studying. I elaborate more on these assumptions in the \"Questions\" section. Finally, the experimental evaluation is satisfying but I think that it could have been a bit more extensive. My main concern related to the experiments is that the authors assume all the available actions give equal utility to the decision maker and they are all more desirable than the null action that doesn't change the decision subject's features. However, a large part of the strategic machine learning literature is studying malicious adaptation which is performed to \"trick\" the decision maker and therefore, in the current setup, would lead to negative utility. As far as I understand, the authors' framework could capture this by considering a set of actions with a mixture of negative and positive utilities. I believe that, in that case, the results of Figure 1 might have been different. For example, I think that \"Full information\" could lead to lower utility than \"No information\" and it would be interesting to see how the method proposed by the authors would perform in comparison.\n\nClarity: The paper is nicely written and easy to follow. 
I have a few minor suggestions for improvement which I list below.\n1. Starting in line 202, the authors discuss the assumption that the decision maker publishes and commits to a signaling policy. However, the arguments presented are trying to fit algorithmic recourse to the Bayesian persuasion setting rather than explaining how the interaction described in the table of page 2 would work in practice. It is not clear to me what it means for a decision maker to publish a signaling policy in lending, college admissions, hiring e.t.c. I think the authors could improve this paragraph by adding some real-life examples.\n2. There is a full stop punctuation mark missing in line 206.\n3. In lines 210-211 the authors talk about \"prior beliefs $\\Pi$ over the observable features\" which kind of implies that $\\Pi$ is a belief about the distribution of features $x$. However, as far as I understand, $\\Pi$ is a belief about the model parameters $\\theta$. I think the aforementioned phrase is causing confusion.\n4. There is a citation number missing in line 331.\n5. I would encourage the authors to bring Algorithm 1 to the main body of the paper. The authors provide a high-level description of it but, since the algorithm itself is not super complicated and it is their main technical tool towards solving their problem, it shouldn't be left for the appendix.\n6. In lines 335 & 337, the authors mention that the algorithm leads to an $\\epsilon$-BIC signaling policy, however, only the definition of a BIC policy has been given in the paper so far and $\\epsilon$-BIC is undefined.\n7. In the experimental section, it would be useful if the authors presented in detail their baselines \"full information\" and \"no information\". To my understanding, these two correspond to the decision subject best-responding based on a posterior $\\Pi'$ which is (i) a point mass on $\\theta$ and (ii) equal to the prior $\\Pi$. If that is correct, I think the authors should write it explicitly when describing the experimental setup.\n8. There is a missing reference in line 652 (Appendix F).\n\nSignificance: I think the paper carries interesting ideas regarding the general problem of algorithmic recourse, namely that the decision subjects have prior beliefs about the decision rule and that the decision maker can provide recommendations to incentivize beneficial long-term outcomes (e.g., repaying existing debts instead of manipulating financial records in the credit scoring example). Therefore, the conceptual contributions are important and the accompanying methodology is sound.\n\n[1] Tsirtsis, Stratis, and Manuel Gomez Rodriguez. \"Decisions, counterfactual explanations and strategic behavior.\" Advances in Neural Information Processing Systems 33 (2020): 16749-16760.\n\nPOST REBUTTAL\n-------------------------------------\nI read the authors response and the authors addressed most of my concerns about the proposed model. I keep my initial score, in favor of accepting the paper. My 2 main concerns/questions regarding the assumptions of the proposed model that I would like the authors to discuss are the following:\n1. What is the real need to assume that the decision maker has a prior over the model parameters $\\theta$? They have the data, they train the model. Where is the uncertainty about $\\theta$ coming from if the training process is completely under their control? More technically, in the problem definition of Section 4, why does the decision maker sample $\\theta$ from $\\Pi$? 
Related to that, why would the decision maker commit to a signaling policy before training? These look to me like assumptions needed to make the whole thing fit to the Bayesian persuasion setting but they slightly disregard how algorithmic recourse would work in real-life applications. I think these assumptions need to be better motivated in the text.\n2. Why is the decision maker's utility assumed to be a function of the action $a$ and not a function of $x_0 +\\Delta x(a)$? If it is independent of $x_0$, then the decision maker gets the same utility from two decision subjects taking the same action but with one of them crossing the threshold and the other one not crossing it. This sounds a bit weird to me. Is a completely untrustworthy loan applicant who pays some of their existing debt the same as a borderline candidate who repays some of their existing debt? How is it natural that the final output of the classifier doesn't contribute to the decision maker's utility? I think the authors sufficiently discuss the limitations of their methodology, specifically that it focuses solely on linear models and it doesn't consider shared information between different decision subjects. The societal implications section is also ok."
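For readers who want a concrete picture of the LP machinery discussed in these reviews, here is a hedged sketch of a sampling-based BIC signaling LP in the spirit of the paper's Algorithm 1. The utilities, sample sizes, and exact constraint set are our illustrative choices, not the authors' specification: over sampled θ_1, ..., θ_N, choose x[i, a] = P(recommend a | θ_i) maximizing the principal's average utility, subject to the agent weakly preferring each recommended action.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
N, m = 50, 3                          # sampled models theta_i, number of actions
u_principal = rng.uniform(size=m)     # hypothetical principal utility per action
u_agent = rng.uniform(size=(N, m))    # hypothetical agent utility u_agent[i, a]

# Variables x[i, a] = P(recommend a | theta_i), flattened row-major.
c = -np.tile(u_principal, N) / N      # linprog minimizes, so negate

# BIC: for every recommendation a and deviation b,
#   sum_i x[i, a] * (u_agent[i, a] - u_agent[i, b]) >= 0.
A_ub, b_ub = [], []
for a in range(m):
    for b in range(m):
        if a != b:
            row = np.zeros(N * m)
            row[a::m] = u_agent[:, b] - u_agent[:, a]   # entries for x[:, a]
            A_ub.append(row)
            b_ub.append(0.0)

# Each conditional signaling distribution must sum to one.
A_eq = np.zeros((N, N * m))
for i in range(N):
    A_eq[i, i * m:(i + 1) * m] = 1.0

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=A_eq, b_eq=np.ones(N), bounds=(0, 1))
policy = res.x.reshape(N, m)          # approximately optimal BIC signaling policy
print("principal's expected utility:", -res.fun)
```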
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
4
] | [
"ccG87ebyiv",
"dk4HwsacppA",
"sCZLODtXMUD",
"MWV8K2FV7wz",
"5Sz35CQfFMk",
"uRPVNdAD9lu",
"mbqQE1zg0_F",
"T0tHVWXIOT0",
"aQd1HOVxSgW",
"EYagEYKJFUw",
"nMuA8WRQ1R_",
"o76UwI-aDky5",
"YSNUauR2XXy",
"mbqQE1zg0_F",
"jJ7nPtX0LPO",
"nips_2022_Z6BFQqzwuS4",
"nips_2022_Z6BFQqzwuS4",
"nips_2022_Z6BFQqzwuS4",
"nips_2022_Z6BFQqzwuS4"
] |
nips_2022_0IywQ8uxJx | Graph Neural Networks as Gradient Flows | Dynamical systems minimizing an energy are ubiquitous in geometry and physics. We propose a gradient flow framework for GNNs where the equations follow the direction of steepest descent of a learnable energy. This approach allows us to analyse the GNN evolution from a multi-particle perspective as learning attractive and repulsive forces in feature space via the positive and negative eigenvalues of a symmetric `channel-mixing' matrix. We perform spectral analysis of the solutions and conclude that gradient flow graph convolutional models can induce a dynamics dominated by the graph high frequencies, which is desirable for heterophilic datasets. We also describe structural constraints on common GNN architectures allowing us to interpret them as gradient flows. We perform thorough ablation studies corroborating our theoretical analysis and show competitive performance of simple and lightweight models on real-world homophilic and heterophilic datasets. | Reject | The authors present a graph neural network for heterophilic data using gradient flows. The proposed architecture is quite simple... large sections of the architecture are fully linear dynamical systems rather than neural networks, and still achieve roughly SotA results on standard graph learning benchmarks. There was a significant amount of disagreement between the reviewers. Some seemed to think the strength of mostly linear methods meant that the benchmarks were too easy, but these are standard graph neural network benchmarks. A simple model performing well is not a negative, and can often be useful for puncturing hype (e.g. https://arxiv.org/abs/2206.13211). Simple architectures can also be useful for providing analytic insights which might get obscured in more complex models. Some reviewers seemed concerned about the scaling of certain tools (e.g. graph Laplacian eigenvectors), but these tools are only used for analysis, not for training. Nevertheless, I feel that there were enough general concerns around the paper that I have a difficult time recommending acceptance. Even if the purpose of the paper is primarily to drive analytic insights rather than achieve SotA results on big benchmarks, I would recommend that the authors show how these analytic insights can be used to improve models on big datasets to strengthen the paper. | train | [
"lBW95jdWgB",
"3is4MgxD7Nj",
"tSTeYcDuGRJ",
"-jq6jCQJF06",
"qoUYZ4bc3rTm",
"kku2Q_X6hov",
"O675c48meXv",
"a71FXRX1YOJ",
"J7aG-6fvpaU",
"49yUH8Eq2A0",
"i1txvnK97-p",
"uJnCGKuejv",
"7aZKQESKNqX-",
"mXjLJIdiNTQ",
"t6WbmVZAMSO",
"l0_mNZ4v0yI",
"K9JmBtEsalG",
"HgeILay1TX9",
"lgkAtxhil1N",
"6ayQowmlDyi",
"jxDn0fiTC3U",
"zkVQHhU_NoS",
"R2n3t2ffnkS",
"F-Oh_ruEYzA"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the response and finding our contribution significant. Some final points:\n\n- The analysis of non-linear activations in Proposition 3.2 and the whole Section E in the SM is quite novel in the GNN literature and in fact as you acknowledged it is much more common to have theoretical analysis restricted to the linear case. This effectively puts our framework into the expressive power landscape of other MPNNs; in general though, we are quite unsure about what you mean by \"expressive power\". If you look at all the references we shared above that are aimed at targeting heterophily, they do not have expressive power analysis (either at all or at least in a conventional way). In fact, our LFD and HFD characterizations arguably represent meaningful ways (but of course not exhaustive yet) of studying expressive power for node-classification tasks where one has also node-wise features rather than graph isomorphism tests.\n\n- Thank you for the reference, this will definitely be accounted for in future evaluations since we believe in this new way of thinking about GNNs as functionals and their minimal action. In this submission we were very much interested in the theory and in the simplest formalization of quadratic energy, but much more can be said and further and better evaluation is on our table when using more `sophisticated' gradient flows.\n\n- We agree on the ambiguity of the word explainable and accordingly have removed all its occurrences except one in the intro. This will be removed in a camera ready version where we will incorporate all the feedback to better rephrase the goals and contributions of our submission in light of the feedback we have received. \n\n- Concerning the empirical evaluation not being strong we again agree with you but need to point out that this is on par with several recent papers that do not share the level of theoretical investigation of our work. We are the $\\textbf{first}$ proposing this new angle for studying GNNs, investigating the role of the channel-mixing spectrum and also (thanks to the feedback) proposing energy dissipation arguments when using non-linear activations. \n\nOnce again, thank you for your time and for the suggestions to improve the paper.",
" Thank you for your further detailed response. A couple of final important points:\n\n- \"But - they did not have such strong claims regarding the non-linearities being unnecessary. \" If you look for claims in our paper, we never claimed that non-linear maps are never needed. We simply observed (as corroborated by our ablations) that on these datasets -- which are the same used by many other recent papers targeting heterophily -- non-linear activations may be removed without paying a price in performance. To further align with your feedback, we have revised the pdf and replaced the only occurrence of \"complex benchmarks\" with \"heterophilic benchmarks\". If you search for other mentions of non-linear maps not being useful or other strong claims, you won't find any in our main file. Most importantly, our new theoretical analysis also confirms that one can use non-linear activations and still retain the energy dissipation interpretation, which is generally non-trivial in dynamical systems. Therefore the new theory (Proposition 3.2) effectively \n $\\textbf{makes all the discussion}$ $\\textbf{about linear vs non-linear redundant}$ since we can use non-linear maps in this framework -- we hope you can see that. \n\n- About the GPRGNN results, they use an easier split of 60/20/20 rather than the more commonly adopted 48/32/20 which papers like GGCN, Sheaf, and us rely on.\n\n- About being top 3: this is not an empirical work and conferences like NeurIPS are not just about beating baselines and we hope the reviewers agree on that given that arguably not all papers share the same level of theoretical analysis and investigation we provided. The purpose of this submission is highlighting a new way of thinking about GNNs where we parametrize an energy rather than an equation. As experimental analysis, we showed that equations that are precisely those given to us by the theory can be competitive on benchmarks commonly adopted in literature. \n\n- About the role of depth, most papers that stack many layers have time dependent weights that can suppress later contributions. This is expectable and indeed very often it is a case of maintaining the same performance with deeper architectures and not achieving much better numbers. In our theoretical analysis we actually do not expect that adding many layers will be beneficial because it's never the case that converging to either low-frequency or high-frequency eigenspaces of $\\boldsymbol{\\Delta}$ would be optimal for the given classification task. In fact, note that as argued in a different response, our analysis can be seen as studying of the energy landscape and highlights what the dynamics will converge to in infinite time (and how fast). We agree that general over-smoothing experiments are relevant and we plan to add those in the non-linear setting described in Section B.3 where one can go deeper in a principled manner while stile minimizing an energy with now a different landscape. This is a broader scope that we are reserving for future work.\n\n- The D variant is not generally better performant. It works a bit better on the small heterophilic datasets. Note that we have a learnable component in the encoding and decoding step so effectively given uniformly sampled eigenvalues from -1 to 1 we can always learn to re-order the features so tat the negative valued entries have repulsion and the positive ones have attraction so one would not expect much variation wrt the uniform sampling in such range (given the number of points sampled, usually 64). 
However, this is not enough on the larger heterophilic datasets where learning a dense channel-mixing is significantly better usually.\n\n- Regarding strong statements, we have addressed this point above.\n\n- Regarding the writing, we put a lot of efforts into conveying the theoretical message without diluting the mathematical details to avoid statements that are more accessible but also more vague and less transparent.\n\nThank you for your time.",
" A big thank you for the detailed rebuttal both to myself, and other reviewers, which I have carefully read. I appreciate the efforts to clarify your contribution, which I continue to think is significant. \n\nLet me clarify and summarize my thinking on a couple of points:\n\n- I am glad to see the new theoretical results for non-linear activations.\n\n- You are not 'paying a price' in terms of my accept/reject assessment for showing that linear activations suffice. Actually I think this observation makes a useful contribution towards demonstrating that the datasets considered by yourself and prior work are too simple. Reiterating my earlier comment, the fact that an MLP is general within a few percent of the best model is another angle on this point. So overall, _I don't hold the choice of datasets against you_ in any way, but do think that the area as a whole should move to more challenging benchmarks. There do exist such datasets (e.g., https://arxiv.org/pdf/2104.01404.pdf) which I hope you might consider for future projects. \n\n- I do not think that the synthetic experiment amounts to an adequate demonstration of your model being \"explainable\". Talking in the abstract, the phrase \"explainable\" has no precise meaning and is best avoided in a work that is otherwise rigorous in terminology, arguments etc. I am glad to hear you have removed some mentions to explainability, but notice that the word \"explainable\" or \"explainability\" still appears six times in the document. I strongly urge you completely remove all mention. I am not asking this for my benefit but yours, since the mention of explainability devalues the scientific seriousness of your work. I am not going to penalize you on this point since it is not related to the contribution of your paper, but I do feel strongly nonetheless. \n\nIn all I remain in favor of this work being accepted, and will make this case during reviewer discussions. However I do not plan to raise my score further as I still think that the expressive power of these models is not clear, and the empirical results remain not especially strong. \n\nBest\n\n",
" Dear Authors,\n\nThank you for the detailed and long response. I still find it hard to recommend the acceptance of your paper. \n\nMy point 1 remains. I agree that other papers showed only these experiments. But - they did not have such strong claims regarding the non-linearities being unnecessary. Non-linearities are not necessary for node classification, which you show, but probably not for other problems. Again - this is against the whole concept of deep neural networks. The authors compare their method to GRAND and GCNII which show more results. Actually, GRAND shows completely different results than in this paper, and the results in this paper are new runs, as far as I can tell. Why don't the authors compare their method to GRAND (Table 1 or 2) and GCNII (Tab 2) on the splits reported there? Overall - to make such strong claims, the authors need to show more empirical evidence, in this reviewer's opinion. See also experiments in GraphCon and GATv2, that the authors can choose from. \nAlso, for some reason, when opening the paper GPRGNN, different and better results are presented: see table 2 here: https://arxiv.org/pdf/2006.07988.pdf (e.g., see Texas and Cornell). Is there an explanation for that? \n\nMy point 2 also remains: I'm not sure that being top 3 on one task is in line with the empirical evaluation that is required in NeurIPS. I will let the other reviewers and AC decide on that.\n\nMy point 3 also remains. As stated in the original review, I read the theorem in the original line 249. I also agree that one should not confuse performance degradation and over-smoothing. However, the authors do not show how their method behaves when adding more layers, as shown in many other works. Maybe the proof somehow misses something that we cannot find, or, maybe there is a difficulty in the learning process when adding more layers. Theorem 4.3 should be empirically validated, as done in other works. See Fig 2 in GraphCon, Tab 1 in EGNN, Tab 3 in GCNII, Tab 2 in GGCN, Fig 2 in GRAND, etc. \n\nRegarding the D variant: The D variant shows significantly better accuracy results, but the authors claim in their rebuttal that it is only a conceptual experiment. How can the best-performing method be a conceptual experiment only? The authors also do not address the variability of the D variant which is uninitialized between -1 to 1, and is not trained. As the authors discuss, the sign of the diagonal entries determines the type of interaction between nodes. Clearly, this has an impact on the action of the GNN, and therefore it is not convincing to me that various random initializations do not change the results. \n\nRegarding strong statements re non-linearities mentioned in point 1: The final statement, in conclusion, claims that complex datasets can be approached with simple networks. However, the tested datasets are rather small and simple, and additional experiments with complex datasets should be evaluated to reach such conclusions.\n\nI still think that the writing is hard to follow. \n\nBecause of the remaining issues above, I keep my original score.\n\nSincerely, \nReviewer 13DY \n\n\n\n\n\n ",
" Thanks for clarifying this further and for taking the time to explain this to us. We address your points about the minimization process and how the learnable parameters have an impact on what we can learn (along with features). Note that below $\\mathbf{W}$ is learnable and hence represents parameters. In summary, our theoretical analysis gives us a very good picture of the energy landscape since we can both determine how the learnable parameters $\\mathbf{W}$ affect where our normalized solution converges to and how fast such convergence is happening (as a function of both graph information and learnable $\\mathbf{W}$). \n\nIn fact, we would like to emphasize that to the best of our knowledge this is the $\\textbf{first work}$ in GNN highlighting convergence of the solution to states that are not in the Laplacian kernel and can hence carry information from input features and graph structure (indeed we can explain how the learnable parameters $\\mathbf{W}$ can guide such convergence through its spectrum).\n\n More details to follow:\n\n- Our theoretical analysis is precisely about the impact of the learnable channel-mixing $\\mathbf{W}$ on the minimization process and indeed we have quite a good understanding of how such landscape looks like. Proposition 3.1 shows that if the learned parameters have more mass on the negative eigenvalues (in the precise way stated in the Proposition), then the normalised solution will converge to the span of the largest frequency eigenvector of the Laplacian which hence represents (if we let the dynamics run for infinite time until convergence) the landscape where my minimum lives. Where exactly in this subspace the solution converges to is a function of the initialized features. The latter are partly given by the problem and partly learned via an encoding. Conversely, if we have more mass on the positive eigenvalues of $\\mathbf{W}$, then the normalized solution will converge to the span of the lowest frequency eigenvector of the Laplacian. So in general one has a pretty clear picture of what the dynamics is accomplishing and where the minima are going to sit.\n\n- An important point though is that while for losses one would like to converge to lower (ideally unique) minima, in the case where node features are updated following a gradient flow this may not be the case. This is similar to the ODE case where the existence of a Lyapunov function tells you something about fixed points but you may want to learn (or tune) the integration time and perhaps even halt the evolution before convergence. To put it more concretely: in some cases (homophilic graphs), $\\textit{some}$ smoothing is beneficial (so you would most likely see that the learned channel-mixing $\\mathbf{W}$ has more positive eigenvalues to mostly induce attraction). However if you run it for too long and hence arrive to convergence to the minima (after normalization), then you will sit in the Laplacian kernel meaning that the only information left to separate node representations are degrees (which is the classical over-smoothing problem). This of course is not desirable, which is also why architectures like GCN are generally shallow (short integration time).\n\n- Another subtle but important point is the initial condition. In principle one has a node-wise initial encoding step that can learn where in the energy profile we should start. This can be very useful. 
To give you an example, assume that the largest frequency Laplacian eigenvector $\\mathbf{v}$ is not that good for classification, but the second highest frequency one $\\mathbf{u}$ is. Then the initial encoding could learn to give us an initial condition that is orthogonal to $\\mathbf{v}$ such that if we learn $\\mathbf{W}$ with more negative eigenvalues as before, then the normalized solution converges to a minima that is in the span of $\\mathbf{u}$. In principle this would also help us into controlling where in the span of $\\mathbf{u}$ we are converging to i.e. as you raised in your second point which degenerate state we end up converging to. Note that this can also be aid by the decoding step that ideally would learn to choose among the degenerate states the one with better separation power. \n\n\nWe thank the reviewer for bringing this point up which we will investigate even further in the future. We hope that at least regarding the scope of our submission, we have clarified the point about the landscape of the minimization process.",
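A minimal numerical sketch of the convergence behaviour described in this response, under simplifying assumptions of ours (the flow is taken to be dF/dt = Ā F W with a diagonal symmetric W; the paper's full parametrization carries additional terms): a repulsion-dominated spectrum of W drives the normalized solution toward the highest graph frequency, while an attraction-dominated one drives it toward the lowest.

```python
import numpy as np

rng = np.random.default_rng(3)

# A small random undirected graph and its symmetric normalized adjacency A_bar.
n = 20
A = (rng.uniform(size=(n, n)) < 0.3).astype(float)
A = np.triu(A, 1)
A = A + A.T
deg = A.sum(1)
deg[deg == 0] = 1.0                       # guard against isolated nodes
A_bar = A / np.sqrt(np.outer(deg, deg))

lam, phi = np.linalg.eigh(A_bar)          # ascending: phi[:, 0] = highest graph
                                          # frequency, phi[:, -1] = lowest

def run(W, steps=200, tau=0.2):
    """Forward-Euler discretization of the linear flow dF/dt = A_bar @ F @ W."""
    F = rng.normal(size=(n, W.shape[0]))
    for _ in range(steps):
        F = F + tau * A_bar @ F @ W
    return F / np.linalg.norm(F)

def energy_on(F, v):
    """Fraction of the (normalized) features' energy on graph eigenvector v."""
    return np.linalg.norm(v @ F) ** 2

W_repulsive = np.diag([-2.0, 0.5])        # dominant negative eigenvalue
W_attractive = np.diag([2.0, -0.5])       # dominant positive eigenvalue
print("HFD:", energy_on(run(W_repulsive), phi[:, 0]))    # close to 1
print("LFD:", energy_on(run(W_attractive), phi[:, -1]))  # close to 1
```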
" Thank you for clarifying the points in detail. I agree with the authors' comment partially. However, I believe there has been some communication gaps as detailed below.\n> The energy is a functional of the node representations that will determine how the nodes get updated in a GNN. \n\n1. In the formulation presented by the authors, the node update is carried out based on the gradient of a parametric energy functional. Specifically, the authors show that the node features can be updated along the direction of energy minimization $\\dot{\\varepsilon}_\\theta(\\textbf{F}(t))=-|| \\nabla_\\textbf{F} \\varepsilon_\\theta(\\textbf{F}(t))||^2$). My specific question was regarding the nature of the landscape of the **learned energy** as a function of the parameters $\\theta$ and $F$, which was inadvertently written as loss landscape. \n2. Additionally, the node features are not unique and is simply a representation of the node. Hence, multiple node features can result in the same output label even in a well-trained GNN. This corresponds to degenerate states in an energy landscape, that is, states having the same energy but different configurations (read node features in this case). But these regions can have different local curvature and hence the nature of the gradient depends highly on the nature of the energy landscape. \n\nIn short, I think exploring the nature of the learned energy functional in terms of its landscape, minima, and saddle points, is crucial to develop an understanding of the limitations and usefulness of the approach. I hope I have been able to clarify my questions to the authors. I am not expecting any additional experiments. However, it'll be helpful if the authors can respond to these queries. \n\n> Concerning the experiments on homophily/heterophily...\nI understand the focus is on node classification. Thank you for the detailed explanation.",
" We have included here a table with further ablation studies regarding the performance of GRAFF with and without non-linear activations, along with comparing it with GCN (and the different steps to go from GCN to GRAFF).\n\n\nExperiment details: \n\n| dataset | **GCN** | **1) +enc/dec** | **2) +residual** | **3) share weight** | **4) W symmetric** | **5) GRAFF linear** | **6) GRAFF DD linear** | **7) GRAFF DD non linear** |\n|---|---|---|---|---|---|---|---|---|\n| **Chameleon** | 61.93 ± 1.96 | 62.19 ± 2 | 66.14 ± 1.62 | 65.94 ± 1.9 | 66.34 ± 2.27 | 66.8 ± 2.28 | 69.34 ± 1.6 | 69.21 ± 1.13 |\n| **Citeseer** | 70.92 ± 2.46 | 70.95 ± 2.73 | 71.45 ± 2.11 | 71.05 ± 1.59 | 70.18 ± 2.07 | 70.5 ± 2.32 | 71.21 ± 2.44 | 71.1 ± 1.3 |\n| **Cora** | 81.63 ± 1.17 | 80.76 ± 1.67 | 81.07 ± 1.67 | 81.09 ± 1.85 | 80.73 ± 1.56 | 80.22 ± 1.76 | 81.62 ± 1.33 | 80.81 ± 1.85 |\n| **Squirrel** | 40.51 ± 1.33 | 43.01 ± 1.8 | 45.31 ± 1.83 | 46.3 ± 1.46 | 46.46 ± 1.92 | 47.07 ± 1.76 | 52.42 ± 1.81 | 54.59 ± 1.53 |\n\nWe choose two homophilic real-world datasets Cora and Citeseer and two heterophilic datasets Chameleon and Squirrel, repeating the series of augmentations 1) add an encoder/decoder. 2) add a residual connection. 3) share the weights of $\\mathbf{W}$ and $\\boldsymbol{\\Omega}$ across time/layers. 4) symmetrize $\\mathbf{W}$ and $\\boldsymbol{\\Omega}$. 5) remove the non-linearity between layers, as described in the GCN ablation in SM Section D.3 to transition from GCN to GRAFF with a sum symmetric matrix. \n\nIn the table we note column “4 - $\\mathbf{W}$ symmetric” is equivalent to GRAFF non-linear with ReLU activation. In addition we try the version of GRAFF linear with diag-dom symmetric matrix (6) and a pointwise $\\tanh$ nonlinearity (7).\n\nTo give representative hyper-parameters for every data set we search over the space; lr {0.0001, 0.001, 0.005}, decay {0.0, 0.001, 0.005}, time {2, 3, 4}, step size {0.5, 1.0}, hidden dimension {64}. We take the best average performance of 10 splits, using the geom-gcn splits for the heterophilic datasets and random splits for the homophilic datasets consistent with the ablations in SM Section D.4.\n\nThe new ablations provide a more explicit comparison between GRAFF (in its linear gradient flow formulation) and non-linear baselines like GCN and GRAFF activated with pointwise non-linear maps. As claimed, there is no (significant) performance deterioration between linear GRAFF and non-linear baselines (in particular its non-linear activated version). We again emphasize the following:\n\n- The suppression of non-linear activations in the context of GNNs has already been studied in the highly popular SGCN reference [43].\n- Linear GRAFF is not equivalent to a single linear layer due to the residual connection. 
Indeed, we have explicitly derived in the SM (lines 790--795) that linear GRAFF after m layers corresponds to an m-degree polynomial with all powers of the normalized adjacency entering the polynomial expansion.\n- The full model has encoding and decoding blocks that can be implemented using MLPs, hence making the $\\textbf{whole map}$ $\\textbf{raw-features} \\rightarrow \\textbf{labels}$ generally $\\textbf{non-linear}$.\n\n\nA final important point in connection to the new theory we have added in the revised document (Proposition 3.2 line (199) and Section E in the SM): in principle, we have shown that one can activate the linear gradient flow using many common non-linear maps (ReLU, $\\arctan$, $\\tanh$) while preserving the physics inspired interpretation of the channel-mixing $\\textbf{W}$ as a potential inducing attraction and repulsion along edges. Therefore, one can fully $\\textbf{leverage the}$ $\\textbf{expressive power}$ $\\textbf{of non-linear maps in this multi-particle energy framework}$. The experiments here simply confirm that on the benchmarks commonly used by all the recent references we compared with in the context of heterophilic graphs (as amply argued in the previous general responses) there is no significant performance drop by removing the activation.\n\n\nOnce again, we are happy to engage in discussion and clarify any standing doubt.",
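A quick numerical check of the residual-expansion claim above, under a simplified single-term update of our own choosing (F ← F + τ S F W with a shared symmetric W; the full GRAFF equation also carries an Ω term and encoder/decoder blocks): m residual layers expand exactly into a degree-m polynomial containing every power S^k for k = 0, ..., m.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(4)
n, d, m, tau = 10, 4, 5, 0.3

S = rng.normal(size=(n, n)); S = (S + S.T) / 2    # stand-in for the normalized adjacency
W = rng.normal(size=(d, d)); W = (W + W.T) / 2    # shared symmetric channel mixing
F0 = rng.normal(size=(n, d))

# m residual layers: F <- F + tau * S @ F @ W  (simplified linear GRAFF step).
F = F0.copy()
for _ in range(m):
    F = F + tau * S @ F @ W

# Binomial expansion: every power S^k with 0 <= k <= m enters the polynomial.
F_poly = sum(comb(m, k) * tau**k
             * np.linalg.matrix_power(S, k) @ F0 @ np.linalg.matrix_power(W, k)
             for k in range(m + 1))
print(np.allclose(F, F_poly))   # True: m layers = degree-m polynomial in S
```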
" \"What component(s) of GRAFF explain the inference speedup vis-à-vis GCN? Is it due to parameter sharing? Also, no details are given as to how this comparison was decided. Right now I cannot be sure that the comparison is apples to apples; maybe the GCN is a really massive model and the GRAFF is much smaller. More details on this would be great.\nWhy just compare inference time? What about comparison of training time? It seems remiss not to include this, especially since the main paper simply mentions “run-time smaller than GCN” (line 359).\"\n\nWe answer both points here. The speed up is $\\textbf{provable}$ and mainly due to the fact that the initial projections from higher dimensional raw features to smaller hidden dimension is done node-wise rather than edge-wise (this was argued in our complexity paragraph on line 306). Weight-sharing helps but it is secondary to the first point highlighted here. Concerning experiments on Figure 5 in SM we compare GCN and GRAFF with same hidden dimension (that is the x-axis of Figure 5) so we believe it is a $\\textbf{fair comparison}$ by definition.\n\n\nWe hope we have addressed all your concerns -- especially regarding $\\textbf{theoretical analysis with activations}$ and the comparison with the many baselines and recent papers addressing $\\textbf{the same problem}$ $\\textbf{on the same}$ $\\textbf{benchmarks}$ -- and we would appreciate if you raise the score; otherwise let us know of any other doubt/question and we are happy to address them in the discussion period.\n\n",
" \"The paper claims several times that GRAFF models are “explainable”. The basis for this is that the model predictions can be understood by probing property of the energy functional. While this may turn out to be a useful point, the paper does not properly substantiate the claim. Indeed, there are no examples of any such “explanation” in practice. I would ask the author to either drop the “explainable” claim entirely (which isn’t critical in any-case, despite it’s prominent position in the explanation of ”why a gradient flow?”) or to clearly substantiate it, probably via an example.\"\n\nWe agree that the term \"explainable\" (almost as in any case) is a bit ambiguous and in the revised version has been removed when redundant/unnecessary. We think that part of it though has been substantiated in the synthetic experiments where we could test how controlling specifically the spectrum of the channel-mixing affects the smoothness (homophily) of the prediction precisely as indicated by our theory. In this regard, we feel this is the `example’ you may be alluding here. More generally, explainable here is mostly relative to existing GNNs for which it is much harder to investigate a posteriori what is happening. For example, in a gradient flow framework one could analyze the spectrum of the learned channel-mixing and have an idea of whether the dynamics is going to be mostly smoothing (LFD) or sharpening (HFD), $\\textbf{can something similar be said so easily otherwise}$? Explicitly, say we approximate the spectral radius of the graph Laplacian by 3/2 (if it is larger, then it is even easier): if the most negative eigenvalue of the learned W is larger than twice the most positive one, then we are certain (mathematically) that the dynamics is going to be sharpening and concentrate more on the high frequencies. Viceversa, if the most positive eigenvalue is larger than the most negative, then we are certain we are going to have a smoothing dynamics (LFD). \n\n\"Experimental evaluation is fairly limited. \"\n\nWe have thoroughly addressed this in the general response.\n\n\"However, the heterophilic graphs are very small: half only have a few hundred nodes, and the biggest graph considered—“Films”- has 7,600 nodes, and an MLP is a fairly competitive baseline on Films. All this means that the possible benefits to empirical methodology in the immediate future from seem unclear.\"\n\nThis is not entirely accurate. Even with homophily, we are extremely close to the \"top performant one\" (if there is actually one in a meaningful way). Also, about immediate future applications, this holds for almost any GNN paper that focuses on node classification and even more so for the much more involved (and slower) baselines we compared with that were specifically designed to target heterophily. We think again this is not a fair criticism to us given all papers that have been published (see the list in the rebuttal). It cannot be a criticism to a faster and `simpler’ model to be competitive with more complicated ones where the latter use $\\textbf{the very same benchmarks}$. Again, our paper though is mostly focussing on understanding theoretically modules like channel-mixing from this multi-particle point of view, introducing formulations as LFD and HFD to discuss expressive power and corroborate this with ablation. \n\n\"the key point in both cases is that the models are high-frequency dominant, the rate itself seems to be more of an intermediate step towards this final HFD conclusion. 
Maybe it is a matter of personal taste but I would have hidden the gory details I the appendix\"\n\nSomehow the point is about HFD but the convergence rate is important too. Something that usually is lost in standard references about asymptotic analysis is what the convergece rate (i.e. the second fastest term) is. In our analysis we provide explicit characterizations for that showing how it depends on the gaps of the spectra of the Laplacian and of the channel-mixing. This could potentially lead to better designs: for example, we want to slow down the HFD behaviour, then we need closer eigenvalues in the W-spectrum, and viceversa if we want to speed that up. To simplify a bit the discussion we have removed the explicit formula for $\\epsilon_{\\mathrm{HFD}}$ and reported in the SM, see also general revision comments above.\n\n\n",
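The a-posteriori spectral check described in this response is easy to script. Below is a sketch paraphrasing the stated sufficient conditions, with the Laplacian spectral radius approximated by 3/2 as in the text; the precise statements (and constants) are in the paper's propositions, so treat this as illustrative.

```python
import numpy as np

def predict_dynamics(W):
    """Classify a trained channel-mixing matrix as smoothing (LFD) or
    sharpening (HFD) from its extreme eigenvalues, assuming the graph
    Laplacian's spectral radius is bounded by 3/2 as in the text."""
    w = np.linalg.eigvalsh((W + W.T) / 2)     # W is symmetric by construction
    most_negative, most_positive = -w[0], w[-1]
    if most_negative > 2 * most_positive:
        return "HFD: sharpening, dominated by high graph frequencies"
    if most_positive > most_negative:
        return "LFD: smoothing, dominated by low graph frequencies"
    return "undetermined by this sufficient condition"

print(predict_dynamics(np.diag([-2.0, 0.5])))   # HFD
print(predict_dynamics(np.diag([1.0, -0.5])))   # LFD
```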
" We thank the reviewer for the very detailed feedback – which has inspired some further theoretical investigation on our side as reported in the general response and below – and for liking our paper. We encourage the reviewer to first consider reading the general response above where we touch many crucial points and we also list all the revisions already made to the main file and the SM. Below we address each specific point in detail, reporting $\\textbf{your comments in quotations}$.\n\n \"It is quite unfortunate, however unfortunately typical, that the analysis doesn’t extent to non-linear activations (line 201).\"\n\nTo address this point, we have extended some partial analysis to the non-linear case and have added Proposition 3.2(line 199) and Lemma E.2 (line 978) to further clarify how even with pointwise activations we can maintain the duality between attraction and repulsion induced by the spectrum of the channel-mixing matrix $\\mathbf{W}$ interpreted as a bilinear potential. This deserves further (but more involved) discussion that for the time being is beyond the purpose of this submission but $\\textbf{we note that this is novel}$, since differently from previous works we are not simply considering the classical Dirichlet energy with respect to ReLU but instead a more general energy (wrt a more general class of activations) that can also magnify the high-frequency components.\n\n\"This raises a number of questions: are these models then of comparable expressive power to linear GNNs?\" \n\nWe agree that this is an interesting point. So a few comments:\n- In some way, one may argue that the LFD/HFD characterizations introduced in our submission are already a measure for expressive power and that our linear framework minimizing a quadratic energy is able to generate both and so it is expressive in that sense.\n- We have addressed the expressive power concern in the general response but we reiterate the response here as well. In terms of linear gradient flow framework, note that we are residual meaning that the resulting polynomial has any term $\\bar{\\mathbf{A}}^{k}$ for $0 \\leq k \\leq m$, with $m$ number of layers (as explained in the new line 239 and the equation(27) line 792 in the SM) and indeed, our framework is not equivalent to collapsing the layers into a single one as SGCN given that we are residual. From a continuous point of view, the differential equation is linear but not the solution (which in fact will be exponential). We have also extended our theory to include pointwise non-linear activations that are guaranteed to still make the energy decrease along the solution; we hope that this addresses your concerns.\n- We also note that the source term may also lead to resonance effects and the various works [41,9,36] all explore – one way or another – this resonance phenomenon leading to better expressive power. We highlight how the energy approach can handle source terms as well so all the benefits apply to our setting too.\n\n\"Given that having no non-linearity doesn’t hurt performance does this just suggest that the empirical benchmarks considered just aren’t that challenging?\"\n\nThe point about benchmarks has been thoroughly addressed in the general response that we invite you to consider. A further point we would like to emphasize here. The benchmarks are the same and only ones used by baselines (see general response) that are extremely more sophisticated/involved (and much slower) than ours and specifically designed to get to those same numbers as ours. 
It seems a bit unfair that we `have to pay a price' for showing that a residual linear model that is a gradient flow can match those performances, given that the main purpose of our work is to understand GNNs and the role of the channel-mixing in a new light, rather than to propose a new architecture. In any case, we are further checking the role of pointwise non-linear activations that preserve energy monotonicity, as argued above, and we will come back with results later. \n\n",
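As a concrete illustration of the "residual, hence not collapsible" point above, here is a minimal numerical sketch (our own toy example; all names, shapes, and constants are hypothetical) contrasting residual gradient-flow-style updates with an SGCN-style collapsed propagation:

```python
import numpy as np

n, d, m, tau = 50, 4, 8, 0.2          # toy sizes: nodes, channels, layers, step size
rng = np.random.default_rng(0)
A_bar = rng.standard_normal((n, n)); A_bar = (A_bar + A_bar.T) / n  # symmetric graph operator
W = rng.standard_normal((d, d)); W = (W + W.T) / 2                  # symmetric channel mixing
F = rng.standard_normal((n, d))                                     # node features

# Residual (forward Euler) updates: F <- F + tau * A_bar @ F @ W, applied m times.
F_res = F.copy()
for _ in range(m):
    F_res = F_res + tau * A_bar @ F_res @ W
# Expanding the recursion gives sum_k binom(m,k) * tau^k * A_bar^k @ F @ W^k:
# every power A_bar^k with 0 <= k <= m appears (a degree-m matrix polynomial).

# SGCN-style collapsed propagation keeps only the single power A_bar^m:
F_sgc = np.linalg.matrix_power(A_bar, m) @ F @ np.linalg.matrix_power(W, m)

print(np.linalg.norm(F_res - F_sgc))  # generically nonzero: the two models differ
```

As the number of steps grows with a shrinking step size, the residual recursion approaches the matrix exponential of the linear operator, which is exactly why the linear ODE does not reduce to one linear layer.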
" We thank the reviewer for their feedback and for believing that in our paper we \"give a new perspective on GNN in terms of the particle system, which explain why the original GNN does not work well on heterophilic datasets and also analysis Dirichlet energy change in the dynamic system\" and for thinking that \"the whole paper's structure is clear and easy to follow and that several adequate experiments are used to verify the author's statement\".\n\nWe kindly ask the reviewer to check the general response before where we address several important points. Below, we further reply to each specific comment raised in this review. We report your $\\textbf{feedback/comments in quotations}$.\n\n\"As this model needs to compute the eigendecomposition of the graph Laplacian, when the graph size is increasing, it should be hard to compute.\"\n\nThere is a $\\textbf{misunderstanding}$ here that we hope to clarify. We $\\textbf{never}$ have to $\\textbf{compute}$ the $\\textbf{eigendecomposition of the graph Laplacian}$ or not even an approximate one and we are not sure which line of the paper was misleading about this point. The $\\textbf{eigenvectors are used}$ from a $\\textbf{theoretical level only}$ to derive results. The framework acts as a $\\textbf{classical message passing}$ using the sparsity of the input graph and indeed $\\textbf{it is as fast as classical GCN}$ (as reported in Figure 5, line 950 in the SM) and much faster than spectral methods.\n\n\"If two particle (two nodes) is repulsive to each other, will both feature blow up (going to infinity) as the time of the dynamic system increases? How can you ensure all the feature on every node is bounded with the system is evolving?\"\n\nThis is an interesting question. The linear dynamics is in principle unbounded – albeit there are non-linear ways to prevent solution to blow-up (see for example the form of the non-linear gradient flow in Section B.3 of the SM). However, the fact that the dynamical system is unbounded is not intrinsically bad for a variety of reasons:\n- the problem of unbounded dynamics only emerges as one approaches the limit of infinite layers (i.e. infinite integration time) and of course the system is always well-defined for each finite time\n- On a heterophilic graph, separating adjacent particles that are `opposite’ to each other $\\textbf{fast}$ can be beneficial and this could be achieved thanks to the negative eigenvalues of the channel mixing which induce repulsion.\n- Even simple models like (S)GCN can be unbounded. In fact, the norm of the solution can blow up (again in the infinite time limit) as soon as some eigenvalue of the channel mixing has absolute value larger than one. We also invite you to read the new paragraph in line 278 added in our revised submission.\n - Note also that the pre-image of the one-hot encoding of the softmax in fact is by design never bounded.\n\nSince the $\\textbf{only weakness was based on the wrong premise}$ that we hope we have addressed (once again we never compute eigendecomposition of graph Laplacian, our model is a sparse MPNN like GCN), we'd appreciate if you raise your score -- otherwise let us know any other specific concerns and we're happy to address them in the discussion period.\n",
" We thank the reviewer for their feedback and for finding that the work is \"clearly written and well presented\". We encourage the reviewer to first see our general response above. Below we address each specific point raised in the review more in detail.\n\n$\\textbf{Your comments are reported in quotations}$. \n\n\"While the presentation in the work is good, the idea in itself is fairly intuitive and simple and has been discussed in contexts of neural networks and physics (related works, see: Landscape and training regimes in deep learning)\"\n\nWhile we agree that the idea of gradient flows is not new and in fact dates back by centuries, we have not seen these approaches applied in the context of Graph Neural Networks. In particular, although the reference is very interesting and has already been added in our revised version, we think that there might be some $\\textbf{misunderstanding}$ here that we hope to clarify. We are not applying energy arguments to understanding the loss – our discussion is $\\textbf{not about the loss}$. The energy is a functional of the node representations that will determine how the nodes get updated in a GNN. We do not see any strong comparison with the reference here and most importantly we do not see why \"the energy-based formulation with simpler formulation loses the advantages of the landscape that a deep-learning architecture has\". We $\\textbf{cannot lose any advantage}$ given by $\\textbf{deep-learning}$ architectures since $\\textbf{our framework is a deep learning}$ one: this is a point that seems to have been misunderstood. We are not changing the loss and indeed our framework has the same advantages as other deep learning frameworks on graphs like GCN in terms of loss landscape.\n\n\nTo further emphasize the (possible) point of confusion: the reviewer said in addition \"a closer analysis on the loss landscape is required to understand the nature of the minima and saddle points\". Our energy formulation is not about the loss, it is the energy that regulates the update equations (i.e. the forward) we are not modifying the backward pass. Concerning the experiments on homophily/heterophily, this is a standard practice that has been used exactly as for our work by [47], [4], [5], [42], [11], [15], [28], [37], [45]. We invite you to see our general point about benchmarks in the general response section.\n\n\nSince the $\\textbf{main (only) weakness raised seemed to be based on a wrong premise}$ that hopefully we have managed to clarify how it does not apply to our work, we'd appreciate if you raise your score -- otherwise let us know any other specific concerns and we're happy to address them in the discussion period.\n\n",
" \"The authors state that linear GNNs achieve competitive performance on real-world data sets. Again, having linear layers negates the whole concept of NNs. This cannot be a conclusion that holds for all data sets and tasks. Given the rather limited scope of experiments, I would say that this is a too strong statement here\"\n\nPlease see our general response about non-linearities. We did not state this to be the case on all real world datasets but on 9 datasets (with different levels of homophily) that the community has been using for a while now (see all the baselines). Also, very popular papers like SGCN [40] have already argued that removing non-linear activations in a GNN is not always bad.\n\n\"The authors choose W to be diagonal and random and do not train over it. Why is that? Won’t we get better results if we train over W? This means that there are no non-linearities in the network, and there are no learnable channel-mixing parameters. So, except the encoding and decoding layers, it’s essentially a classical algorithm. How do the authors explain that?\"\n\nThis is just one of the baselines, the simplest one that satisfies our framework and is indeed not better performant than the one where we train the channel-mixing over the majority of datasets. We are not claiming that this is the best one and in fact did not observe/report to be the best one. It was more a conceptual experiment to test the lightest possible instance of our framework where we simply ask the encoding and decoding to learn to re-align the features based on graph and random diagonal attraction/repulsion uniformly sampled. We find it surprising that this is enough on the smaller heterophilic graphs. \n\n\"How does the choice of a random W influence the results? Are the achieved accuracies stable? Or do the authors see large deviations between different runs?\"\n\nWe did not observe large deviations, but again we emphasize that this is just a conceptual experiment. We are not claiming that avoiding to learn the channel-mixing is always helpful, which is why we have reported experiments with the dense, learnable, diagonally dominant channel-mixing configuration that perform better. \n\n\"Line 375. This is yet another strong sentence, given the rather limited experimental study in the paper.\"\n\nOur experiments are in line with most recent papers studying heterophily. The complex benchmarks we used are the heterophilic graphs that recent papers specifically targeting this issue have been considering, see [47], [4], [5], [42], [11], [15], [28], [37], [45] and general response.\n\n\n$\\textbf{Important conclusion}$\n\nWe hope that we have addressed all points and that the reviewer will let us know of any outstanding doubt. 
We kindly ask the reviewer to reconsider the score, based on both the general response and our replies, given that \n- most of the questions were already addressed in the SM;\n- this is a theoretical work that has still shown competitive performance w.r.t. much more complicated architectures, despite empirical evaluation not being its focus;\n- we believe the questions about non-linear maps have been addressed in the context of the energy framework from a theoretical standpoint in the general response (see the new Section E in the SM and the new Proposition 3.2 in line 199);\n- we have provided ample evidence that papers in this community interested in the heterophily setting and frequency response have used the very same benchmarks we adopted, so that, in fairness, the choice of benchmarks cannot be a genuine reason for criticism.\n",
" \"Line 230 – Why are Omega and W shared across the layers? Traditionally, at least in CNNs, one learns a variety of channel-mixing convolution operators. What is different here? This negates the common practice of neural networks and requires explanation and evaluation.\"\n\nThe sharing across layers is due to the fact that these symmetric matrices represent the potentials in the energy and accordingly are taken to be time-independent, otherwise one would not be able to conclude that the energy is being minimized along the GNN. We will emphasize this point better, but this is again a choice due to the physics model by which we are inspired. The end goal is using this framework to investigate the role and importance of the channel-mixing matrices. This is checked in Table 1 line 347 where the last row GRAFF -timedep(DD) is a variant of our framework where we do not share weights across layers. As you can see this is not better and in fact, sometimes marginally worse despite the larger number of parameters.\n\n\"The authors have T, tau, and the number of layers. How do we choose T or tau? Is it a hyperparameter? Do the authors choose tau to be small enough to ensure the stability of the Euler scheme?\"\n\nWhen discretizing the differential equations using explicit (forward Euler) scheme with fixed step size tau and integration time T, the ratio L=T/tau corresponds to the number of layers (see l. 235-236) and it is a hyperparameter as for standard GNNs. We refer the Reviewer to Section D5 in the SM where we have reported all the hyperparameters, including integration time T and step size tau for all the nine datasets.\n\n\"The authors state that they can include the non-linearities in the encoder and decoder layers only. Will that be the equivalent to having a non-linearity in Eq. (12)? I do not understand why. Again, that is against the fundamental aspect of neural networks – the use of non-linear activations to express complex functions.\"\n\nThe non-linearities can be used in either the encoding map or the decoding one or both. This is not equivalent to having a non-linearity in equation (11), as typically done in GCN, and we did not state that. We show an architecture implementing a nonlinear map from raw node features to node labels, where the only nonlinearity is in node-wise encoder/decoder, whereas the graph propagation part is linear (a discretized differential equation, without layer-wise nonlinearity). Our experiments suggest such an architecture can perform very well (on par with much more complex SOTA architectures, including heterophilic datasets). Please see our general reply about non-linearity too and especially the point where we discuss how – based on your feedback and the Reviewer $\\textcolor{red}{PPbd} – we have introduced a new theoretical justification for using pointwise non-linearities (Proposition 3.1 in line 199) in this energy framework.\n\n\"Symmetry being a key requirement. When other works learn the channel-mixing operators, they do not enforce symmetry. So – this is important only for looking at architectures as a gradient flow. Further, in lines 247-248: indeed (13) can be seen as a generalization of the mentioned methods (with identity activation), but none of these learn symmetric matrices. Also, as the authors note, GAT does not have a symmetric attention matrix. So, is symmetry really important? 
Will we get better networks if we enforce symmetry in the learning?\"\n\nAs the Reviewer correctly notes, symmetry is a constraint under which the architecture is a gradient flow minimizing a well-understood energy, making it more interpretable and allowing for more educated architectural choices. Our paper is mostly focused on providing an understanding of the channel-mixing in terms of attraction and repulsion; note, however, that one could possibly leverage our insight to better initialize a given channel-mixing (in terms of eigenvalues, for example), which was beyond the scope of our paper. We did not show that enforcing symmetry always leads to better networks. GAT does not have a symmetric matrix and in fact GAT cannot be interpreted as a gradient flow: if you think about physics and dynamical systems in general, not all differential equations are gradient flows of an energy, but many important ones, especially those coming from physics, are. Most importantly, GAT is one of our baselines and is consistently beaten by a large margin when the graph is heterophilic. This shows that removing the symmetry constraint – even if more general in principle – is not necessarily helpful. Furthermore, we refer the Reviewer to the ablation studies in the SM that thoroughly investigate what we gain/lose by enforcing symmetry vs a standard (non-linear) GCN (see Figure 3 in the SM).\n\n\n\n",
" \"Missing citation from ICLR 2022: How attentive are graph attention networks? (GATv2) The conclusion of this paper is that having a more non-linear (in some sense) attention matrix improves the accuracy and training stability over GAT. How does the conclusion in this work align with the findings in GATv2? Note that there are additional data sets in GATv2, which may be more challenging and require non-linearities in the layers\"\n\nWe thank the reviewer for pointing the citation to us that has been added. Concerning the role of non-linear maps, please refer to our general point above and the new theoretical paragraph in line 194. About GATV2: their findings apply to the attention mechanism and in particular to the collapsing into a single layer of the query projection and head attention one. This is not equivalent to what we do since in our framework (as explained in the new line 239 and the equation(27) line 792 in the SM). We refer though to how we can include non-linear activations too by maintaining the interpretation as discussed in the general response. Note that we are not studying an attention framework and we are not stating that non-linearities are not needed. \n\n$\\textbf{Questions}$\n\n\"The gradient of Eq (4) looks like Grad^TW^TWGrad(F). Why is Eq (5) the way it is? Please clarify in the text.\"\n\nThis is proved in the SM, see line 717-719 and the review about Kronecker product in Section A.2.\n\n\"I am confused by the derivation from (6) to (7). First, if the rest of the paper uses the energies in (7), why introduce the energies in (4)-(6)? How do (6) and (7) relate? This is not clear. More importantly, is it the same W in both equations? It does not seem so. This is very confusing. Please consider revising.\"\n\nThe introduction of the energy in (4)-(6) is pedagogical since we show how one could easily generalize approaches that were used with success in both geometry and image processing to graphs. However, we also show in Proposition 2.4 that such framework would always lead to a smoothing process (and over-smoothing in the asymptotic limit if the kernel of H is zero) that may be not desirable to deal with heterophily. Accordingly, we define a more general energy in equation (7) where we have now two matrices Omega and W not necessarily equal. This is a more general energy and does not have to be equal to the previous one. Based on your feedback, we have rephrased this part more explicitly in the revised version, let us know if this reads better now (lines 161--163).\n\n\"Line 202-203: Are the authors saying that there is no role for the non-linearities in graph neural networks? But that is the most important aspect of a neural network’s definition (otherwise, the whole network collapses to a single linear operator). It does not make sense. Maybe try other experiments? CNNs, for example, sure require non-linearities for image classification. Maybe test this hypothesis on graph classification? Maybe shape classification (ModelNet40)?\"\n\nNo, we simply state that we did not find large improvements in our framework by making it non-linear given that this always has a price in terms of speed and interpretation. In any case, inspired by this point, we have added a reply in the general response. We have now a new theoretical justification for using activation functions like (tanh, ReLU, arctan..) (see Proposition 3.2 line 199) in our framework without losing the interpretation given by the channel mixing matrix W in terms of attraction and repulsion. 
We are running further tests and ablations. Note again that, differently from SGCN, which removes non-linear maps and collapses the GNN to a single layer, our framework is not equivalent to a single linear layer, since we have a residual connection (as explained in the new line 239 and equation (27), line 792, in the SM).\n\n\"Line 209: by linearized GNNs, do the authors mean with identity activation? Because there are no activations in the following equations. But - this is not the traditional use of the word \"linearized\", which is traditionally used for a Taylor approximation. Please revise. \"\n\nYes, that is correct. We emphasize again, though, that the differential equation is linear (i.e. we have a residual connection) but the solution is not a single linear layer (indeed, in the discrete case we are approximating a matrix exponential solution).\n\n\"Line 228 – why do the authors introduce tilde{W}, and then set it as identity? Can’t tilde{W} be chosen better? And if so - why introduce this matrix?\"\n\nThe reason for introducing \tilde{W} is to show that this can be done in the gradient flow framework, i.e. the source term induced by \tilde{W} can be derived from a source term at the energy level. When evaluating the framework, we wanted to keep the number of parameters low, so we decided to choose \tilde{W} = I. In principle, other choices could be better; however, our paper is $\textbf{not about fine-tuning a specific model}$ to get a marginal improvement over SOTA.\n\n\n\n",
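To complement the pointer above on the gradient computation, here is a generic vectorization identity (our own schematic, with symmetric placeholder matrices $\Delta$ and $M$; the paper's Section A.2 gives the precise statement for its equations (4)-(5)):

```latex
\mathcal{E}(F)
  = \tfrac{1}{2}\,\operatorname{vec}(F)^\top (M \otimes \Delta)\,\operatorname{vec}(F)
  = \tfrac{1}{2}\,\operatorname{tr}\!\left(F^\top \Delta\, F M\right)
\quad\Longrightarrow\quad
\nabla_F\, \mathcal{E}(F) = \Delta\, F M .
```

With $M = W^\top W$, the channel-mixing therefore enters once, on the right of $F$, rather than sandwiching the graph gradient, which is the shape the reviewer asked about.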
" \n\"At the end, the authors do not show if their method is over-smoothing or not. Ideally, the authors would provide accuracies for a variety of number of layers and show that the accuracy does not degrade (e.g., see GCNII)\"\n\nThis is a very important point we would like to clarify, which is also one of the key messages of our paper. We believe that there has been some confusion in the literature describing over-smoothing as a degradation of performance. Clarifying this somewhat vaguely used concept is a reason why we proposed a formal definition for it (line 117; see also [7] and [31]). Note for example, that even a dynamics that by design magnifies high frequencies (i.e. the opposite of over-smoothing) like $\\dot{\\mathbf{F}}(t) = \\boldsymbol{\\Delta}\\mathbf{F}(t)$ can lead to extreme performance degradation if the underlying graph is homophilic for example. To further emphasize why Theorem 4.3 is in fact a more accurate and general depiction of over-smoothing, we have added a new paragraph in line 278 which we invite you to read. Having said that:\n\n- Experimentally we use prediction homophily pre and post diffusion as a measure of oversmoothing. This is much more aligned with the actual problem of over-smoothing as in `smoothing too much’ so that the predicted homophily is (much) higher than the true one. We have fully confirmed in the ablation studies (line 314) that negative eigenvalues of the channel mixing matrix induce repulsion and indeed lead to generally low homophily predictions i.e. they cannot oversmooth as proven rigorously in Proposition 3.1 (line 181) for the continuous case and Theorem 4.3 (line 260) for the discrete case. \n- In general, heterophilic graphs are much more challenging in terms of going deeper and GCNII for example only ran on homophilic ones. Indeed, you can see from our Table 1 that GCNII generally seems to suffer on heterophilic graphs. The fact that you can go deeper does not necessarily help. On the other hand, we have also proved in Theorem 4.3 that over-smoothing (as LFD dynamics characterized in our paper) cannot be avoided if you remove residual connections which again is the reason why GCNII generally works better than GCN. \n\n\"There are questionable choices for the architecture (e.g., no non-linearities) that are accompanied by too strong statements without backing them up. See details in the questions section. In particular, the authors do not show that indeed adding and removing the non-linearity has no influence on the accuracy\"\n\n\nThis point has been addressed in the general response. We further emphasize here the following:\n- We have proved in the new Proposition 3.2 (line 199) that one can have standard non-linear activations acting pointwise on top of our framework and still maintain the interpretation of energy decreasing along the solution. Further details in the new Section E of the SM.\n- All our baselines are effectively non-linear (we use nonlinear encoder/decoder, and only the graph propagation is linear), so a substantial comparison with non-linear GNN models is in fact already provided.\n\n\"The writing of the paper is hard to follow. I would say that the presentation (i.e., notation and language) can be simplified to make this paper more reader-friendly\"\n\n\nWe thank the reviewer for the feedback, but would appreciate specific suggestions of what we might improve. 
We also note that other Reviewers were happy with our writing, and that we included a preliminary notation paragraph at the beginning of Section 2, where we introduced most conventions and notations used throughout the paper to help the reader. We have already simplified the notation/discussion in a few places, as detailed in the general response about the revised versions.\n\n\"I could not find how many layers were used in Table 1.\"\n\nSection D5 in the SM has the hyperparameter choices for all the datasets, including the integration time and step size, from which the number of layers used can be inferred.\n\n\"Dirichlet energy constrained learning for deep graph neural networks. Missing citation.\"\n\nWe thank the reviewer for pointing us to this interesting reference, which has already been added. A few minor comments:\n\n- The claim that the Dirichlet energy going to infinity may lead to over-separation is not accurate, and this can be explained thanks to the new characterization of LFD dynamics we have introduced. Please consider the new paragraph in line 278 explaining why our Theorem 4.3 provides a better characterization of smoothing dynamics than simply looking at the Dirichlet energy (rather than using our LFD characterization). \n- In connection with our general point about benchmarks, we would like to emphasize that the given reference tested $\textbf{only}$ node classification $\textbf{tasks on homophilic datasets}$, further supporting the point that the benchmarks considered “challenging” are either large (homophilic) graphs or (smaller) heterophilic graphs. \n\n\n\n\n",
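For readers who want to probe smoothing empirically, here is a minimal sketch of a degree-normalized Dirichlet-energy probe of the kind mentioned above (our own illustration; the function name and normalization choices are ours, and tracking the energy of the normalized features across layers is what discriminates low- from high-frequency-dominant behaviour):

```python
import numpy as np

def dirichlet_energy(A, F):
    """0.5 * sum_{i,j} A_ij * || F_i/sqrt(d_i) - F_j/sqrt(d_j) ||^2 (dense toy version)."""
    deg = A.sum(axis=1)                      # node degrees (assumed positive)
    Fn = F / np.sqrt(deg)[:, None]           # degree-normalized features
    diff = Fn[:, None, :] - Fn[None, :, :]   # (n, n, d) pairwise differences
    return 0.5 * float(np.sum(A[:, :, None] * diff ** 2))

# Usage sketch: evaluate on norm-normalized features layer by layer.
# Energies tending to 0 suggest a low-frequency-dominant (over-smoothing) regime;
# energies growing toward the top of the Laplacian spectrum suggest HFD behaviour.
# for F in features_per_layer:               # hypothetical list of layer outputs
#     print(dirichlet_energy(A, F / np.linalg.norm(F)))
```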
" We thank the reviewer for their feedback and for finding our paper \"rich in theoretical insights\", and the first to \"analyze the channel mixing matrix\". We kindly ask to first check our $\\textbf{general response}$. Below we address each point raised separately and more in detail. $\\textbf{We report your comment in quotes first}$: note that partly as a consequence of that the response will be relatively long. \n\n\"The experiments are rather limited. The authors show only node classification, where it is customary to show more experiments. The work of GCNII, for example, shows PPI, and two cases of node classification (semi and fully supervised). Also, an example on a large dataset (e.g., OGBN-Arxiv) is also important. Most importantly, it is not clear how well the method works on graph classification tasks (e.g., the TUD data sets), without the non-linearities in the layers. Given the strong claims made by the authors regarding those non-linearities (see below), showing these experiments is essential in my opinion.\"\n\nThe point about experiments and benchmarks has been addressed in the general response. We would like to reiterate that:\n\n- Works like GCNII or the suggested reference about constrained Dirichlet energy do $\\textbf{not}$ investigate GNN performance on heterophilic datasets (the homophily of ogbn-arxiv is $\\textbf{0.80}$)\n- Generally, works that are mostly interested in frequency response (smoothing vs sharpening effect) – as the ones listed in the general response – never test on large graphs and only focus on node-classification (homophily and frequency response are not very meaningful for graph-level tasks, nor are there such established specific benchmarks) with the inclusion of heterophilic baselines.\n- The experimental evaluation we report follows that used in most recent papers studying GNNs in heterophilic settings again see [47], [4], [5], [42], [11], [15], [28], [37], [45] as discussed in the general response.\n- Finally, our work – as acknowledged – is indeed mostly theoretical and provides an understanding of common elements of GNN models. Our extensive synthetic experiments and ablation studies fully support our theory. \n\n\"I find only a single data set where GRAFF yields the best performance. Overall, this method does not improve the SOTA.\"\n\nWe respectfully disagree with this comment and what this might entail. First, the performance of the top k models (usually k ~ 3) on almost all datasets is $\\textbf{extremely close}$ and even defining what ‘SOTA’ means here is a gray area. Second, our paper describes a theoretical framework allowing to design more interpretable GNN architectures (in the sense that they minimize a well-understood energy) and make more educated architectural choices. In particular, we show both theoretically and experimentally that very simple architectures (linear residual GCN with shared symmetric layer parameters) can perform in heterophilic settings on par with much more complex SOTA models -- which is an interesting finding.\nWe also emphasize an $\\textbf{important point}$: in this work there is no separation between the theory section and the implementation, meaning that the model tested is precisely a discretized gradient flow (we have even removed intra-layer dropout to be as close as possible to the theoretical equations). \n\n\"There is very little discussion about over-smoothing in this work (only in the paragraph in line 249). At the end, the authors do not show if their method is over-smoothing or not. 
Ideally, the authors would provide accuracies for a variety of number of layers and show that the accuracy does not degrade (e.g., see GCNII).\"\n\n$\textbf{We find this comment a little worrying}$: line 249 (now line 255 in the revised version) refers to an $\textbf{entire paragraph including the main theorem}$ of our paper, which shows when and how the channel-mixing matrix has the power – thanks to its negative eigenvalues – to induce a high-frequency-dominant dynamics, which therefore avoids over-smoothing. To some extent, our whole work is about a better analysis of the smoothing and over-smoothing effects, in both the finite-time case (convergence rate) and the asymptotic one. Note that we even have a formal definition of over-smoothing in line 118. In fact, our theoretical analysis in Theorem 4.3 offers a further theoretical justification, in terms of the spectrum of the channel-mixing matrix, for why methods like GCNII that introduce a residual connection avoid over-smoothing. We kindly ask the Reviewer to read again the paragraph starting at line 255, along with the new paragraph at line 278, and tell us if there are any doubts about our smoothing/sharpening analysis. \n\n",
" This is a detailed list of the modifications to the main file and SM (both now appear in their revised versions) based on the reviewers' feedback:\n\n- New references [50] and [8] (suggested by $\\textcolor{blue}{13DY}$) and [6] (suggested by $\\textcolor{green}{4YFR}$) have been added\n- We have reformulated the introduction of $\\mathcal{E}^{\\mathrm{tot}}$ in lines (161--164) to address concerns about clairity raised from $\\textcolor{blue}{13DY}$.\n- Removed explicit formula for $\\epsilon_{\\mathrm{HFD}}$ and moved to the SM line 760 to reduce notations as suggested by $\\textcolor{red}{PPbd}$\n- The old paragraph about non-linear gradient flow has been moved to SM (line 780). Instead we have a $\\textbf{new paragraph}$ in line 194 containing the $\\textbf{new Proposition 3.2}$ about energy dissipation when using non-linear activation functions we considered based on feedback from both $\\textcolor{blue}{13DY}$ and $\\textcolor{red}{PPbd}$.\n- We have a new Section E in the SM to discuss non-linear activation functions applied to a GNN Gradient Flow dynamical system.\n- We have added two further bullet points in line 239 to explain why a linear discrete gradient flow is not equivalent to collapsing the MPNN into a single layer and that thanks to the new Proposition 3.2 we could also activate the equations with pointwise non-linear maps without losing most of the physics inspired interpretation.\n- We have removed in a few instances the word `explainable' where not essential or ambiguous as suggested by $\\textcolor{red}{PPbd}$ as for example in line 252.\n- We have moved the previous paragraph about the edge sign flipping to the SM to have a less packed discussion about the implications of our main Theorem 4.3\n- Added a new paragraph in line 278 commenting about the subtle but fundamental difference about over-smoothing and LFD in light of Theorem 4.3. Our contribution extends to unbounded channel-mixing spectral radius and shows that even though technically we are not over-smoothing we are still always LFD meaning that it is just a problem of global scale.",
" $\\textbf{New theoretical results concerning non-linear activations}$: In response to Reviewer $\\textcolor{red}{PPbd}$ question about expressive power and some comments by Reviewer $\\textcolor{blue}{13DY}$, we included $\\textbf{Proposition 3.2 line 199}$ and a new Section E in the SM investigating how a non-linear pointwise activation would fit this framework. In a nutshell: we prove that if we activate equations (11) (line 234) with a non-linear map $\\sigma$ belonging to a large class of functions (including common choices like ReLU, $\\arctan$, $\\tanh$..), then the learnable energy $\\mathcal{E}^{\\mathrm{tot}}$ is still $\\textbf{decreasing along the solution}$. This allows us to retain the interpretation of $\\textbf{W}$ as inducing attraction and repulsion since the energy has not changed and is still decreasing. We have also added $\\textbf{Lemma E.2 (line 978)}$ to check how in a simple diagonal case we maintain the same smoothing vs sharpening analysis. In principle then, we could have $\\textbf{non-linear activations and keep the}$ same $\\textbf{physics oriented approach}$ and interpretation where a learnable multi-particle energy is decreasing along the GNN. We believe this deserves further investigation and we reserve that for future work.\n\n$\\textbf{Concerning the level of challenge of the benchmarks}$: We selected baselines in Table 1 specifically designed to perform well on heterophilic graphs (and in fact note the ones like GAT, GRAND, CGNN for example that are not and suffer significantly on heterophilic graphs). We show we are extremely competitive with much slower and more sophisticated baselines despite a simpler framework (however we again emphasize that we are $\\textbf{not equivalent to a single linear layer}$ and that as $\\textbf{per the new Proposition 3.1}$ we could also $\\textbf{use non-linear activations}$). We restate the different baselines like GGCN, GPRGNN, H2GCN, Geom-GCN, Pair-Norm, Sheaf, all use the same task and same datasets for evaluation. A further important point concerning evaluation: in this work there is $\\textbf{no separation between the theory}$ section and $\\textbf{the implementation}$, meaning that the model tested is precisely a discretized gradient flow (we have even removed intra-layer dropout to be as close as possible to the theoretical equations). \n\n$\\textbf{Some important misunderstandings}$. A few general points reviewers raised as weak points that are misunderstandings we would like to clarify:\n- Lack of discussion on over-smoothing $\\textcolor{blue}{13DY}$: the whole paper is in some regard about the smoothing effect and how the channel-mixing is able to steer away diffusion from over-smoothing thanks to the negative eigenvalues as proved in the main Theorem 4.3. We have emphasized this point further in the revised version, as explained in the response below about modifications.\n- All points raised from $\\textcolor{blue}{13DY}$ about choice of integration time, step sizes and other hyperparameters as well as derivation of equations are $\\textbf{already addressed in the SM}$: please see detailed individual response. \n- It seems the review of $\\textcolor{green}{4YFR}$ raised as only/main weakness what we believe is a misunderstanding concerning what our energy is; they asked what we lose \"$\\textit{compared to deep learning frameworks}$\". 
We clarify with reviewer $\textcolor{green}{4YFR}$ that ours $\textbf{is a deep learning framework}$ and the parametrised energy we use concerns the $\textbf{forward pass and not the backward pass}$ w.r.t. the loss optimisation. See also the detailed response.\n- Reviewer $\textcolor{orange}{c28L}$ raised as a weakness that our framework may not scale to large graphs given that we would need to compute the graph Laplacian eigenvectors. This is $\textbf{not}$ the case. The eigendecomposition of the graph Laplacian $\textbf{is not required and the eigenvectors are only used in our theoretical analysis}$. In fact, our model is a sparse MPNN that is as fast as GCN and much faster than spectral methods (see Figure 5 in the SM for a runtime comparison with GCN using the same hidden dimension).\n\n",
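For the curious reader, one standard way an energy-monotonicity statement of the kind announced above can be argued (a sketch under our own simplifying assumptions; the paper's Proposition 3.2 may differ in its exact setting): activate the flow entrywise as $\dot F = \sigma(-\nabla\mathcal{E}(F))$ with $x\,\sigma(x) \ge 0$ pointwise, which holds for ReLU, $\tanh$, and $\arctan$. Then

```latex
\frac{d}{dt}\,\mathcal{E}\big(F(t)\big)
  = \big\langle \nabla\mathcal{E}(F),\, \dot F \big\rangle
  = -\sum_{k} u_k\, \sigma(u_k) \;\le\; 0,
\qquad u := -\nabla\mathcal{E}(F),
```

so the energy still decreases along the activated dynamics, and the attraction/repulsion reading of the channel-mixing spectrum survives the non-linearity.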
" We thank the reviewers for finding that our paper is `rich in theoretical insights’, a ‘nice piece of work that offers some new perspectives, and promising new directions’, that is ‘well-written and clearly presented’, and that the work ‘ contains some great conceptual components’. We address here a few general but $\\textbf{crucial}$ questions/doubts raised from the reviewers along with some $\\textbf{key misunderstandings}$ that we hope to clarify. We hope that the reviewers revisit their scores in the light of our response. \n\n\n$\\textbf{Important disclaimer}$: We have revised the main file and SM based on the feedback and the new theoretical results including non-linear activations. All references below to equations and lines refer to the $\\textit{revised version}$. Below you can also find a detailed and granular list with all modifications.\n\n$\\textbf{A few words on the goal}$: The main goal of our paper consists in studying a new framework where GNNs minimize an energy with emphasis on its theoretical implications. This allows us to study the role of the channel-mixing and provide theoretical results in terms of smoothing vs sharpening dynamics induced by its spectrum, convergence rate and asymptotic behaviour. In fact, we emphasize how our results are more granular and more explicit than classical over-smoothing ones currently available in literature [26,27,7] and differently from those fully explain the role of the residual connection from the spectral perspective of the channel-mixing. \n\n\n$\\textbf{Benchmarks and empirical evaluation}$: Despite our discussion is general, the main underlying problem we are interested in is the frequency response of the GNN with associated performance $\\textit{on heterophilic graphs}$, something that is becoming of increasing interest for the community. This is a list of references included in our paper that, for the great part, have the specific and unique goal of proposing models that work well with heterophily: [47], [4], [5], [42], [11], [15], [28], [37], [45]. $\\textbf{They all have experiments on node classification task only}$ and using the $\\textbf{very same datasets we have tested on}$ (some of the references on fewer datasets actually, while [45] has 2 extra homophilic datasets but 1 heterophilic dataset less). None of them test on $\\textbf{graph-level tasks}$ for which $\\textbf{the notions of homophily and frequency are less meaningful}$. Therefore we believe $\\textbf{our experiments are aligned with recent papers and baselines}$ that are interested in the same problem as ours; moreover, with the exception of [5] and [15], none of the papers above arguably shares the same theoretical flavour and analysis as ours. We also note how although papers like [41], [8], [36] propose node-classification tasks on larger graphs – usually a single one –, the latter are $\\textbf{homophilic}$ and indeed such references never test on heterophilic datasets. In fact, Table 1 in our paper compares with [8] and [41] for example, emphasizing how they are not suitable to handle heterophily. Accordingly, $\\textbf{we believe our experiments to be extensive and sufficient}$ – especially considering that reviewers have acknowledged the main theoretical nature of our work.\n\n$\\textbf{The role of non-linear activations}$: Reviewers $\\textcolor{blue}{13DY }$ and $\\textcolor{red}{PPbd}$ have raised questions about the role of non-linear activations and associated evaluation. 
Some important preliminary remarks:\n- We do not state that in the general graph learning landscape non-linear activations are not needed.\n- The fact that in GNNs one can (sometimes) suppress non-linear activations without serious issues has already been observed; see, for example, the $\textit{highly popular}$ SGCN paper [43].\n- Our framework is composed of a node-wise encoding block, a diffusion block, and a node-wise decoding block. In principle, both encoder and decoder can be chosen as non-linear MLPs, meaning that the $\textbf{overall architecture can indeed be nonlinear}$.\n- The fact that the diffusion block (equation (11), line 234) is linear does $\textbf{not}$ mean that the GNN collapses to a single layer, thanks to the residual connection term. More precisely, we refer to lines 790-795 of the $\textbf{revised SM}$ (the list of modifications is detailed below). Since we are discretizing a linear ODE, the discrete solution is an approximation of a matrix exponential.\n\n\n",
" The paper suggests a new graph neural network architecture that can be seen as a gradient descent minimization of a learnable energy function. The authors use channel-mixing matrices with mixed eigenvalues to infuse high frequencies into the architecture dynamics. Many existing architectures fall into this framework. Three variants of the new architecture are presented. Strengths\n1) The paper is rich in theoretical insights. \n2) Unlike previous works, this is the first work that analyzes the channel mixing matrix. \n\nWeaknesses: \n1) The experiments are rather limited. The authors show only node classification, where it is customary to show more experiments. The work of GCNII, for example, shows PPI, and two cases of node classification (semi and fully supervised). Also, an example on a large dataset (e.g., OGBN-Arxiv) is also important. Most importantly, it is not clear how well the method works on graph classification tasks (e.g., the TUD data sets), without the non-linearities in the layers. Given the strong claims made by the authors regarding those non-linearities (see below), showing these experiments is essential in my opinion.\n\n2) I find only a single data set where GRAFF yields the best performance. Overall, this method does not improve the SOTA. \n\n3) There is very little discussion about over-smoothing in this work (only in the paragraph in line 249). At the end, the authors do not show if their method is over-smoothing or not. Ideally, the authors would provide accuracies for a variety of number of layers and show that the accuracy does not degrade (e.g., see GCNII). \n\n4) There are questionable choices for the architecture (e.g., no non-linearities) that are accompanied by too strong statements without backing them up. See details in the questions section. In particular, the authors do not show that indeed adding and removing the non-linearity has no influence on the accuracy. \n\n5) The writing of the paper is hard to follow. I would say that the presentation (i.e., notation and language) can be simplified to make this paper more reader-friendly. \n\n6) I could not find how many layers were used in Table 1.\n\n7) Missing citation from the previous Neurips: \nZhou, K., Huang, X., Zha, D., Chen, R., Li, L., Choi, S.-H., and Hu, X. Dirichlet energy constrained learning for deep graph neural networks. Advances in Neural Information Processing Systems, 34, 2021.\nThis paper also discusses the Dirichlet energy throughout the layers.\n\n8) Missing citation from ICLR 2022:\nHow attentive are graph attention networks? (GATv2)\nThe conclusion of this paper is that having a more non-linear (in some sense) attention matrix improves the accuracy and training stability over GAT. How does the conclusion in this work align with the findings in GATv2? Note that there are additional data sets in GATv2, which may be more challenging and require non-linearities in the layers.\n\n - The gradient of Eq (4) looks like Grad^TW^TWGrad(F). Why is Eq (5) the way it is? Please clarify in the text. \n\n- I am confused by the derivation from (6) to (7). First, if the rest of the paper uses the energies in (7), why introduce the energies in (4)-(6)? How do (6) and (7) relate? This is not clear. More importantly, is it the same W in both equations? It does not seem so. This is very confusing. Please consider revising. \n\n- Line 202-203: Are the authors saying that there is no role for the non-linearities in graph neural networks? 
But that is the most important aspect of a neural network’s definition (otherwise, the whole network collapses to a single linear operator). It does not make sense. Maybe try other experiments? CNNs, for example, sure require non-linearities for image classification. Maybe test this hypothesis on graph classification? Maybe shape classification (ModelNet40)? \n\n- Line 209: by linearized GNNs, do the authors mean with identity activation? Because there are no activations in the following equations. But - this is not the traditional use of the word \"linearized\", which is traditionally used for a Taylor approximation. Please revise. \n\n- Line 228 – why do the authors introduce tilde{W}, and then set it as identity? Can’t tilde{W} be chosen better? And if so - why introduce this matrix? \n\n- Line 230 – Why are Omega and W shared across the layers? Traditionally, at least in CNNs, one learns a variety of channel-mixing convolution operators. What is different here? This negates the common practice of neural networks and requires explanation and evaluation.\n\nLine 235 – the authors have T, tau, and the number of layers. How do we choose T or tau? Is it a hyperparameter? Do the authors choose tau to be small enough to ensure the stability of the Euler scheme? \n\nLine 236 – The authors state that they can include the non-linearities in the encoder and decoder layers only. Will that be the equivalent to having a non-linearity in Eq. (12)? I do not understand why. Again, that is against the fundamental aspect of neural networks – the use of non-linear activations to express complex functions.\n\nLines 244-245: Symmetry being a key requirement. When other works learn the channel-mixing operators, they do not enforce symmetry. So – this is important only for looking at architectures as a gradient flow. Further, in lines 247-248: indeed (13) can be seen as a generalization of the mentioned methods (with identity activation), but none of these learn symmetric matrices. Also, as the authors note, GAT does not have a symmetric attention matrix. So, is symmetry really important? Will we get better networks if we enforce symmetry in the learning? \n\nIn continuation to the previous point: the symmetrization in lines 304. In my opinion, this whole concept of symmetry or not should be evaluated in extensive experiments, but I do not see such experiments in the paper. \n\nLines 282-284: Essentially, when choosing negative eigenvalues for W, you indeed reverse the time integration. But then, the Euler method that is used is known to become unstable. Isn’t that a problem with the whole approach?\n\nLines 290: The authors state that linear GNNs achieve competitive performance on real-world data sets. Again, having linear layers negates the whole concept of NNs. This cannot be a conclusion that holds for all data sets and tasks. Given the rather limited scope of experiments, I would say that this is a too strong statement here. \n\nLines 301-302: The authors choose W to be diagonal and random and do not train over it. Why is that? Won’t we get better results if we train over W? This means that there are no non-linearities in the network, and there are no learnable channel-mixing parameters. So, except the encoding and decoding layers, it’s essentially a classical algorithm. How do the authors explain that?\n\nLines 301: the random W. How does the choice of a random W influence the results? Are the achieved accuracies stable? 
Or do the authors see large deviations between different runs?\n\nLine 375: This is yet another strong sentence, given the rather limited experimental study in the paper. \n yes. ",
" In this work, authors present GRAFF, a gradient flow based graph neural network in which the evolution of GNN is represented as minimizing the combination of attractive and repulsive interactions of a multi-particle system. Detailed theoretical characterization in terms of a parametric Dirichlet energy, a general parametric energy and spectral analysis is performed. The work is well-written and clearly presented. It builds on similar ideas as outlined in several previous works such as, for instance, [27]. While the presentation in the work is good, the idea in itself is fairly intuitive and simple and has been discussed in contexts of neural networks and physics (related works, see: Landscape and training regimes in deep learning, M. Geiger; L. Petrini; M. Wyart, Physics Reports. 2021-04-16. Vol. 924, p. 1-18. DOI : 10.1016/j.physrep.2021.04.001.). Further, the empirical experiments reveal that the results are comparable with the existing approaches, but not necessarily better. It has been shown by earlier works that energy formulation results in a loss landscape that has a large number of local minima and hence gradient based minimization results in a poor solution. In contrast, deep neural networks with large number of parameters have flat minima and the loss landscape is connected by level set. Reading these together, it is unclear whether the energy-based formulation with simpler formulation loses the advantages of the landscape that a deep-learning architecture has. Could this also be the reason why the performance of the GRAFF is not superior in comparison to other SOTA models? Authors should investigate this. The experiments has primarily focussed on one aspect while studying on several datasets with varying hetero/homophily. To evaluate the true performance of the approach, several other experiments on varying downstream tasks are required. in addition, a closer analysis on the loss landscape is required to understand the nature of the minima and saddle points. ",
" In this paper, the evolution of the GNN is explained as learning attractive and repulsive forces in feature space by the positive and negative eigenvalues of a symmetric 'channel-mixing' matrix. According to the spectral analysis of the solutions, gradient flow graph convolutional models result in a dynamic dominated by graph high frequencies, which is desirable for heterophilic datasets. Moreover, the authors present structural constraints on common GNN architectures, allowing them to be interpreted as gradient flows. We perform extensive ablation studies to verify our theoretical analysis and demonstrate the comparative performance of simple and lightweight models on real-world homophilic and heterophilic datasets. Strengths:\n\nIn this paper, the author gives a new perspective on GNN in terms of the particle system, which explain why the original GNN does not work well on heterophilic datasets and also analysis Dirichlet energy change in the dynamic system.\n\nThe whole paper's structure is clear and easy to follow. Several adequate experiments are used to verify the author's statement.\n\nWeaknesses:\n\nWhen the graph size is increasing, is this GNN also computation feasibly? As this model needs to compute the eigendecomposition of the graph Laplacian, when the graph size is increasing, it should be hard to compute. If two particle (two nodes) is repulsive to each other, will both feature blow up (going to infinity) as the time of the dynamic system increases? How can you ensure all the feature on every node is bounded with the system is evolving?\n\n\n The authors discussed the limitation of their works and their social impact.",
" This paper introduces a new family of models, GRAFF, on graphs wherein graph features are transformed according to a dynamical system given by the negative gradient of an *energy functional*, which is parameterized and learned. This amounts to a re-parameterization to focus on an energy function describing a discretized iterative update, instead of parameterizing the iterative update itself, as the most widely used GNN architectures do. This relation is properly studied in Section 4, where they show that GRAFF still includes many prior GNN model (up to the perhaps critically important matter of the non-linearity). However, this re-parameterization appears to offer more than just a reinterpretation of existing models. The primary value added explored in this work is to the analysis (and empirics) of the ability to handle heterophilious graphs. First, a congratulations to the authors on a nice piece of work that offers some new perspectives, and promising new directions. I enjoyed reading your work and certainly felt that I learned something in the process. \n\nBelow I discuss some of the things I especially liked in this work, as well as some of the concerns I have about certain aspects. Overall, I think this work contains some great conceptual components, but leaves open so quite important questions, particularly revolving around expressive power. The empirical evaluation is also relatively weak, and leaves me uncertain whether gradient flow models would enjoy widespread adoption. I will explain why I came to each of these beliefs in more detail below.\n\n---\n\n**Strong aspects:**\n\nThe community is moving towards a number of candidate approaches for circumnavigating the weaknesses of message passing networks. Although many ideas have already been given, the debate remains open. The idea of models following gradient flows is creative, and immediately prompted an “aha!” feeling. The community is in need of creative ideas like this, as you never know which will end up having a decisive impact.\n\nOn a technical level, it was pleasing to see the amenability of gradient flows to analysis of smoothing properties. The result in Section 3 on the Dirichlet energy functional were particularly interesting. It is quite unfortunate, however unfortunately typical, that the analysis doesn’t extent to non-linear activations (line 201). \n\n--- \n\n**Weak aspects:**\n\nA major missing piece of the picture is an understanding of the expressive power of the proposed gradient flow models. I found it particularly perplexing that line 202 mentions that no non-linearity is used in experiments. This raises a number of questions: are these models then of comparable expressive power to linear GNNs? Given that having no non-linearity doesn’t hurt performance does this just suggest that the empirical benchmarks considered just aren’t that challenging? \n\nThe paper claims several times that GRAFF models are “explainable”. The basis for this is that the model predictions can be understood by probing property of the energy functional. While this may turn out to be a useful point, the paper does not properly substantiate the claim. Indeed, there are no examples of any such “explanation” in practice. I would ask the author to either drop the “explainable” claim entirely (which isn’t critical in any-case, despite it’s prominent position in the explanation of ”why a gradient flow?”) or to clearly substantiate it, probably via an example. \n\nExperimental evaluation is fairly limited. 
Table 1 is the main seat of comparisons to other models on node-level classification tasks for varying levels of homophily. As billed, the strengths of GRAFF seem to emerge primarily in low-homophily (high-heterophily) graphs. However, the heterophilic graphs are very small: half only have a few hundred nodes, and the biggest graph considered, “Films”, has 7,600 nodes, and an MLP is a fairly competitive baseline on Films. All this means that the possible benefits to empirical methodology in the immediate future seem unclear. \n\n---\n\nTo conclude, although the empirics leave a number of question marks over the immediate empirical viability, the idea of graph models via gradient flows along an energy functional is elegant and thought provoking for me. The idea itself, plus the good exploration of the connection to smoothing, is enough to put me on the side of acceptance, but the limitations mentioned keep it only marginally so.\n\n\n---\n\n**Miscellaneous:**\n\n- Clarity in certain places could be improved. For instance, Propositions 2.4 and 3.1 give explicit rates for the energy functional. Since (unless I am missing something!) the key point in both cases is that the models are high-frequency dominant, the rate itself seems to be more of an intermediate step towards this final HFD conclusion. Maybe it is a matter of personal taste but I would have hidden the gory details in the appendix. \n- More generally, the paper is pretty notation-heavy.\n - What component(s) of GRAFF explain the inference speedup vis-à-vis GCN? Is it due to parameter sharing? Also, no details are given as to how this comparison was set up. Right now I cannot be sure that the comparison is apples to apples; maybe the GCN is a really massive model and the GRAFF is much smaller. More details on this would be great.\n- Why just compare inference time? What about a comparison of training time? It seems remiss not to include this, especially since the main paper simply mentions “run-time smaller than GCN” (line 359).\n- The connection to spectral GNNs is interesting. Perhaps this suggests a path to developing expressive power results. Yes."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
4,
4
] | [
"tSTeYcDuGRJ",
"-jq6jCQJF06",
"a71FXRX1YOJ",
"7aZKQESKNqX-",
"kku2Q_X6hov",
"uJnCGKuejv",
"HgeILay1TX9",
"J7aG-6fvpaU",
"49yUH8Eq2A0",
"F-Oh_ruEYzA",
"R2n3t2ffnkS",
"zkVQHhU_NoS",
"mXjLJIdiNTQ",
"t6WbmVZAMSO",
"l0_mNZ4v0yI",
"K9JmBtEsalG",
"jxDn0fiTC3U",
"lgkAtxhil1N",
"6ayQowmlDyi",
"nips_2022_0IywQ8uxJx",
"nips_2022_0IywQ8uxJx",
"nips_2022_0IywQ8uxJx",
"nips_2022_0IywQ8uxJx",
"nips_2022_0IywQ8uxJx"
] |
nips_2022_kRgOlgFW9aP | Thompson Sampling Efficiently Learns to Control Diffusion Processes | Diffusion processes that evolve according to linear stochastic differential equations are an important family of continuous-time dynamic decision-making models. Optimal policies are well-studied for them, under full certainty about the drift matrices. However, little is known about data-driven control of diffusion processes with uncertain drift matrices as conventional discrete-time analysis techniques are not applicable. In addition, while the task can be viewed as a reinforcement learning problem involving exploration and exploitation trade-off, ensuring system stability is a fundamental component of designing optimal policies. We establish that the popular Thompson sampling algorithm learns optimal actions fast, incurring only a square-root of time regret, and also stabilizes the system in a short time period. To the best of our knowledge, this is the first such result for Thompson sampling in a diffusion process control problem. We validate our theoretical results through empirical simulations with real matrices. Moreover, we observe that Thompson sampling significantly improves (worst-case) regret, compared to the state-of-the-art algorithms, suggesting Thompson sampling explores in a more guarded fashion. Our theoretical analysis involves characterization of a certain \emph{optimality manifold} that ties the local geometry of the drift parameters to the optimal control of the diffusion process. We expect this technique to be of broader interest. | Accept | This paper proposes and analyzes a Thompson-Sampling based method to learn to control continuous-time linear systems when the costs are quadratic. The authors first propose an algorithm that guarantees stabilization of the diffusion process and then give a second, Thompson-Sampling-based method with regret bounds and estimation rates for the parameters of the linear system.
The reviews for this paper were generally positive and found this work to positively contribute to our understanding of linear control, though several reviews noted the similarities with reference [2] and a general lack of contextualization of the work in the general adaptive and Bayesian control literatures. Nevertheless, the results were sound and extended our understanding of learning and control in the LQ setting, and the paper was well written and easy to follow. | train | [
"Amahqzs7Yb_",
"FMp4LRUhR3",
"1HTqNnGBbqF",
"nf_hXTfFNp9",
"8AwzIScQmq",
"RFL5JD63BZ",
"kUfI8F-O0j",
"4mrxfdFoWO"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the feedback. The authors will be happy to provide point-by-point explanations to all questions of the reviewer.\n",
" Thanks for the deep conceptual and technical comments the reviewer correctly provided. The authors appreciate the comprehensive review and the constructive comments, are grateful that the reviewer found the paper interesting, and will incorporate the edits in the final version. It is also satisfactory to hear that the reviewer found the intuitive explanations helpful.\n\n- \"\"Additionally, through … informed exploration.\", is a bit far stretched, ... strategy.\"\nThanks for the comment. We will rewrite this according to the comment, in the final version. \n\n- “the topics … are not sufficiently mentioned in the paper”\nWe thank the reviewer for the relevant references. We will add them, as well as the following references, to the final version. \nThis work is essentially similar to [2], [3], and it studies one of the closest policies to [2], [3] that admits both a fast implementation for multidimensional systems, as well as theoretical tractability. Technically, approaches based on the augmented dynamics method (for both the state and the unknown parameter) suggested in [2] are analyzed for a few special settings, e.g., [3], [7]. Furthermore, as reviewed in [4], since design and analysis of dual control policies utilize additional ideas such as the linearization of the dynamics and the quadratic expansions of the cost functions as in [5], [6], the authors expect the technical framework developed in this work (e.g., Lemma 9) to pave the road toward theoretical analysis of the performance of dual and adaptive policies for balancing exploration and exploitation. \n\n- [4] Wittenmark, Björn. \"Adaptive dual control methods: An overview.\" Adaptive Systems in Control and Signal Processing 1995 (1995): 67-72.\n- [5] Klenske, Edgar D., and Philipp Hennig. \"Dual control for approximate Bayesian reinforcement learning.\" The Journal of Machine Learning Research 17.1 (2016): 4354-4383.\n- [6] Tse, Edison, and Yaakov Bar-Shalom. \"An actively adaptive control for linear systems with random parameters via the dual control approach.\" IEEE Transactions on Automatic Control 18.2 (1973): 109-117.\n- [7] Sternby, Jan. \"A simple dual control problem with an analytical solution.\" IEEE Transactions on Automatic Control 21.6 (1976): 840-844.\n\n- \"\"We assume … [32–35].\" What is meant by this? ... a bit on this.\"\nThank you for the deep technical question. The aforementioned lines aim to explain that to compute the solution of the Riccati equation, it suffices to solve the ODE of the finite-horizon Riccati equation, and then let the horizon grow. Practically, it is based on the fact that for minimizing the average cost of this work, it suffices to minimize the cumulative cost for a ‘large’ horizon. Accordingly, if the finite horizon tends to infinity, the solutions of the Riccati equations converge to those of equation (5) in the paper. In the final version, we will edit according to the explanations of the reviewer to ensure that this computation method is clear.\n\n- \"In line 192: \"Nonetheless, ... let [...] and [...].\" This formulation confuses ... would be appreciated.\"\nThanks for the interesting question. The above-mentioned lines try to explain the case that a prior distribution of the unknown dynamics matrices is available, which will be used by the algorithm to have a better ‘initial’ exploration. Otherwise, standard multivariate Gaussian distribution can be adopted as a prior (although it is not actually). 
Note that the performance metric here is worst-case, which differs from an averaged one wrt the prior distribution (i.e., the regret is not the so-called Bayesian regret). Therefore, $\\hat \\mu_0, \\hat \\Sigma_0$ contribute as a constant and do not affect the rates of the performance of the algorithms in Theorem 2.\nIn the final version, we rewrite the corresponding lines to further clarify the roles of $\\hat \\mu_0, \\hat \\Sigma_0$. \n\n- \"Would it ... sparsity-prior.\"\nThanks for the interesting question. The approach can be extended to non-Gaussian posterior distributions, but further technical details are required in some cases. More precisely, as long as the drift matrices do not possess any structure (such as sparsity), every posterior that concentrates appropriately can be employed. That is, the presented analysis extends to cases where the posterior does not concentrate faster or slower than Gaussian, and has a sub-Gaussian tail as well as a bounded probability density function (e.g., mixtures of Gaussians). \nOn the other hand, under structured dynamics parameters, some of the technical results can be directly used for non-Gaussian priors/posteriors (e.g., Lemmas 1, 2, 3, 5, 6, 7, 10, 11, and 12, assuming sparsity of the dynamics matrices). For some others, accordingly appropriate counterparts are required (e.g., Lemmas 4, 8, and 9, for sparse drift parameters). The authors believe these extensions constitute interesting directions for future work that this paper paves the road toward.\n",
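The horizon-growth computation described in the Riccati answer above can be illustrated with a minimal numerical sketch. The matrices below are hypothetical stand-ins (not from the paper): we integrate the finite-horizon Riccati ODE in time-to-go from a zero terminal condition and check that, as the horizon grows, the solution approaches the algebraic Riccati solution that defines the average-cost optimal feedback.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-1.0, 0.5]])   # hypothetical drift matrices
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)

def riccati_rhs(_, p_flat):
    # finite-horizon Riccati ODE written in "time-to-go" s = T - t
    P = p_flat.reshape(2, 2)
    dP = A.T @ P + P @ A - P @ B @ np.linalg.solve(R, B.T) @ P + Q
    return dP.ravel()

# P(0) = 0 encodes a zero terminal cost; integrating to s = 50 mimics a long horizon.
sol = solve_ivp(riccati_rhs, (0.0, 50.0), np.zeros(4), rtol=1e-9, atol=1e-9)
P_T = sol.y[:, -1].reshape(2, 2)
P_inf = solve_continuous_are(A, B, Q, R)
print(np.max(np.abs(P_T - P_inf)))  # ~0: the finite-horizon solution has converged
```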
" We appreciate the helpful comments of the reviewer. According to them, clarifying explanations and relevant edits to the final version of the paper are provided in the sequel, and the authors will perform further edits the reviewer may recommend.\n\nBelow, we elaborate differences compared to [2], including that Algorithm 2 significantly outperforms the RL policy in [2], and Algorithm 1 and its performance guarantee, both are novel. In addition, the reviewers and the authors believe that performance guarantees for the popular TS policy in the canonical systems of linear diffusion processes is a theoretically interesting strong contribution to RL theory.\n\n- “Definition 1 … implies.”\nThanks for the comment. Following this comment, in the final version of the paper, we will bring the explanations in Section 3 (Stabilizing the Diffusion Process), in lines 149-158, before Definition 1, and edit further to ensure clarity. We will also discuss the failure event and how the diffusion process grows unboundedly in case of failure, as well as its remedies. \n\n- “Also, does Algorithm 2 have similar stability result?” \nThanks for the interesting technical question. Algorithm 2 utilizes Algorithm 1 for learning to stabilize the diffusion process under consideration. Therefore, by Theorem 1, with high probability it stabilizes the system. As the same logic applies to the subsequent episodes of Algorithm 2, in case of no ‘resampling’ (as further explained below), it stabilizes the process with high probability.\nIn addition, thanks to the randomness provided by the posterior distribution, the failure probability can shrink further by ‘resampling’ from the posterior (lines 175-177). More precisely, in case of failure of Algorithm 1 (which can be detected since the magnitude of the state vector drastically grows with time if the system is not stable), once can simply ‘resample’ from the posterior. Similarly, by employing a resampling strategy at the end of an episode of Algorithm 2 in case of observing any instability, the state evolution under the algorithm almost surely remains stable. Note that the failure probability decays exponentially as time proceeds, by Theorem 1. These explanations will be added to those in lines 224-230 in the paper. \n\n- “The numerical results in Figure 1 suggest that the randomized estimation performs better in terms of the normalized estimation error. How would you interpret the results?” \nThanks for the interesting technical question. The interpretation of the better estimation error of the randomized estimation policy compared to Thompson sampling, while the regret comparison holds in the opposite direction, is as follows. The former policy ‘over-explores’ for learning the parameters and unduly deviates from exploiting efficiently based on the parameter estimates at the time. It can be seen in the empirical experiments that infrequently the randomized estimation policy has a better learning accuracy, while its regret is always inferior to that of Thompson sampling. Importantly, since the end goal is to have policies with small regret, numerical results showcase superiority of Thompson sampling. Note that this superiority is significant since in the graphs, ‘magnified’ squared estimation errors are reported (multiplied by $\\sqrt T$, approximately), while regret curves contain the actual regret values ‘divided’ by $\\sqrt T$. So, the smaller regret of Thompson sampling is consequential and important in practice. 
We will add these explanations to the final version of the paper.\n\n- “It would be better to elaborate the introduction about [2].”\nThanks for the helpful suggestion. As per the above discussions, we will further elaborate and discuss comparisons with [2] in the introduction of the final version. \n",
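To make the resampling safeguard in the response above concrete, here is a heavily simplified sketch of our own (not the paper's Algorithm 2): the diffusion is discretized by Euler-Maruyama, the posterior is kept as a fixed Gaussian for brevity (the actual algorithm updates it from observed data), and a fresh parameter sample is drawn at the start of each episode or whenever the state magnitude signals instability.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

rng = np.random.default_rng(0)
A_true, B_true = np.array([[0.2]]), np.array([[1.0]])   # hypothetical 1-d system
Q, R = np.eye(1), np.eye(1)
dt, blowup, episode_len = 1e-3, 1e3, 5_000

def feedback_gain(a, b):
    # optimal linear feedback u = K x for the sampled parameters
    A, B = np.array([[a]]), np.array([[b]])
    P = solve_continuous_are(A, B, Q, R)
    return -np.linalg.solve(R, B.T @ P)

def sample_params():
    # stand-in for the posterior; reject samples that are clearly degenerate
    while True:
        a, b = rng.normal(0.0, 1.0, size=2)
        if abs(b) > 1e-2:
            return a, b

x, K = np.ones(1), feedback_gain(*sample_params())
for step in range(20_000):
    if step % episode_len == 0 or np.abs(x).max() > blowup:
        K = feedback_gain(*sample_params())   # (re)sample on new episode/instability
        x = np.clip(x, -blowup, blowup)
    u = K @ x
    x = x + (A_true @ x + B_true @ u) * dt + np.sqrt(dt) * rng.standard_normal(1)
```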
" \nWe thank the reviewer for the helpful feedback and are happy that they found the setting interesting for studying the popular Thompson sampling policy for the continuous-time LQ problem. Below, we address the comments and hope that the reviewer finds them satisfactory. \n\nThe references [1] and [2] (cited in the paper as [29] and [16]) are important motivating papers for the authors to establish and present the results of this paper. In addition to the theoretical benefit of answering a natural question about Thompson sampling in continuous-time LQ systems, this work aims to fill some gaps in the existing literature (as expressed by Reviewers Vgkm and u7y2), for which different technical novelties are established. More details are provided below.\n\nOn [1], the self-exploration property of finite (in practice, short) horizon problem does not hold in the infinite (or practically long) horizon problem considered in this work. Technically, Assumption H.1 (2) in [1] is crucial in the sense that it renders exploration unnecessary (as discussed in Remark 2.1 therein). However, since in the setting of this work, Assumption H.1 (2) ‘cannot’ hold, the exploration-exploitation trade-off necessitates exploring the environment and precludes logarithmic regret. \n\nFurther technical differences indicating that the analysis in [1] is inapplicable, are as follows. The short horizon in [1] makes stabilization unnecessary, while it is crucially required for single trajectory online RL policies of this work that cannot reset the state of the system. Accordingly, we proposed Algorithm 1 and established its performance guarantee by developing novel technical results, as discussed in Section 5. Finally, the definition of regret in [1] does not fully include the stochasticity induced by the Wiener process, while here a comprehensive worst-case analysis is performed (as presented in Lemmas 7 and 8). \n\nOn the other hand, the analysis in [2] for Thompson sampling in discrete-time, focuses on one-dimensional systems. In contrast, Theorem 2 specifies dependence of the regret and estimation error on ambient dimension, as well as the other problem instances. Moreover, the conventional approach for discrete-time systems, as in [2] and [3], relies on concentration inequalities, which does not extend to continuous-time settings. In fact, because of technical difficulties including the fact that “continuous-time martingales have sub-exponential distributions, unlike sub-Gaussianity of discrete-time counterparts” (line 215), the theoretical analyses that establish rates for continuous-time settings that are similar to the corresponding rates in discrete-time systems, are considered as strong theoretical contributions.\n",
" I do not have the background to give a reasonable assessment. So, please ignore this review. I do not have the background to give a reasonable assessment. So, please ignore this review. No I can't see any limitations and potential negative societal impact from their work.",
" This paper proposes a Thompson-Sampling-based method to learn how to make decisions in a class of continuous-time linear-quadratic (LQ) problems with unknown coefficients in the linear dynamics.\n Pros: Reinforcement Learning (RL) for LQ problems is a tropical topic in recent years since it is the building to understand how machine learning methods perform for general decision-making problems. Thompson Sampling, on the other hand, is a practically-popular and theoretically-plausible algorithm for bandits and reinforcement learning method for Markov Decision Process (MDP) due to its efficiency in exploration. Therefore, it is interesting to see what additional benefit Thompson Sampling could bring to the LQ problem compared to the existing methods.\n\nCons: The contribution seems to be marginal compared to two existing papers [1] and [2]. [1] showed that the continuous-time least-squares algorithm leads to logarithmic regret and hence greedy algorithm (with no exploration) is sufficient for finite-horizon LQ problem. [2] proved an $O(\\sqrt{T})$ regret for infinite-horizon ergodic LQ problem in discrete time. \n\nIt is not clear, from both theoretical and empirical perspectives, what is the benefit of using Thompson Sampling (or Guassian exploration on the estimated model parameters). This is because it has been shown in the literature that:\n\n(1) LG problems have self-exploration properties (due to the Gaussian noise in the linear dynamics) and the least-square estimate (exploration-free method) leads to log regret bound [1]\n\n(2) Using linear regression to estimate parameters in the dynamics is sample efficient [3]\n\n[1] Basei, Matteo, et al. \"Logarithmic regret for episodic continuous-time linear-quadratic reinforcement learning over a finite-time horizon.\" arXiv preprint arXiv:2006.15316 (2020).\n\n[2] Abeille, Marc, and Alessandro Lazaric. \"Improved regret bounds for thompson sampling in linear quadratic control problems.\" International Conference on Machine Learning. PMLR, 2018.\n\n[3] Dean, Sarah, et al. \"On the sample complexity of the linear quadratic regulator.\" Foundations of Computational Mathematics 20.4 (2020): 633-679.\n\n==========\nI have read the reviews from other reviewers and the responses from the authors. The authors have addressed my concerns and I raised my rating to 6. Please address the comments listed above. Yes.",
" The paper studies policies for systems that evolve according to Ito stochastic differential equation (1) with unknown drift parameters. The cost function is quadratic. The goals are: first, to minimize the regret; and second, to accurately estimate the drift parameters. The authors first propose Alg. 1 and show that with high probability it stabilize the process. The they propose Alg 2 using Thompson Sampling. Regret bound and estimation rates are given in Theorem 2. Experiments are conducted comparing the proposed method to reference [2]. Strengths:\nThere is a gap in the literature on applying TS to controlling the continuous-time diffusion process. The paper first provide algorithms and analysis for such method. Detailed proofs are given in the appendices.\n\nWeaknesses:\nThe problem setting is very similar to reference [2]. The experimental results in section 6 and appendix D do not reflect a significant improvement. In fact, the normalized estimation error using randomized estimation seems to be better than the proposed method on average. The first part of the paper focuses on stabilization. But Definition 1 is given before proper introduction. Moreover, the definition directly refers to the eigenvalues of the closed-loop matrix, which uses a linear feedback. Perhaps it would be better to give some intuitions about stability of the system and what the failure event implies. Also, does Algorithm 2 have similar stability result?\n\nThe numerical results in Figure 1 suggest that the randomized estimation performs better in terms of the normalized estimation error. How would you interpret the results?\n\nThe problem setting is very similar to reference [2]. It would be better to elaborate the introduction about [2], especially because it is also used as the baseline. not applicable",
" The paper discusses Thompson sampling for linear-quadratic continuous-time stochastic optimal control problems. \n\nThe analysis and numerical algorithms are developed using a conjugate Gaussian prior distribution. \nThe paper discusses thoroughly theoretical implications, including (i) a bound on the probability of not stabilizing the system under a linear exploration strategy, (ii) a bound on the squared estimation error, and (iii) a finite-time regret bound.\n\nThe developed algorithm is numerically investigated using some synthetic examples of stochastic optimal control problems. It was very surprising to me that nobody until now did a theoretical analysis for this linear-quadratic setting. In my opinion, this paper solidly develops theoretical guarantees for the popular Thompson sampling algorithm. Since model-based RL algorithms in continuous-time are still lacking within machine learning, I very much appreciate this work.\nIt is nicely written, and I enjoyed reading it. I appreciate that the authors discuss the intuitive implications of the bandit analysis, which is normally filled with hard-to-understand mathematical intricacies.\n\nA minor point I have to critique is, in my opinion, the claim in line 70: \"Additionally, through extensive simulations we illustrate that TS enjoys smaller average regret and substantially lower worst-case regret than the existing RL policies, thanks to its informed exploration.\", which is a bit far stretched, as this work only compares one other algorithm to this Thompson sampling strategy.\n\nThis brings me to my main critique point, which is that the topics adaptive control and dual control (Bayesian reinforcement learning within machine learning) are not sufficiently mentioned in the paper, see, e.g., [1]. Theory discussing simultaneous optimal estimation and control is unsurprisingly very old. This goes back to the more than 50-year-old works of Feldbaum [2] and is since then been discussed within the control community. This topic, which in principle optimally solves the problem has to be discussed within the related work.\nThis would also give a nice numerical example. For example, there are very simple dual control problems, see, e.g., [3], where one could try to find the solution to the HJB equation numerically using the finite difference method. This would result in a control strategy that Bayes-optimally selects the actions and hence, balances exploration and exploitation optimally.\n\nHowever, all critique aside, I am of the opinion that this paper is a solid contribution to the community.\n\n- [1] Stengel, Robert F. Optimal control and estimation. Courier Corporation, 1994.\n- [2] Feldbaum, Aleksandr Aronovich. \"Dual control theory. I.\" Avtomatika i Telemekhanika 21.9 (1960): 1240-1249.\n- [3] Florentin, J. J. \"Optimal, probing, adaptive control of a simple Bayesian system.\" International Journal of Electronics 13.2 (1962): 165-177.\n Something I did not understand was in the paragraph in line 131:\n\"We assume that the process (1) with the drift parameter [...] is stabilizable. Therefore, [...] exists, is unique, and can be computed using continuous-time Riccati differential equations similar to (5), except that the zero matrix on the right-hand side will be replaced by the derivative of [...] [32–35].\"\nWhat is meant by this? I know of the Riccati equation for finite-time optimal control problems, which is an ODE. Though, here an average reward criterion is discussed. 
Maybe the authors can elaborate a bit on this.\n\nIn line 192: \"Nonetheless, if there is no such prior, we simply let [...] and [...].\" This formulation confuses me. I think it shadows a bit that the hyper-parameters are set in this way because in the proceeding bounds the hyper-parameters $\\hat{\\mu}_0$ and $\\hat{\\Sigma}_0$ are not appearing anymore. Some explanation for this would be appreciated.\n\nThe last question I have is: Would it be possible to also derive these bounds under different posterior distributions than a Gaussian? For example when using a sparsity-prior. Yes, the authors have adequately addressed the limitations and potential negative societal impact of their work."
] | [
-1,
-1,
-1,
-1,
6,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
1,
4,
4,
4
] | [
"8AwzIScQmq",
"4mrxfdFoWO",
"kUfI8F-O0j",
"RFL5JD63BZ",
"nips_2022_kRgOlgFW9aP",
"nips_2022_kRgOlgFW9aP",
"nips_2022_kRgOlgFW9aP",
"nips_2022_kRgOlgFW9aP"
] |
nips_2022_lJHkZbX6Ic1 | Is this the Right Neighborhood? Accurate and Query Efficient Model Agnostic Explanations | There have been multiple works that try to ascertain explanations for decisions of black box models on particular inputs by perturbing the input or by sampling around it, creating a neighborhood and then fitting a sparse (linear) model (e.g. LIME). Many of these methods are unstable and so more recent work tries to find stable or robust alternatives. However, stable solutions may not accurately represent the behavior of the model around the input. Thus, the question we ask in this paper is: are we approximating the local boundary around the input accurately? In particular, are we sampling the right neighborhood so that a linear approximation of the black box is faithful to its true behavior around that input, given that the black box can be highly non-linear (viz., a deep ReLU network with many linear pieces)? It is difficult to know the correct neighborhood width (or radius) as too small a width can lead to a bad condition number of the inverse covariance matrix of function fitting procedures resulting in unstable predictions, while too large a width may lead to accounting for multiple linear pieces and consequently a poor local approximation. We in this paper propose a simple approach that is robust across neighborhood widths in recovering faithful local explanations. In addition to a naive implementation of our approach which can still be accurate, we propose a novel adaptive neighborhood sampling scheme (ANS) that we formally show can be much more sample and query efficient. We then empirically evaluate our approach on real data where our explanations are significantly more sample and query efficient than the competitors, while also being faithful and stable across different widths. | Accept | The paper attacks the problem of how to define "local" when generating local linear explanations (e.g. LIME). Forming the linear approximation using multiple points, the proposed method attempts to balance robustness of the explanation vs its specificity. The approach of using multidimensional piecewise linear segmented regression is sensible for this end, albeit at increased runtime. The majority of reviewers had a favorable opinion of the work, recognizing the paper's contribution as targeted but important, given the popularity of local linear explanations. Even reviewer hHAF, who recommended rejection, recognized the work's practical benefit ("experimentally it is beneficial"). Thus, I recommend acceptance. | train | [
"H-0buaAJa26",
"7IUw3PI2fSl",
"uHmNV6lxChO",
"_GB8sS_eMVc",
"4Mvj2eHdn2C",
"kqBSHt8kjt",
"8hOFzXRmqoH",
"6VnykH2hGeI",
"wVZtQBqzpHnI",
"NPYMtJUPLxW",
"dv-84d3z5PC",
"8SHaeoRw0D3",
"dmuUikECKPg"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Since the response period ends tomorrow. Please let us know if you have any further questions/concerns. Thank you.",
" We are glad that most of your concerns have been addressed. Yes, most other methods sample in input space. Even manifold methods such as MeLime which sample in the latent space end up decoding these neighbors and finally fitting a (proxy) explanation model in the input space, where the explanations highlight important input pixels since the explanations have to be humanly understandable.",
" Most of my concerns are addressed. Are other works for interpretability also sample neighbors in the input pixel space for cifar-10?",
" As argued in our response above we believe our contribution to be significant which the other 3 reviewers also seem to be in agreement with. Please let us know if you have any other questions. Thank you.",
" **Definition of stability and faithfulness:**\n\n-- We now have defined both of these terms at the end of the introduction.\n\n\n**indicate for each metric whether higher or lower is better:**\n\n - We now have indicated in Figure 2 caption that lower values for all metrics are better.\n\n\n**for HELOC dataset, it seems that ANS-Basic and ANS are worse than the baselines wrt GI and CI:**\n\n-- We now have discussed this point in the Observations part of Section 4. \n\n\n**for CIFAR10 comparison with LIME is missing:**\n\n-- LIME is present in blue for INFD. It is also present for ASC and QC, but just overplotted by MeLIME since both use all samples and hence have same complexities. We could not compute GI and CI for it since LIME uses superpixels for images which are different for each image and hence it is not possible to apply the same model to neighbors and obtain GI or compare coefficients since the features (i.e. superpixels) themselves are different.\n\n\n**for Cifar10, how do you sample neighbors? in input image space or feature space?**\n\n-- For ANS we sample in the input space where each pixel is a feature.\n\n\n**Tabular data qualitative examples:**\n\n-- Figure 8 in the supplement provides some (randomly selected) qualitative examples for the IRIS dataset, where we again witness the stability of our method.",
" **Report Variances:**\n\n-- We now report standard errors in Figure 9 in the supplement. If you think this is appropriate we will add it to the main paper in the final version. Error bars are larger for IRIS since it is a significantly smaller dataset than the other two (test size = 30).\n\n\n**Limitations not discussed:**\n\n-- Sections F and G in the supplement discuss limitations and societal impact of our approach respectively.\n\n\n**Small scale datasets:**\n\n-- The datasets we used have been commonly used in other explainability works (Ramamurthy et. al. NeurIPS 2020; Dash et. al. NeurIPS 2018; Dhurandhar et. al. arxiv 2022; Arya et. al. JMLR 2020).",
" **method is slower:**\n\n-- This actually is not necessarily the case. This was in the supplement but now we have mentioned in the observations subsection (in Section 4) in the main paper that for ResNet-18 we are actually faster than LIME. The reason for this is that as NN models get deeper inference time is no longer insignificant and hence having to query it fewer times more than compensates for the extra running time of MPSLR schemes. Even for the ResNet-18 case we were about 10 seconds faster per example. This gap should increase as we consider deeper models.\n\n\n**involves choosing multiple hyper parameters like $n$, $\\alpha$:**\n\n-- You are right that $n$ may have to be chosen, however $\\alpha$ can be estimated per equation 1. Even $n$ could be chosen by seeing performance for different values and seeing the behavior as shown in Figures 4-6 in the supplement.\n\n\n**accuracy depends on correctness of MPSLR methods:**\n\n-- As discussed in section F of the supplement, although we want reasonable MPSLR schemes we are mainly affected by error only in one direction. As such, only overestimation of the range (underestimation of the number of linear pieces) is a problem as it may cover non-linearities however, if we are to underestimate the range (overestimate the number of linear pieces) it is largely fine since the black box will still correspond to the correct linear piece in this case.\n\n\n**Defining stability and faithfulness:** \n\n-- We now have defined both of these terms at the end of the introduction.\n\n\n**I wanted to clarify if this is the first work which does the adaptive sampling of the neighborhood:**\n\n-- Thanks for noticing this, which even skipped our minds. Yes, you are right to claim that this might be the first adaptive neighborhood sampling procedure for local black box explainability.\n\n\n**Stability of MeLIME:**\n\n -- Although we are still more stable than MeLIME it is seemingly stabler than other alternatives in some cases. We believe this happens because MeLIME uses realistic perturbations which leads to a better neighborhood thus resulting in superior explanations.\n\n\n**More details on other methods claiming stability:**\n\n-- We now have added some details for methods that target stability in the related work section as indicated by you.\n\n**clarify if the metrics shown in the graphs are averaged over all test examples:**\n\n-- Yes, the results are over all test samples.",
" **minor improvement over existing approaches:**\n\n-- As correctly pointed out by reviewer 8rcz ours is to the best of our knowledge the first adaptive sampling method for local post-hoc black box explainability which is one of the hottest topics in XAI research as it can have significant bearing in appropriating trust in models (Ferrario et. al. FAccT, 2022; Arya et. al. JMLR 2020). As mentioned in the introduction, sampling is the most critical part for generating faithful and stable explanations, since the explanation model itself is fixed (e.g. sparse linear). All other works (mentioned in the paper) have static sampling strategies that are either in the input space or on a manifold. Conceptually, we are the first to realize that manifold or not examples sampled in a neighborhood could belong to different linear parts in a non-linear function such as a deep ReLU network and hence using them to obtain a local explanation through sparse linear or some other simple model fitting can be misleading (mentioned in related work). Theoretically and operationally, we show that our adaptive sampling method can be significantly more query (and sample) efficient which has not just computational but direct monetary significance in today's multi-cloud world as pointed to in the introduction. Moreover, we believe the simplicity of the solution to be a positive as it can be easily implemented and hence more widely used. Equally importantly, this solution was arrived at after giving significant thought to other alternatives such as rejection sampling where one typically has a proposal distribution belonging to the natural exponential family (NEF) that one uses to sample from. However, this option was thoughtfully rejected by us given the two factors mentioned in the Section 5. Hence, although our proposal may seem simple it requires thought, analysis, and careful experimentation to arrive at. Not to mention our approach besides being intuitive has strong motivations grounded in causality and practical behavior of neural networks, which we have now added in Section 5. In the original submission, these were mentioned in the supplement. We hope that these points convince you of our contribution. Thank you.",
" We thank all the reviewers for their constructive comments. We are glad that you found our paper to be well-written, intuitive and novel. Reviewer 8rcz even pointed out (which we missed) that our work might be the first adaptive sampling work in this area. Thanks for recognizing this.\n\n\nBased on the reviewer comments we have made the following updates to the paper. Note that these updates are highlighted in *blue* in the updated paper.\n\n\nMain paper updates:\n1) We have now clarified what we mean by faithfulness and stability.\n2) We have added more details regarding related works that discuss stability.\n3) We have indicated that lower is better for all the metrics in the experiments.\n4) More discussion of observations added in the experimental section.\n5) Moved section B from the supplement to section 5 in the main paper which provides causal as well as a neural network behavioral motivation for our approach.\n\n\nThese were the major updates to the paper. We now address individual reviewer comments.",
" This paper is on improving explainability techniques for machine learning models. A popular technique is based on local linear models that make a locally linear approximation of the ML model around a provided input example for which an explanation is sought. Local linear approximations make use of random samples from around the provided input example to estimate the linear model. However, having samples too close to the input can affect the condition number of the covariance matrix, while on the other hand having the samples spread out too much can lead to an unfaithful model. \n\nThis paper proposes a strategy that chooses the samples in an adaptive manner to ensure that the samples span the locally linear region of the model, leading to a more reliable model. Experimental evaluation shows better performance compared to other locally related baselines. **Strengths**:\n\nThe proposed method samples points adaptively thus not requiring to set a sampling variance.\nThe method also takes into account the uncertainty of the estimated values a_n, b_n for improved robustness. Experimental evaluation shows that it leads to a better performance w.r.t. to a variety of performance metrics.\n\n**Weaknesses**:\nI do not work in explainability for ML, however it seems that the contribution is a minor improvement (the sampling strategy) over existing approaches. As such, the approach is not so novel, in my opinion, even though experimentally it is beneficial. NA NA",
" Many local explanation methods have been developed for explaining black box models in a post hoc fashion. These methods first sample around the example to be explained and build a linear model using the samples to form an explanation. However, the challenge in these schemes is that we do not know if we are sampling the right neighborhood around the example. If the neighbourhood is too small, the linear fitting can be unstable due to a bad condition number and if the neighborhood is too large, the function that these methods approximate may not be linear. This work provides an adaptive sampling procedure where they first estimate the linear region around the example using a few samples and then sample within that region only taking into account the uncertainty of the estimate. This work then shows using experiments on a few tabular datasets, that their procedure is much more query efficient and sample efficient and leads to stable predictions across multiple widths of sampling. The method proposed in this work seems like a very natural and simple idea which gives improvement over the existing methods. The explanations are stable over different widths which is what was desired. \n\nOne weakness is that this method is slower because the method involves running MPSLR schemes and the accuracy of the method depends a lot on the correctness of MPSLR methods. The method is also somewhat complicated to implement and involves choosing multiple hyper parameters like n, \\alpha. \n\nThe paper is generally well written and clear. \n\nThere could be more discussion on the related work. It would be nice if there were more details on how different methods achieve stability in their methods. Also, stability and faithfulness have been mentioned multiple times but it is not clearly defined anywhere what do the authors mean by that. I am guessing stability is defined as the change in explanations as the sampling width increases. But, in any case, these terms should be clearly explained in the beginning. The related work section does not mention any previous work which also does adaptive sampling of the neighborhood, I wanted to clarify if this is the first work which does the adaptive sampling of the neighborhood. \n\nThe stability with respect to kernel widths is shown in the figures in the appendix. I see that Melime is also quite stable across different widths. Can the authors please comment a little more on the Melime method?\n\nI also want to clarify if the metrics shown in the graphs are averaged over all test examples. The authors have adequately discussed the limitations of their work. They have showed that their explanations are much more stable across different sampling widths but as pointed out by the them, there is no way to check if the explanations are more accurate. Moreover, the correctness of their method depends on MPLSR schemes and if the range of the linear segment is found to be incorrect in these schemes, the quality of explanations may degrade. They have also pointed out that their methods is slower as compared to the vanilla methods and have also given a dataset example where their performance is weaker than the vanilla methods.",
" The paper proposes a rejection sampling method for neighborhood-based explanation methods including LIME. It argues that existing neighborhood-based explanation models are quite unstable in terms of the the neighborhood width, which is the boundary range for the local neighborhood and the mean value of a Gaussian distribution where the local neighborhood follows. In order to make the choice of neighborhood width insensitive, it first samples several data points (n) to compute the range $[a_n, b_n]$ and an uncertainty score $\\alpha$ to adaptively learn the neighborhood distribution from $N(\\mu, \\sigma I)$ to $N(\\alpha \\mu + (1-\\alpha)\\frac{a_n + b_n}{2}, \\sigma I)$. Then the LIME algorithm is applied for fitting an interpretable explainable model. It also analyses the sample efficiency and query efficiency compared to a naive version, showing that the method is query/sample-efficient. - Strengths\n - The proposed method is intuitive and relatively clearly explained.\n - The experiment shows that the method is simple and effective.\n - Efficiency analysis is covered in terms of sampling, query and training MPLSR.\n- Weakness\n - The experiments are limited to small-scale datasets.\n\n - It would be nice if the variance can be provided in Figure 2 to show the performance of the proposed method is stably outperforming other methods.\n The limitation is not discussed in the paper. ",
" This paper proposes a technique called adaptive neighborhood sampling scheme (ANS) to make local explanations more faithful, stable, and sample query efficient. ANS is built on multidimensional piecewise linear segmented regression (MPLSR) to indentify boundaries of each linear piece. Pros:\n\n(1) The motivation is clear and the method to use MPLSR is solid to identify linear pieces. \n\n(2) Experimental results show the better fidelity and sample efficiency of the proposed methods.\n\nCons:\n\n(1) I think this paper needs to give a formal definition of \"stability\" and \"faithfulness\".\n\n(2) It would be easier for readers to understand the metrics if the author can explicitly indicate for each metric whether it is higher the better or lower the better.\n\n(3) According to figure 2, for HELOC dataset, it seems that ANS-Basic and ANS are worse than the baselines wrt GI and CI. Also for CIFAR-10, a comparison to LIME is missing. (1) for Cifar10, how do you sample neighbors? in input image space or feature space? (1) It would be better if the author can provide some qualitative examples on the Tabular datasets."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
2,
3
] | [
"8hOFzXRmqoH",
"uHmNV6lxChO",
"4Mvj2eHdn2C",
"6VnykH2hGeI",
"dmuUikECKPg",
"8SHaeoRw0D3",
"dv-84d3z5PC",
"NPYMtJUPLxW",
"nips_2022_lJHkZbX6Ic1",
"nips_2022_lJHkZbX6Ic1",
"nips_2022_lJHkZbX6Ic1",
"nips_2022_lJHkZbX6Ic1",
"nips_2022_lJHkZbX6Ic1"
] |
nips_2022_ZQcpYaE1z1r | A Quantitative Geometric Approach to Neural-Network Smoothness | Fast and precise Lipschitz constant estimation of neural networks is an important task for deep learning. Researchers have recently found an intrinsic trade-off between the accuracy and smoothness of neural networks, so training a network with a loose Lipschitz constant estimation imposes a strong regularization, and can hurt the model accuracy significantly. In this work, we provide a unified theoretical framework, a quantitative geometric approach, to address the Lipschitz constant estimation. By adopting this framework, we can immediately obtain several theoretical results, including the computational hardness of Lipschitz constant estimation and its approximability. We implement the algorithms induced from this quantitative geometric approach, which are based on semidefinite programming (SDP). Our empirical evaluation demonstrates that they are more scalable and precise than existing tools on Lipschitz constant estimation for $\ell_\infty$-perturbations. Furthermore, we also show their intricate relations with other recent SDP-based techniques, both theoretically and empirically. We believe that this unified quantitative geometric perspective can bring new insights and theoretical tools to the investigation of neural-network smoothness and robustness. | Accept | All the reviewers agree that the paper is novel and interesting and it should be accepted. Please take into account the reviewers' comments while preparing the camera-ready version, particularly the ones on the clarity of the paper. | train | [
"hEjhb-6ix8W",
"df7EmIJvtbp",
"YK2UDZW9qeH",
"QJZyQAD96m6",
"lT7Is6FsHFN",
"tU_-GkdbeB",
"EEG_ckDPNLN",
"akn03o95q-H",
"yypPzzRiG6q",
"keIXIrqAIOM",
"Q-tJAJ0gag",
"I38kHNA-GAI",
"omxrrEh-xwN",
"VXZkdm7Vrzb",
"GRw98_X-XwB",
"T_L6oMlPaQ",
"UEd1w89AlHM",
"3qDyjcn06tx",
"oZ9eIBt2EKX",
"1RefYRy8i2g"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We appreciate your continuing engagement in the discussion. \n- Changing the underlying geometry: yes.\n\n- Evaluation in the context of adversarial robustness: Thanks for the clarification. Given the limited time remaining for the discussion, we would not be able to provide additional experimental results. For example, we will need to compute 45 (10 choose 2) pairwise margin Lipschitz constants for each MNIST network, which is time-consuming to solve for the tools we are considering. Our existing experimental evaluation provides strong evidence to support the claims made in our work and is easy to reproduce.\n\n- Additional question: For $p=2$, the dual extension will give us exactly LipSDP-neuron. Because we give a compositional quadratic program interpretation of LipSDP-neuron, we can extend the structure in LipSDP-neuron to many new settings, and encode new computation structures beyond DNN. In contrast, LipSDP proposed an SDP as a whole, known as LipSDP-network, and then devised LipSDP-neuron as a simplified variant of LipSDP-network. Recently LipSDP-network was shown to fail to produce an upper bound of the Lipschitz constant, which would be an intuitive result under our framework. We will clarify this fact in Remark 4.3.\n\nWe are happy to address any further concerns and questions. If our responses have addressed all the concerns and the revised paper has improved the merit of our work, we would appreciate an increased rating.\n\n",
" Thank you for clarifying these points. I read and appreciated your answer. I have also read the other questions raised by all reviewers and the following discussions.\nI do not have any further question for now.",
" We appreciate your continuing engagement in the discussion. \n\nRegarding the measurement of sampling, it is possible that the network which we can brute-force search has only a few activation nodes (8 or16), and sampling many (200,000) inputs can activate all or most of the patterns. However, for many activation nodes, it is infeasible to have a brute-force enumeration of all the activation patterns, so we do not have groud-truth information. Sampling has no guarantee whether it can activate all patterns unless we have sampled all possible inputs, which is also impractical. We will add a discussion on this observation.\n\nAll reviewers have provided presentational suggestions, and we have incorporated them into our updated paper. We appreciate all the advice and are more than happy to take further suggestions that can improve our paper. \n\nWe believe that our work has lots of theoretical insights and practical implications (which is also hard to include in a paper with a small page budget), especially on why we can unify the $\\ell_2$ and $\\ell_\\infty$-perturbations. On Lipschitz constant (upper bound) estimations, we do not know any previous works that can achieve this. We hope that our approach, in particular, the application of tools from quantitative geometry that characterize the relationship between metrics, can bring awareness to the relationship between metric geometry and deep learning, and bring more ideas to the investigation of neural networks.",
" I thank the authors for considering my comments and answering my questions seriously.\n\n- Questions: OK\n- Limitation 1: I understand that the proposed methods apply activation functions with bounded derivatives and are not restricted to ReLU.\n- Limitation 2: I understand that theoretical guarantees for multi-layer NNs are an open problem and therefore, their non-existence is not a big negative point.\n- Comparison with sampling: I understand we cannot directly compare the proposed method with the sampling because it just gives a lower bound of the Lipschitz constant. \n- Organization: OK\n- Grothendieck constant: OK\n- Qualitative geometric principle: I thank the authors for the explanation. If my understanding is correct, the authors claim that we can treat the Lipschitz constant estimation problems for different $p$ by changing the underlying geometry.\n- Evaluation in the context of adversarial robustness: I appreciate the authors providing additional experiment results. My original intention was to evaluate how the proposed methods have a good effect on the performance of target NNs as machine learning models (e.g., prediction, estimation, generalization). I am sorry for the confusion. I understand that the proposed method gives a good Lipschitz constant estimation in the adversarial training setting.\n\nAdditional question: Section 4.2 extends the dual SDP problem to the multi-layer NNs for $p=\\infty$. Is it possible to do a similar extension for the $p=2$ setting?",
" Thanks for pointing this out. A missing discussion or comparison with sampling methods is not my main criticism of the paper but rather its relatively poor presentation. I just got struck by the fact that the sampled approximations of the Lipschitz constant precisely agree with the brute force estimations whereas the GeoLIP method yields a conservative upper bound in these cases.",
" Thanks for your response and the clarifications. \n\nRegarding the sampling issue: I am well aware that this only provides a lower bound and that smaller values in the tables don't indicate better performance. Nevertheless, I was surprised to see that sampling gives precisely the same values as the brute force estimation in table s 1 and 3 whereas GeoLIP and the other approaches give strictly larger upper bounds. \n\nI would appreciate if the revised version of the paper would have an improved structure and explanations as clear as in the general response of the authors.",
" Thanks for your continuing effort in our discussion. If our understanding of GroupSort activations is correct, we can encode GroupSort activations in quadratic relations.\n\nBecause GroupSort activates on pairs of consecutive neurons, and the semantics of group sort is to either preserve a fixed pair or switch the pair, the FGL is a maximization problem over all possible switching or keeping the pairs.\n\nLet $\\Delta x$ and $\\Delta y$ denote the input perturbations on a pair of neurons, and $\\Delta u$ and $\\Delta v$ be the output perturbation of the pair of GroupSort neurons. The relations we want to encode is $\\Delta u = \\Delta x$ and $\\Delta v = \\Delta y$, or $\\Delta u = \\Delta y$ and $\\Delta v = \\Delta x$.\n\n1. It is not hard to encode $\\Delta u = \\Delta x$ or $\\Delta u = \\Delta y$: $(\\Delta u - \\Delta x)(\\Delta u - \\Delta y)=0$.\n\n2. Similarly, we can have $(\\Delta v - \\Delta x)(\\Delta v - \\Delta y)=0$ for $\\Delta v = \\Delta x$ or $\\Delta v = \\Delta y$.\n\nThen we need to make sure $\\Delta u = \\Delta x$ and $\\Delta v = \\Delta y$, or $\\Delta u = \\Delta y$ and $\\Delta v = \\Delta x$, not other cases (for example, $\\Delta u = \\Delta x$ and $\\Delta v = \\Delta x$ also satisfies the two constraints above). Notice that if $\\Delta x = \\Delta y$, then we are already done. We need to consider when $\\Delta x \\neq \\Delta y$. We can add an extra constraint: $(\\Delta u - \\frac{\\Delta x+\\Delta y}{2})(\\Delta v - \\frac{\\Delta x+\\Delta y}{2})\\leq 0$. This constraint would make sure that $\\Delta u \\neq \\Delta v$ given $(\\Delta v - \\Delta x)(\\Delta v - \\Delta y)=0$ and $(\\Delta u - \\Delta x)(\\Delta u - \\Delta y)=0$.\n\nNotice that to apply Shor's relaxation [Chapter 4.3.1, 1], we need inequality relations, so $a=b$ will be two constraints: $a\\geq b$ and $a\\leq b$. As a result, we need five constraints for each pair of neurons, and then introduce five dual variables in Shor's relaxation, instead of one dual variable for one neuron in the elementwise activation cases.\n\nFor the elementwise quadratic encoding, we would have $(\\Delta u-a\\Delta x)(\\Delta u- b\\Delta x)\\leq 0$, corresponding to $a \\leq\\frac{\\Delta u}{\\Delta x}\\leq b$ (in the ReLU case, $b=1$ and $a=0$). For pairwise neurons, $\\Delta u$, $\\Delta v$, $\\Delta x$ and $\\Delta y$ are entangled. $\\Delta x$ and $\\Delta y$ are still the linear transformations from the previous layer output perturbations as in the elementwise activation network case; i.e., $\\Delta x = w\\cdot \\Delta z$, where $w$ is the weight vector corresponding to the neuron, and $\\Delta z$ denote the output perturbation from the previous layer.\n\n[1] Aharon Ben-Tal and Arkadi Nemirovski. 2001. Lectures on Modern Convex Optimization. Society for Industrial and Applied Mathematics. https://doi.org/10.1137/1.9780898718829",
" As a fellow reviewer, I'd like to add that I agree with authors regarding the limitations pointed out by reviewer xmVx.\n\nSampling can only yield lower bounds. So, it is fact complementary with approaches based on SDP relaxations. \n\nIt does not make sense to compare sampling and SDP. They need to be used in conjonction to \"sandwich\" the true value of the Lipschitz constant.\n\nHence, the paper is a good contribution on its own that does not require comparison with sampling. The closeliness between true Lipschitz constant, sampling, and GeoLip bound is clearly in favor of the paper.\n\n",
" Thank you for your precisions.\n\nI was curious about **GroupSort** activation function. It is defined as follow :\n\n$\\text{GroupSort}(x_1,x_2)=(\\max(x_1,x_2),\\min(x_1,x_2))$\n\nLike ReLU it is piecewise affine, but operate on **pairs** of consecutive neurons.\n\nIt is more expressive than ReLU. See the paper below:\n\nTanielian, U. and Biau, G., 2021, March. Approximating Lipschitz continuous functions with GroupSort neural networks. In International Conference on Artificial Intelligence and Statistics (pp. 442-450). PMLR.\n\nIt is used to parametrized 1-Lipschitz NN since the Jacobian is orthogonal. It avoids vanishing gradients and benefit from universal approximation theorems.\n\nDo you believe a quadratic program can encode those activations ?",
" We appreciated the thoughtful review, and are available to answer any questions.",
" We appreciate your detailed and rigorous review a lot. Here are our responses to the questions and concerns:\n\n- l103: We removed it.\n\n- l107: We expanded the multiplication in the equation.\n\n- l160: We added a definition in Appendix A.2.\n\n- l169 eq (7): We removed $\\cdot$.\n\n- l209: We modified the notation to avoid confusion. Now we have $f(x) = u\\sigma(y)$.\n\n- l563: This holds because for $x\\in \\mathbb{R}^n$, ||x||_1= |x_1|+...+|x_n| = \\max_{y\\in{-1, 1}^n}\\langle x, y \\rangle. See **C6. Intervals or vertices on the hypercube** in the general response section and Appendix A.1 paragraph *Maximum over hypercube* in the updated paper for more clarification.\n\n- Limitation 1: the proposed method does not only apply to ReLU-DNNs. The essential reason is that our quadratic constraint is compositional, and each of the constraints encodes a neuron computation. See **C2. Applicability of the SDPs** for more discussion.\n\n- Limitation 2: No theoretical guarantees for multi-layer networks. We do not know theoretical guarantees for multi-layer networks, though the dual program exploits the low-rank structure of the FGL-estimation problem. It is not clear to us how this implies a theoretical guarantee. We leave it as an open problem. **C4. Multi-layer network theoretical guarantee** provides more discussion.\n\n- Baseline method by sampling: sampling is expected to produce a lower value, and we used it as a sanity check, especially because LipSDP was shown to fail to produce an upper bound [1], and we want our upper bound to be at least sound. See **C5. Sampling** for more clarification. \n\n- Organizations: we reorganized section 3.1, and plan to isolate section 4.1 into a single section when more space is allowed. \n\n- The Grothendieck constant: $K_G$ is independent of the Hilbert space $H$ and the matrix $A$. That is why the Grothendieck inequality implies a universal $K_G$-approximation guarantee. If we impose additional assumptions on $A$ or $H$, the approximation ratio can be smaller.\n\n- The quantitative geometric principle: The SDP relaxation presented in section 3 is the precise SDP relaxation of a nonconvex optimization problem, and the approximation guarantee comes from the underlying geometric inequalities. As for the principle, we interpret LipSDP-neuron for $\\ell_2$-FGL estimation as a relaxed compositional quadratic program. To transfer the techniques to the $\\ell_\\infty$ perturbations, we only changed the perturbation geometry encoding. In contrast, [2, 3] scaled LipSDP's result by $\\sqrt{d}$ when transferring LipSDP's result to the $\\ell_\\infty$ setting, which we believe was a wrong transfer. See **C3. How is quantitative geometry related** for more discussion.\n\n- Evaluation in the context of adversarial robustness: Lipschitz continuity, an essential mathematical property of a function, plays an important role on many topics, including adversarial robustness and learning theory. Measuring the Lipschitz constant is a self-contained problem. For example, [3,4] did not evaluate the Lipschitz measurement in the context of adversarial robustness. The evaluation of [2,5] in terms of adversarial robustness was to measure the Lipschitz constant of adversarially trained networks, so we conducted a similar experiment to measure the Lipschitz constant of PGD-adversarially trained networks. 
Selected results are presented below, and the code has been pushed to the repository.\n\n| Network | DGeoLIP | NGeoLIP | LiPopt | MP | Sample |\n| ----------- | ----------- | ----------| ----------| -----------| -------|\n|2-layer ,128 units, normally trained| 361.75 | 361.75 | 741.23 | 2049.77 | 294.66\n| 2-layer, 128 units, adversarially trained | 54.49 |54.49 | 133.52 | 419.11 | 39.12\n\n| Network | DGeoLIP |MP | Sample |\n| ----------- | ----------- | ----------| ----------|\n|7-layer, 64 units per hidden layer, normally trained| 3782.94 |$1.123 * 10^7$ | 924.59\n|7-layer, 64 units per hidden layer, adversarially trained | 424.68 | $2.598 * 10^6$ | 58.85\n\nIt is easy to see that PGD-adversarial training strongly regularizes the network Lipschitzness.\n\n[1]Patricia Pauli, Anne Koch, Julian Berberich, Paul Kohler, and Frank Allgöwer. 2022. Training Robust Neural Networks Using Lipschitz Bounds. IEEE 31 Control Systems Letters 6 (2022), 121–126. \n\n[2]Matt Jordan and Alexandros G Dimakis. 2020. Exactly Computing the Local Lipschitz Constant of ReLU Networks. NeurIPS 2020\n\n[3]Fabian Latorre, Paul Rolland, and Volkan Cevher. 2020. Lipschitz constant estimation of Neural Networks via sparse polynomial optimization. ICLR 2020. \n\n[4]Kevin Scaman and Aladin Virmaux. 2018. Lipschitz Regularity of Deep Neural Networks: Analysis and Efficient Estimation. NIPS’18. \n\n[5]Mahyar Fazlyab, Alexander Robey, Hamed Hassani, Manfred Morari, and George Pappas. 2019. Efficient and Accurate Estimation of Lipschitz Constants for Deep Neural Networks. NeurIPS 2019",
" We appreciate your careful review and are excited that you found our paper helpful. Here are our responses to the questions and concerns:\n\n- We have updated the paper with respect to the forward network notations. Hopefully, this improves the presentational weakness. See **C1.Notational confusion for forward networks** in the general response section for more clarification. We also provide a brief overview of our work in the general response section, which might be helpful to clarify the weakness concern.\n\n- Adaptation to non-elementwise activation functions: In the most general sense, we only need to write the problem of interest as a quadratic program, and then we can apply Shor's relaxation to obtain an SDP. See **C2. Applicability of the SDPs** in the general response section for more discussion. Whether we can extend the (dual) SDP to non-elementwise activations should depend on what the activation is, and whether we can use quadratic constraints to encode the activation. We are more than happy to discuss concrete examples if there are any.\n\n- Multi-layer theoretical guarantees: We do not know theoretical guarantees for multi-layer networks, even though our dual interpretation is the precise SDP relaxation (in the dual sense) of the FGL estimation, and provides a practical algorithm. However, it is unclear how the dual interpretation can provide a theoretical guarantee, and we leave this as an open problem. See **C4. Multi-layer network theoretical guarantee** in the general response section for more discussion. \n\n- Multi-layer $\\ell_2$-norm experiments: you are right because we do not have a duality program for multi-layer $\\ell_2$-FGL estimation; also there are few benchmark tools for $\\ell_2$-FGL estimation after LipSDP was known. The experiment on $\\ell_2$-FGL for multi-layer networks would be comparing LipSDP-neuron with sampling and matrix-norm product, and we should not take credit for LipSDP-neuron's empirical excellence. ",
" We appreciate your review. Here are our response to the concerns:\n\n- In the general response section, we provide a brief explanation and overall contribution of this work. We hope it can help clarify some of the concerns on weakness.\n\n- For questions related to the forward network notations. We apologize for the notational confusion and have already updated the paper. See **C1.Notational confusion for forward networks** in the general response section for more explanation.\n\n- [a, b] versus {0, 1}: [a, b] is used to denote the derivative range of the activation function, and {0, 1} are the (almost everywhere) derivative values of ReLU. We have changed the presentation on {0, 1} to [0, 1] to avoid unnecessary technical details in the main paper, and provide more explanation why we can use {0, 1} in Appendix A1. See **C6. Intervals or vertices on the hypercube** in the general response section for more discussion.\n\n- Sampling seems the best way to estimate the Lipschitz constant: sampling can only provide a lower bound of the true Lipschitz constant, and it is supposed to be lower than the upper bound computed in our experiments, so this is used as a sanity check. See **C5. Sampling** in the general response section for more discussion.\n\n- Related works on Lipschitz regularization: Lipschitz regularization and Lipschitz measurement of neural networks are related, but different problems and subjects. We will add a paragraph of Lipschitz regularization in the paper and cite those works once we are allowed more space in the main paper.\n\n- Negative societal impact: we discussed this in the checklist, and add a section in the updated appendix.\n\nWe are happy to address any further concerns and questions.",
" We really appreciate your detailed and professional review. Here are our responses to the questions and comments:\n\n- Q1: GeoLIP comes from the geometric ideas applied in this work. **C3. How is quantitative geometry related** in the general response section provides more discussion.\n\n- Q2: We are not very sure about the question. The result from NGeoLIP is the returned value from the Matlab solver to the corresponding SDP program, independent of the approximation result, i.e., even if we were unaware of the approximation guarantee, the experimental result would still stay the same. The SDP relaxation means maximizing over a larger space, so the maximum is also larger than the unrelaxed program, which provides an upper bound, but we do not usually know how good this upper bound is. The approximation guarantee indicates that this upper bound is not much larger than the maximum of the original program (see appendix A2 in the updated paper). If this does not answer your question, we are happy to provide more information.\n\n- Q3: In this case, the strong duality holds. The natural relaxation programs in section 3 are strictly feasible because the identity matrix is a positive definite solution, so Slater’s condition holds. We add this fact as a remark (Remark 4.2) in the updated paper. We have run a few hundred pairs of SDP programs during the development of this work, and all the dual pairs produce the same results, up to a negligible numerical difference from the solver’s numerical tolerance. (Each MNIST network produces 10 pairs of $\\ell_\\infty$ programs and 10 pairs of $\\ell_2$ programs because there are 10 predictions.)\n\n- Q4: Extending the SDPs in section 4 to larger networks is easy when it comes to composing the SDP, because our program reasoning technique is compositional, and it is straightforward to write the CNN or residual connections as quadratic constraints. We can then apply Shor’s relaxation scheme to derive an SDP. The bottleneck is solving the SDP. \nThere are two ways of improving the SDP solving: The first is to exploit the chordal sparsity [1,2], to decompose a large SDP constraint to a few smaller ones. The second is to implement faster matrix operations, such as multiplication and inverse, with respect to the SDP constraint matrix block structure, as proposed in [3]. \n\n- Convention: The upper indices are necessary when we introduce multiple layer networks in section 4.2, paragraph *Multi-layer extension*. If we used lower indices, it would be confusing to distinguish which matrix and vector we are referring to.\n\n- ref for the majoration of KG and its link to eq (5): Krivine showed that $K_G\\leq \\frac{\\pi}{2 \\ln(1+\\sqrt{2})} = 1.782…$, and Braverman et al. showed that $K_G< \\frac{\\pi}{2 \\ln(1+\\sqrt{2})} $. One can view the SDP relaxation in eq (5) as the sum of the inner product of vectors, whose Euclidean norms are 1. Because $X\\succeq 0$, $X=M*M^T$, where $M\\in \\mathbb{R}^{(n+m)\\times d}$ for some $d\\geq 1$. $u_i$ are the first $n$ row vectors of $M$, and $v_j$ are the last $m$ row vectors of $M$, so $X_{ij} = \\langle u_i, v_j\\rangle$ for $i\\leq n$ and $j\\geq n+1$. $X_{kk}=1$ means $||u_k||=1$ for $1\\leq k\\leq n$ and $||v_k||=1$ for $n+1\\leq k\\leq n+m$. Thus, $K_G$ in Theorem 3.1 is the approximation guarantee. Notice that if $d=1$ in $M$'s dimension, the SDP coincides with the combinatorial problem, because the inner product degenerates to the multiplication of two scalars. 
So the SDP relaxation can be viewed as a continuous relaxation of a discrete problem, and the inequality quantifies this geometric transformation.\n\n- In all generality, the dual of ℓ∞ is ill-defined: we agree with this if the underlying space is infinite-dimensional, and we need to impose extra conditions on the functional. Is this the concern?\n\nWe have edited the paper as advised, and appreciate these suggestions.\n\n[1]Matthew Newton and Antonis Papachristodoulou. 2021. Exploiting Sparsity for Neural Network Verification. In Proceedings of the 3rd Conference on Learning for Dynamics and Control (Proceedings of Machine Learning Research, Vol. 144, PMLR, 715–727) https://proceedings.mlr.press/v144/newton21a.html\n\n[2]Anton Xue, Lars Lindemann, Alexander Robey, Hamed Hassani, George J. Pappas, and Rajeev Alur. 2022. Chordal Sparsity for Lipschitz Constant Estimation of Deep Neural Networks. https://doi.org/10.48550/arxiv.2204.00846\n\n[3] Patricia Pauli, Niklas Funcke, Dennis Gramlich, Mohamed Amine Msalmi, and Frank Allgöwer. 2022. Neural network training under semidefinite constraints. https://doi.org/10.48550/ARXIV.2201.00632",
" ### C1.Notational confusion for forward networks\n We attempted to use a simplified presentation of the network, but it was incorrect and created confusion. The forward network notation was not directly related to our algorithms or theory, and our intention was to specify the dimensions of matrices in the network from the notation. For line 103, there is no diag. For two-layer networks, we intend the network to be $f(x) = u \\sigma (Wx + b_1)$, and using $y$ to denote the values of $\\sigma’$ at the hidden layer; and similarly for multi-layer network in section 4.2. We apologize for this confusion, and have already updated the paper.\n\n### C2. Applicability of the SDPs\nWe encode the FGL estimation as a compositional quadratic program, and then apply Shor’s relaxation to the program. It is easy to see that we can extend this analysis to other activations such as ELU and sigmoid, just as LipSDP; and structures like convolutional layers and skip connections. We can use the SDP to reason any neural-network property that can be written in the compositional quadratic program form. For example, if we want to know the output sensitivity to a single input change, we can substitute the perturbation constraint in the program with the single-input-perturbation encoding. \n \n### C3. How is quantitative geometry related\nGeoLIP comes from the fact that we are realizing geometric techniques to estimate the Lipchitz constant, for example, our approximation results come from the geometric inequalities: Grothendieck (for $\\infty\\rightarrow 1$) and little Grothendieck (for $\\infty\\rightarrow 2$) inequalities [5], and their computational implications [6].\n\nThe specific example of the quantitative geometric principle considered in this paper is how we transfer LipSDP-neuron to the $\\ell_\\infty$-perturbation. We interpret LipSDP-neuron as Shor’s relaxation for a quadratic program, and the quadratic program exactly encodes the underlying perturbation geometry, neural network computation, and the Lipschitz objective. To transfer the LipSDP-neuron to the $\\ell_\\infty$-setting, we only need to encode a different geometry, and the rest remains the same. Our work is an example that deep learning is connected to metric geometry, and we believe that this perspective can bring more theoretical and mathematical tools to the investigation of neural networks, especially when the underlying space is equipped with different metrics.\n\n### C4. Multi-layer network theoretical guarantee\nWe do not know the theoretical guarantees for multi-layer networks. As discussed in Appendix B, if we interpret the FGL estimation from the polynomial-optimization perspective, this can be viewed as the tensor-norm problem, which the theory community does not know whether is easy or hard in the approximability sense. However, because of the low-rank structure, the FGL estimation can be easier than the general tensor-norm problem. Our perturbation analysis in Section 4 can be viewed as exploiting this structure in practice. We leave the theoretical guarantee for multi-layer networks as an open problem.\n\n### C5. Sampling\nNotice that sampling can only give a lower bound on the Lipschitz constant, while we need an upper bound as in most works [1,2,4]. We use sampling as a sanity check to ensure that the SDP method is at least sound and indeed provides an upper bound of the FGL. For example, LipSDP was shown to fail to produce an upper bound by [2]. 
Note that it is not the case that if the number is smaller in the table, the measurement is better. In the extreme case, if we only sampled one input and computed its gradient operator norm, we would have a very small number. However, it is useless and does not reflect the stability of the neural network.\n\n### C6. Intervals or vertices on the hypercube\nIn practice, the interval and vertex representations of cubes do not make any difference in the paper. The algorithm and approximation results remain the same. The difference is the MAXSNP result (whether we can build a reduction to a combinatorial problem) and whether we can provide a ground truth of FGL in the evaluation, i.e., the brute-force method for hypercube vertices. To avoid unnecessary technical details, we change the paper slightly: we use [0, 1] instead of {0, 1} for ReLU’s derivative, which is also consistent with the forward perturbation analysis (section 4); and point out that for the two-layer case, [0,1] is equivalent to the {0, 1} in the maximization problems considered in the paper. The equivalence is provided in Appendix A1.\n\n[5]Holden Lee, Assaf Naor, and Kiran Vodrahalli. 2016. Metric embeddings and geometric inequalities (Lecture Notes). https://web.math.princeton.edu/~naor/mat529.pdf\n\n[6]Vijay Bhattiprolu, Euiwoong Lee, and Madhur Tulsiani. 2022. Separating the NP-Hardness of the Grothendieck Problem from the Little-Grothendieck Problem. In 13th Innovations in Theoretical Computer Science Conference (ITCS 2022). https://doi.org/10.4230/LIPIcs.ITCS.2022.22\n",
" We thank all reviewers. Some reviews point out our paper is not well organized. Because our paper has lots of content and we have a relatively small page budget, our presentation is not perfected and some details are omitted. We update the paper (**in the supplementary material**) as the reviews suggest. Due to the page limit, we keep some of the edits in the appendix. Some organizational changes in the main text have not been updated yet. We will update the main paper if it is accepted and we have an extra page. Here we provide an intuitive explanation and the overall contribution of our work, and then address some common concerns.\n\n### Overview\nOur goal is roughly to upper bound the operator-norm of all possible gradients. For a given input, we know its activation pattern, so we can compute the operator-norm of the gradient at that point. Note that the activation pattern of an input is not only decided by the weight matrix, but also the bias term. That’s why if any optimization program does not utilize the bias term, it cannot produce the true Lipschitz constant. However, analyzing the activation pattern for all inputs is infeasible, and we know that the derivative at each hidden node has a fixed range, so we can upper bound the true Lipschitz constant with the Formal Global Lipschitz constant (FGL), which is the maximum of the gradient operator norm assuming all activation patterns are independent and possible even though some of them are not in reality. \n\nNote that in this work, we have two relaxations: the first is to relax the true Lipschitz constant to FGL, and then use SDP to relax the FGL estimation.\n\nOn two-layer networks, we reduce the FGL estimation to the mixed-norm problem, by devising a cube transformation technique. This allows us to apply well-studied theoretical techniques (natural SDP relaxations) and results, and also hints that on two-layer networks, it is unlikely to provide more precise estimations on FGLs within polynomial time. \n\nOn multi-layer networks, we interpret the derivative at a hidden node as $\\frac{\\Delta \\sigma(y)}{\\Delta y}$, and restate the FGL optimization problem as a quadratic program. We then apply Shor’s relaxation to relax the quadratic program to an SDP. Notice that our reasoning for transforming a neural network to a quadratic program is compositional, and not restricted to DNN or $\\ell_p$-norm perturbations. This is a novel program-analysis technique for data-independent perturbation analysis of neural networks, and has more applications.\n\n ### Contributions\n\n1. Theoretics: We connect the FGL estimation problem with the mixed-norm problem, which establishes the theoretical aspect of the FGL estimation such as its computational hardness and approximability. Moreover, we reveal the hidden connections between [1] and LipSDP-neuron, and also hint that [1] and LipSDP-neuron are likely optimal within their application scope, so LipSDP-network might be wrong, which was confirmed by [2]. \n\n2. Practicality: We give a program-analysis interpretation to LipSDP-neuron, which allows LipSDP-neuron applicable to broader settings. Our data-independent perturbation reasoning is compositional, and can be extended to other analyses or structures. In particular, we only need to write the neural-network property of interest as a quadratic program. \n\n3. Relevance: We are one of the few known techniques that can work for both $\\ell_2$ and $\\ell_\\infty$ perturbations. 
We hope that our high-level idea can benefit future works on transferring techniques between metric spaces. There are also several optimization works [3,4] studying the structure proposed in LipSDP. However, given [2] refuted LipSDP, these works might appear ungrounded. Our work reestablishes the correctness of LipSDP-neuron, and shows that [3,4] potentially have broader impacts.\n\n[1] Aditi Raghunathan, Jacob Steinhardt, and Percy Liang. 2018. Certified Defenses against Adversarial Examples. In International Conference on Learning Representations. https://openreview.net/forum?id=Bys4ob-Rb\n\n[2]Patricia Pauli, Anne Koch, Julian Berberich, Paul Kohler, and Frank Allgöwer. 2022. Training Robust Neural Networks Using Lipschitz Bounds. IEEE 31 Control Systems Letters 6 (2022), 121–126. https://doi.org/10.1109/LCSYS.2021.3050444\n\n[3]Matthew Newton and Antonis Papachristodoulou. 2021. Exploiting Sparsity for Neural Network Verification. In Proceedings of the 3rd Conference on Learning for Dynamics and Control (Proceedings of Machine Learning Research, Vol. 144, PMLR, 715–727) https://proceedings.mlr.press/v144/newton21a.html\n\n[4]Anton Xue, Lars Lindemann, Alexander Robey, Hamed Hassani, George J. Pappas, and Rajeev Alur. 2022. Chordal Sparsity for Lipschitz Constant Estimation of Deep Neural Networks. https://doi.org/10.48550/arxiv.2204.00846\n",
" The article tackle the estimation of the Lipschitz constant of a neural network for norms $p \\in \\{ 2, \\infty \\}$. The authors rely on previously existing SDP techniques and propose new formulations that give new theoretical insights on Lipschitz estimations.\nIndeed, the authors bound the approximation of the polynomial SDP relaxation, and provide formulation for norms $2$ and $\\infty$.\nThe theory is developed for NN with one hidden layer and then extend to the general case, giving a new algorithm called GeoLip.\nFinally, the paper provide experimental results that shows that GeoLip outperforms in accuracy and time previous state of the art methods on multi-layers straightforward neural networks. The paper is clear, well written and properly motivated. The other approach in the literature are well discussed and the paper is honest at comparing itself. Overall, I really enjoyed reading this paper.\n\nIn all generality, the dual of $\\ell^\\infty$ is ill-defined (l120) and requires some more hypothesis, even though it does not matter here.\n\nl131:\n In the problem description describe what is $y$ (one line it is $\\sigma$, the next one it is $\\sigma'$ if I get it) and explain the simplification made from the definition of neural network above (no bias).\n Use notations defined before, what is y here?\n Could we write f(x) = W2 sigma (W1 x), which would then reduce to eq(4) directly thanks to eq(3), while preserving the notations introduced before.\nl147: ref for the majoration of $K_G$ and its link to eq (5).\n\n\n* Conventions\n\nGlobally, why upper indices? In the paper it seems that lower indices should be enough and this would make the paper even clearer.\n\n* Typos:\n - l92: \"X [is] positive semidefinite\";\n - l88,92,146: Sentences beginning with a symbol;\n - l103: no diag in the equation;\n - l109: almost everywhere (no parenthesis);\n - l112: say.\n\nCheck the end of sentences in particular after formulas as the punctuation is very often missing or incorrect (e.g. sentence finishing with a ','). In particular in the Appendix where the punctuation is chaotic.\n\nReferences: many references are incomplete (lacks conference/journal) Where does the name 'GeoLIP' comes from?\n\nIn the experimental section, are NGeoLIP results taking into account the approximation bound in order to be upper-bounds?\n\nIt would be interesting to show an example in which NGeoLIP and DGeoLIP give different bounds in order to better assess the advantage of using a (fast) approximation algorithm.\n\nWould it be possible to extend the current approach to \"much bigger\" feed-forward network such as CNN or residual nets? It seems it is a difficult question as is, but would specific SDP solving strategies exploit the characteristics of these operations instead of their matrix form (which would make it intractable)? Yes",
" This paper proposed a new method for estimating the Lipschitz constant (more specifically, the ($\\ell^p$-) Formal Global Lipschitz (FGL) constant) of DNNs.\nFirst, this paper analyzed two-layer ReLU-DNNs and showed that $\\ell^{\\infty}$-FGL estimation is MAXSNP-hard (Theorem 3.2). Then, by relaxing the FGL-estimation problem and reducing it to SDP, this paper derived a polynomial algorithm with an approximation ratio $K_G$ (Theorem 3.3) for general $p=\\infty$ and $\\sqrt{\\frac{\\pi}{2}}$ for $p=2$ (Theorem 3.4).\nNext, the $\\ell^p$-FGL estimation problem of two-layer NNs was formulated as the maximization of perturbation of intermediate units with respect to that of the input. Then, this paper showed that this formulation is equivalent to the first formulation (l.211) and is dual to LipSDP (l.212--220) when $p=2$. In addition, this paper extended it to multi-layer NNs.\nFinally, this paper evaluated the proposed method numerically and claimed that the proposed method performed tighter estimation than existing methods for the $\\ell^{\\infty}$-estimation problem of two-layer and multi-layer DNNs. Strengths\n\n- NGeoLip proposed in this paper has theoretical guarantees for the FGL-estimation problem of two-layer NNs.\n- DGeoLIP proposed in this paper applies to multi-layer NNs, while competitive methods are limited to two-layer NNs.\n- SDP allows a unified approach to $\\ell^2$-FGL and $\\ell^\\infty$-FGL.\n\nWeaknesses\n\n- This paper is intended to apply the Lipshitz constant estimation to mitigate the vulnerability against adversarial attacks. However, this paper did not experimentally evaluate the effectiveness of the proposed methods in adversarial attacks.\n- There is room for discussion on whether the proposed method sufficiently demonstrates its effectiveness (either theoretically or empirically) to the estimation problem in the multi-layer settings.\n- There is room for improvements in the organization of the paper (see Clarity section).\n\n\nOriginality(Novelty)\n\n- As this paper mentioned, existing literature employed the idea of reducing the Lipschitz constant estimation problem to SDP (l.55). However, this paper adopted this idea differently and proposed different algorithms.\n- The authors claimed their work is guided by the principle that we should separate geometry-dependent and independent components. The authors think that this is a crucial idea underlying this paper. However, it was not clear to me how this principle works. For example, in Sections 3.1 and 3.2, $\\ell^\\infty$-FGL and $\\ell^2$-FGL were first reduced to the mixed-norm problem and then were analyzed separately. In this example, is it correct to understand that the reduction to the mixed-norm problem is the geometrically-independent part? I suggest the authors write how they applied this principle to actual algorithms. \n\n\nQuality(Soundness)\n\n- As far as I have checked, the mathematical statements and proofs are correct all in all.\n- The details of the experimental setup are described in the Appendix. I can confirm the validity of the experiment.\n- l.144: On which does the constant $K_G$ depend? Does $K_G$ depend on the Hilbert space $H$?\n\n\nClarity(Presentation)\n\n- There is room for improvement in the organization of the paper. However, I think it does not significantly impact the paper's evaluation because I expect the authors can correct it quickly.\n- I found Section 3.1 to be a bit difficult to read. 
Specifically, this section (1) explained the SDP relaxation for the matrix-norma problem (l.140--148), (2) changed the topic to the problem without relaxation and showed its hardness (l.149--152), and (3) went back to the problem with relaxation (l.153--169). Therefore, I suggest reconsidering the order of the three.\n- I think the content of Section 4.1 should be in Section 3 rather than Section 4 since it is about two-layer NNs. Raghunathan et al. (2018), which applied two-layer NNs, is compared to eq. (6) in Section 3.1. Also, LipSDP and LipSDP-network appeared in Section 4.1. is compared with eq. (7) in Section 3.2.\n\nSignificance\n\n- For two-layer NNs, this paper showed the theoretical and empirical effectiveness of the proposed methods well.\n- For multi-layer NNs, the DGeoLIP does not have theoretical guarantees. In addition, it is not better than the baseline method by sampling (Table 1). Therefore, I have a question about the effectiveness of the proposed method for multi-layer settings.\n- This paper intends to apply the estimation of the Lipschitz constant to mitigate the vulnerability of NNs to adversarial attacks. However, the numerical experiments are only performed on the pure Lipschitz constant estimation problem. Therefore, this paper would be more significant if it could show that the proposed method is effective against adversarial attacks in some way. - l.103: $f_i(x)=W^i\\mathrm{diag}(\\sigma(f_{i-1}(x))) + b_i$ → I think $\\mathrm{diag}$ is not needed here.\n- l.107: We do not usually use $\\prod$ for matrix multiplications, which are non-commutative.\n- l.160: Although we can understand the meaning of the term \"approximation ratio\" intuitively, I would suggest writing its mathematical definition explicitly since it is used in mathematical statements.\n- l.169 (7): $A\\cdot B$ is undefined.\n- l.209: $f(x) = u\\sigma(x)$ → $f(x) = u\\sigma(Wx)$ ?\n- l.563: I was not aware that $x$ in l.138 takes value in {-1, 1}^n. I suggest writing the range of $x$ more explicitly. I think l.563 does not hold in general (at least without any assumption). We should have:\n\n\\max_{x\\in \\{-1, 1\\}^n} \\|Ax\\|_{1} = \\max_{x\\in \\{-1, 1\\}^n, , y\\in \\mathbb{R}^m, \\|y\\|_{\\infty}=1} \\langle Ax, y\\rangle\n\nSo, I am wondering why we can restrict the range of $y$ to {-1, 1}^{m} As far as I have checked, this paper did not discuss the proposed methods' limitations. However, the proposed method (1) only applies to ReLU-DNNs and (2) has no theoretical guarantees for multi-layer NNs. Therefore, it is desirable to discuss such limitations.",
" In this paper the authors propose a new tool to estimate an upper bound on the Lipschitz constant of ReLU based feed forward networks. They focus on Lipschitz constant for L-infinity norms and L2 norms. The computation of the upper bound is based on SDP relaxations of the original problem, a standard approach of the literature. \n\nFor L-infinity, as said by the authors, they \"provide a rigorous derivation and simpler formulation, and also a sound theoretical analysis of the bound, which illustrate more insights to this problem\" compared to the work of Raghunathan et al. (2018). Whereas for for L2-bounds they \" show that LipSDP is dual of Equation (7) to estimate the ℓ2-FGL on two-layer networks\".\n\nFinally the release a software, GeoLip, that implements the SDP of interest for multil-layers neural networks (for l-infinity) and 2-layers neural networks (for l2). Not only the upper bound is improved compared to concurrent methods, but it is even cloaser to the lower bound obtained by sample or brute force by a small factor, even on deep networks. Moreover the running time is competitive even on those deep networks. Strengths:\n\nThe paper makes a clear theoretial contribution by posing rigorously posing the problem. The link the Grothendieck inequality is very interesting and establish a clear bound toward algorithms must tend. The link between dual formulation (7) and previous work LipSDP \nis interesting. Claims in all theorems 3.2 3.3 and 3.4 seem to be significant contributions.\n\nThe SDP are clearly specified which allows independent implementation, and easier reproducibility. \n\nThe speed and quality of the bounds of GeoLip software, whose code is made public, is very convincing and is validating theoretical claims.\n\nThe paper is well written, I learned a lot about this topic with which I am unfamiliar.\n\nWeaknesses:\n\nfor reader unfamiliar with the topic some notations are a bit harsh: it took me some time to understand the motivation behind the definition of two-layers networks (l 135) : no bias since it does not play any role in computation of the upper bound (only in the pattern of activations between true Lipschitz constant and the FGL). diag(y) the pattern of activations, and u the weight of the last layer (with the NN seen as a function from R^n to R)\n\nminor/typo: a parenthesis is lacking in equation (2)\n l 125 : \"However, the algorithms presented in this work can be adapted with minor adjustments to other common activation functions.\"\nDoes it also work with non elementwise activation functions ? In this case the diag(y) is no longer diagonal. Is it a simple way to adapt the method to arbitrary diag(y) ?\n\nFrom my limited understanding of the topic, I have two more naive (related) questions.\n\n* For multi-layers ( i.e > 2) neural networks , do you have any theoretical guarantees like the ones of theorems 3.3 and 3.4 ?\n* experiments on l2 norms for multi-layers neural networks are lacking. Is it because the duality result of section 4.1 does not extend further than the case of two-layers networks ? Every result is clearly stated along with its hypothesis. The only part that could be subject to caution is the the running time of their solver, which might unfairly benefit from better software than other SDP based relaxations of literature, as acknowledged by the authors: \"Notice that the running time is implementation and solver-dependent\".",
" The paper deals with estimating the Lipschitz constants of neural networks, aiming to unify different approaches based on semidefinite programming (SDP). More precisely, the authors would like to estimate the \"formal global Lipschitz constant\" (FGL) which is an upper-bound of the actual Lipschitz constant that's however sharper than the naive product of the norms of all weight matrices. They consider FGL estimation for two layer networks in the $\\ell_\\infty$ and $\\ell_2$ topologies and also outline FGP estimation for multi layer networks. Furthermore, they show that some existing SDP based models are duals of the FGP problem. In their numerical results section the authors compare their formulations to some other methods which estimate the Lipschitz constant, like dual SDP problems or brute force methods.\n\n########POST-REBUTTAL#########\n\nSince there seems to be relatively strong support for the paper, I increase my score to 5 and ask the authors improve the organization of the paper. Strengths: The numerical results seem to indicate that the presented method computes the same estimate for the Lipschitz constant as its dual SDP variants, however, in shorter time. Furthermore, it compares favorably against the chosen baseline method LiPopt and brute force estimatation. \n\nWeaknesses: I find the paper pretty poorly written and it's not clear what exactly the contribution or the novelty is. The theoretical part of the paper is relatively ad hoc. Furthermore, the numerical results are not entirely conclusive, see \"Questions\" further down. I have a couple of questions and remarks that should be addressed: \n\n- p.3, l.103: I think the $\\operatorname{diag}$ operator shouldn't be part of the forward pass of the network. Is this a typo?\n- p.3, l.112, eq. (3): I guess that the matrices $W^i$ for $i=2,\\dots,d-1$ should also appear in this formula, right?\n- p.3, l.122: an interval of the form $[a,b]$ can never coincide with the set $\\lbrace 0,1\\rbrace$.\n- p.3, l.131: I don't understand why $f(x)$ has this form. This is linear in $x$. Also: what's $y$ in $\\operatorname{diag}(y)$?\n- p.7, l.242: same thing here\n- In general, the presentation of sections 3 and 4 is suboptimal and lacks a golden thread.\n- The numerical results in Table 1 seem to suggest that sampling is the best way to estimate the Lipschitz constant. Maybe you should compare your methods with sampling based Lipschitz constant estimation methods like for instance [2].\n- Given that Lipschitz regularization of neural networks is a very active field, the related work section and introduction does not paint a clear picture of the different approaches out there, such as e.g. the articles [1-5].\n\n[1] Aziznejad, S., Gupta, H., Campos, J., & Unser, M. (2020). Deep neural networks with trainable activations and controlled Lipschitz constant. IEEE Transactions on Signal Processing, 68, 4688-4699.\n\n[2] Bungert, L., Raab, R., Roith, T., Schwinn, L. and Tenbrinck, D., 2021, May. CLIP: Cheap Lipschitz training of neural networks. In International Conference on Scale Space and Variational Methods in Computer Vision (pp. 307-319). Springer, Cham.\n\n[3] Gouk, H., Frank, E., Pfahringer, B., & Cree, M. J. (2021). Regularisation of neural networks by enforcing lipschitz continuity. Machine Learning, 110(2), 393-416.\n\n[4] Krishnan, V., Makdah, A., AlRahman, A., & Pasqualetti, F. (2020). Lipschitz bounds and provably robust training by Laplacian smoothing. 
Advances in Neural Information Processing Systems, 33, 10924-10935.\n\n[5] Terjék, D. (2019). Adversarial lipschitz regularization. arXiv preprint arXiv:1907.05681.\n\n\n\n To my mind, a negative societal impact cannot be expected and is also not discussed. Also there is a discussion section, the possible shortcomings (and advantages) of the proposed method over sampling type methods for estimating the Lipschitz constant is not discussed."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
8,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
2,
3
] | [
"QJZyQAD96m6",
"VXZkdm7Vrzb",
"tU_-GkdbeB",
"Q-tJAJ0gag",
"akn03o95q-H",
"omxrrEh-xwN",
"yypPzzRiG6q",
"omxrrEh-xwN",
"I38kHNA-GAI",
"nips_2022_ZQcpYaE1z1r",
"3qDyjcn06tx",
"oZ9eIBt2EKX",
"1RefYRy8i2g",
"UEd1w89AlHM",
"T_L6oMlPaQ",
"nips_2022_ZQcpYaE1z1r",
"nips_2022_ZQcpYaE1z1r",
"nips_2022_ZQcpYaE1z1r",
"nips_2022_ZQcpYaE1z1r",
"nips_2022_ZQcpYaE1z1r"
] |
nips_2022_foNVYPnQbhk | SCONE: Surface Coverage Optimization in Unknown Environments by Volumetric Integration | Next Best View computation (NBV) is a long-standing problem in robotics, and consists in identifying the next most informative sensor position(s) for reconstructing a 3D object or scene efficiently and accurately. Like most current methods, we consider NBV prediction from a depth sensor such as a Lidar system. Learning-based methods relying on a volumetric representation of the scene are suitable for path planning, but have lower accuracy than methods using a surface-based representation. However, the latter do not scale well with the size of the scene and constrain the camera to a small number of poses. To obtain the advantages of both representations, we show that we can maximize surface metrics by Monte Carlo integration over a volumetric representation. In particular, we propose an approach, SCONE, that relies on two neural modules: The first module predicts occupancy probability in the entire volume of the scene. Given any new camera pose, the second module samples points in the scene based on their occupancy probability and leverages a self-attention mechanism to predict the visibility of the samples. Finally, we integrate the visibility to evaluate the gain in surface coverage for the new camera pose. NBV is selected as the pose that maximizes the gain in total surface coverage. Our method scales to large scenes and handles free camera motion: It takes as input an arbitrarily large point cloud gathered by a depth sensor as well as camera poses to predict NBV. We demonstrate our approach on a novel dataset made of large and complex 3D scenes. | Accept | The paper describes an approach to next-best-view (NBV) planning for the reconstruction of large-scale 3D scenes using depth sensors. The proposed framework models the scene using a probabilistic occupancy map and chooses the next-best-view as the free camera pose that maximizes the gain in surface coverage. Integral to the approach's ability to handle large-scale scenes is the paper's formulation of surface coverage estimation as sample-based volumetric integration. Based on this formulation, the approach employs one neural network to predict the visibilities that are used to calculate surface coverage gain, and a second network to estimate the probabilistic occupancy map from the point cloud input. The paper presents experimental evaluations on the benchmark ShapeNet dataset as well as a proposed large-scale dataset, demonstrating gains over contemporary methods.
The paper was reviewed by three reviewers who read the author response and discussed the paper with the AC. The reviewers agree that the proposal to approximate surface coverage via sample-based volumetric integration, which is integral to the approach, is both novel and principled. To that end, the reviewers appreciate that the proposed architecture is well grounded in rigorous theoretical foundations. The experimental evaluation is thorough, with ablations that clearly demonstrate the advantages of the proposed architectural components. A key concern raised by several reviewers is that the readability of the submission is poor, which makes it difficult to relate the formal derivations to the neural network architecture. This lack of clarity led to notable misunderstandings on the part of at least two reviewers. During the discussion phase, the reviewers acknowledged that the author response largely resolves this concern, but it is critical that the paper be updated to address these issues as well. | train | [
"7u4h6Svx6R",
"7B51TekbLmU",
"cLepXPm7f_",
"KVoyXjAEfES",
"4UZ-O36ErK3",
"sRyYvef_vn",
"swIq_bomfa",
"V40GNEqt3r1",
"yuBntlB-rcc",
"VrvROZ7Mt7c"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks to the authors for the rebuttle. Many of the doubts are clear. Please include the monte carlo and neural aggregator discussion briefly in the paper.",
" In this third comment, we would like to answer the last questions asked by the reviewer.\n\n**Q7: L292: ‘Model suffers ... to compute coverage gain’ - does it mean that the second model suffers to predict coverage gain from such a set of points?**\n\nExactly. When using only the points gathered on the surface by the sensor to compute coverage gain, the performance of the second module decreases. This result shows that making a geometric prediction with the first module increases performance.\n\n**Q8: Would be nice to see how accurate the occupancy prediction model is, how far can it extrapolate geometry, as I think it is the main property this model should have. The second model should decide where can it expect to see more geometry that was previously unseen. Essentially it means that the first model should do occupancy map completion. How well does the model perform in this task is a good question for an additional ablation study.**\n\nAs the reviewer suggests, the first model is designed to make occupancy map completion. However, we want our model to be as general as possible; we do not want it to overfit on specific categories of objects nor make too strong assumptions on the geometry.\n\nThis is why our occupancy prediction model outputs a gradient of probabilities from 0 to 1 that helps to identify the areas that should have high or low density, as well as the areas where incertitude relies. When a single observation (i.e., a single depth map) of the scene is available, the predicted occupancy map generally identifies large areas of incertitude (i.e. with values around 0.5). However, the more depth maps are gathered, the better the prediction becomes, even for unknown environments or categories of objects never seen by the model.\n\nThe second model is indeed trained to decide where it can expect to see more geometry that was previously unseen.",
" In this second comment, we would like to answer the first questions asked by the reviewer.\n\n**Q1: It remains unclear, what kind of input does the method expect.**\n\nAt time $t$, our method takes as input: \n1. A partial point cloud gathered by a depth sensor before time $t$ in the scene, as well as the previous sensor poses. We will explain the nature of the sensor in the following questions. \n2. Any camera pose $c$ that could be a candidate to the Next Best View.\n\nThe partial point cloud is processed by the first module of our model to predict an occupancy probability map representing the geometry of the unknown scene.\n\nGiven the predicted occupancy map, the second module of our model predicts the surface coverage gain achieved by camera $c$.\n\nWe will follow the reviewer's suggestion and modify the paper to clarify the input expected by the model.\n\n**Q2: L67: Could you please explain the meaning of “volumetric methods are less efficient .. because they dilute information in the 3D space”**\n\nSince most volumetric methods rely on memory-heavy representations (like a voxelization or a 3D-grid), they are generally less efficient for encoding very fine details and optimizing dense reconstructions. As an example, to represent a 3D mesh, an occupancy grid or a volumetric voxelization will necessarily downgrade the resolution compared to a point cloud directly sampled on the surface.\n\nTo address this issue while still working with a volumetric representation of the scene, we use a deep implicit function to encode the 3D mapping of occupancy efficiently. Such a function has a virtually infinite resolution, and prevents us from saving a large 3D grid in memory.\n\n**Q3: L78 ‘predict unknown geometry’ - What do these words refer to?**\n\nThese words refer to the computation of the occupancy probability map, which encodes and makes predictions about the 3D geometry of the scene based on the partial point cloud gathered by the depth sensor. \n\n**Q4: L122: the volume is supposed to be opaque - what property exactly is expected here, in more precise terms?**\n\nTo maximize surface coverage with depth sensors, we consider the surface to be opaque, i.e., surfaces have no transparency: A point is considered occluded as soon as another part of the surface is located between the camera and the point, and its visibility value is considered to be 0.\n\n**Q5: L177: How does the model sample 3D points using the predicted occupancy probability function?**\n\nFor a given camera $c$, we first compute the occupancy probabilities of proxy points located in the camera field of view. Then, we sample a subset of these points with a Monte-Carlo sampling: We sample $N$ points with probabilities that are proportional to their occupancy values. The sequence of coordinates of the sampled points are concatenated to their occupancy values and fed to the second module represented in Figure 3.\n\nThe number $N$ can be set arbitrarily high depending on the GPU memory used to run the model. In our experiments, we used $N=2048$.\n\n**Q6: In experiment 3.1, how are the partial point clouds generated? L256-257: how can ray tracing be used to render a point cloud? What is done to make this rendering approximate specifically the LIDAR sensor, but not time-of-flight or active stereo depth sensors?**\n\nThe partial point clouds are generated using the same process than experiment 3.2: we use a ray-casting renderer that outputs a depth map. 
The renderer casts rays inside the camera field of view from the camera position; When a ray intersects the surface, the depth value is saved and projected onto the corresponding pixels in the depth map. The depth map is then backprojected to 3D to create a partial point cloud.\n\nThis renderer is designed to mimic LiDAR-class sensors such as time-of-flight cameras, that use laser rays to estimate depth and output depth maps. The density of surface points gathered by these sensors vary with the angle between the normal of the surface and the direction of observation, just like stereo depth sensors. This is the most important property that we wanted to reproduce with our virtual renderer.\n\nFinally, as the reviewer mentioned, RGB cameras are widely used for 3D reconstruction (photogrammetry, structure-from-motion) from images that have already been captured by a sensor. However, when it comes to path planning strategies and NBV computation, point clouds and depth maps are mostly used as the input.",
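A minimal NumPy sketch of the Q5 sampling step (our illustration with stand-in occupancy values, not the released code):

```python
import numpy as np

rng = np.random.default_rng(0)
proxy_points = rng.uniform(-1.0, 1.0, size=(100000, 3))  # points in the FoV
occupancy = rng.uniform(size=100000)  # stand-in for predicted probabilities

# Monte Carlo sampling: draw N points with probability proportional to
# their predicted occupancy (N = 2048 in the experiments).
N = 2048
idx = rng.choice(len(proxy_points), size=N, p=occupancy / occupancy.sum())
samples = np.concatenate([proxy_points[idx], occupancy[idx, None]], axis=1)
print(samples.shape)  # (2048, 4): xyz coordinates + occupancy value
```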
" We thank the reviewer for their valuable comments and suggestions, that will help us clarify potential misunderstanding and increase the readability of the paper.\n\nBefore answering the questions, we would like to emphasize that the major weaknesses identified by the reviewer are due to misunderstanding, and that our model does not suffer from such weaknesses as explained below.\n\n**Weakness 1: I see a conceptual difficulty embedded in formula (8). However, the formula (8) defines dependency of point visibility prediction on point-camera direction. First of all, camera orientation does not enter the formula at all. It is unclear how to find the best orientation of the camera.**\n\nActually, formula (8) does take the camera's viewing direction into account, thanks to the term $\\hat{\\chi}_c$. \n\n$\\hat{\\chi}_c$ is the set of all occupied proxy points in the field of view of camera $c$. We define the camera field of view as a pyramidal frustum that depends on the camera orientation. For a given camera $c$, a 3D proxy point participates to the Monte Carlo Integral in Equation (8) if and only if it is in the field of view of camera $c$.\n\nConsequently, the model computes the best orientation of the camera as the orientation with the highest predicted coverage gain, i.e. the orientation that maximizes in its field of view the number of points with high predicted visibility gains.\n\n**Weakness 2: Next, an infinite number of camera viewpoints lying on a ray have the same camera-point direction. It is unclear which viewpoint should one choose.**\n\nSince formula (8) depends on the direction and field of view of the camera (see our answer above), this is incorrect: The camera viewpoints lying on a ray will not have the same predicted coverage gain, and are not equivalent. If the camera moves along the ray, the field of view will vary and points will enter or exit the field of view of the camera, which will modify the value of the predicted coverage gain. \n\nMoreover, the point-camera vector $c-p_i$ will change for the points $p_i$ located inside the camera field of view (we will add a figure to the supplementary material to better illustrate this point).\n\n**Weakness 3: Importantly, occupancy status of the point with respect to the cameras lying on this ray may change. One may move along the ray further away, and if the ray intersects some surface, then the point will become occluded starting from that intersection. This seems not to be addressed by the current model.**\n\nActually, this occlusion problem is addressed by the current model, and is precisely the reason why we use a self-attention unit in the second module, illustrated in Figure 3.\n\nLet $p$ be a point visible in the camera field of view. If the camera moves further away and intersects some surface, the new surface points that occlude the previous point $p$ will enter the camera field of view; thus, they will participate in the Monte-Carlo integral presented in Equation (8).\n\nIn particular, the second module takes as input a camera pose $c$ as well as the sequence of proxy points located in the camera field of view. Thanks to its self-attention unit, the model will understand that some points are located in front of the previous point $p$, and it will be aware of its occlusion. Consequently, the predicted visibility gain of the previous point $p$ will be lowered. 
In other words, our model does not process the points independently, but uses an attention mechanism to encode the occlusion effects between the points located in the current camera field of view.\n\n**Weakness 4: The method, as I see it, is based on a significant simplification: it downscales the space of camera viewpoints from the 5-dimensional space down to two dimensions. It would be very interesting to clarify this point.**\n\nThe method does not downscale the space of camera viewpoints. As explained in our answers above, both the orientation and the position of a camera are taken into account when computing its coverage gain. The formula (8) is not based on any simplification: it is just a Monte-Carlo approximation of the volumetric integral presented in equation (6) of the paper, where both the occupancy map and visibility gain are predicted using neural networks. Therefore, it is very general and designed to handle free camera motion on a 5D or even a 6D grid if needed, depending on the representation used for camera rotations.\n\nIn the supplementary material, we provide images as well as videos of trajectories in large 3D scenes that illustrate how the model is able to choose the best positions and orientations on a 5D grid to incrementally build a meaningful trajectory that consistently covers most of the scene's surface. We also provide images of the point clouds gathered by the sensor along the trajectories.",
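To illustrate why the term $\hat{\chi}_c$ makes formula (8) depend on both camera position and orientation, here is a hedged sketch of a field-of-view membership test (our own illustration; for simplicity it uses a conical approximation of the pyramidal frustum, and `cam_dir` is assumed to be a unit vector):

```python
import numpy as np

def in_field_of_view(points, cam_pos, cam_dir, half_angle, z_near=0.1, z_far=10.0):
    """Boolean mask of points inside the camera's field of view. Translating
    the camera along a ray or rotating it changes this mask, hence the set
    of proxy points that participate in the Monte Carlo integral."""
    v = points - cam_pos
    depth = v @ cam_dir  # signed distance along the viewing axis
    cos = depth / np.linalg.norm(v, axis=1)
    return (depth > z_near) & (depth < z_far) & (cos > np.cos(half_angle))

pts = np.random.default_rng(0).uniform(-5.0, 5.0, size=(1000, 3))
mask = in_field_of_view(pts, np.zeros(3), np.array([0.0, 0.0, 1.0]), np.deg2rad(30.0))
print(mask.sum())
```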
" In this second comment, we would like to answer the remaining questions 8 and 5 asked by the reviewer.\n\n**Q8: How the occlusion is handled?**\n\nOcclusions are handled by a self-attention unit, as shown in red in Figure 3 of the paper. For a given camera $c$, we first compute the occupancy probabilities of proxy points located in the camera field of view using the first module of SCONE.\n\nThen, we sample a subset of these points with a Monte-Carlo sampling: Each point has a sampling probability proportional to its occupancy value.\n\nThe coordinates of the sampled points are concatenated to their occupancy values and fed to a small MLP, shared between all points.\n\nThe sequence of resulting features is then fed to the self-attention unit. This unit encodes the interaction between all points. Indeed, for each point $p$, the output of the self-attention unit encodes implicit information about the other points surrounding $p$, their occupancy, and which direction could be occluded.\n\nThe resulting feature is concatenated to an additional feature that encodes the relative location of previous camera poses, and finally fed to another shared MLP that outputs coordinates in spherical harmonics representing visibility gains.\n\n**Q5: If the explanation from Eq. 6 to the neural design is more lucidly written, then the readability will increase. [...] Its not clear if there were neural aggregator instead of Monte Carlo, then what could happened?**\n\nTo approximate the volumetric integral in Equation (6) for any camera pose $c$, we need to compute $\\chi_c$ as well as function $g_c^H$. In this regard, we need to compute both the occupancy map and the visibility gains of points for any camera pose. Since the environment is not perfectly known, we predict each one of these functions with a dedicated neural module.\n\nThe first module of our model directly takes as input the partial point cloud gathered by the depth sensor to predict the occupancy probability distribution $\\hat{\\sigma}$. To predict the occupancy map of an arbitrarily large scene, we need the model to be scalable. This is why we designed our first module around neighborhood features, rather than a single global encoding of the whole scene's geometry.\n\nThen, the second module takes as input a camera pose, a feature representing camera history (i.e. the previous camera positions) as well as a sequence of proxy points located in the camera field of view to predict the visibility gains and the resulting coverage gain of the camera. Proxy points are sampled using the predicted occupancy probability distribution $\\hat{\\sigma}$. We use a self-attention unit on the sequence of points located in the camera field of view to encode the occlusion effect between the points.\nWe use spherical harmonics to encode camera history. The output visibility gains are also sets of coordinates in spherical harmonics, so that we can predict the gain for several cameras in the same time if they share the same points in their field of view. Moreover, it makes the output very homogeneous to the input camera history, and helps the model to achieve faster convergence and better performance.\n\nTo aggregate the per-point visibility gains to approximate the integral in Equation (6), we choose to use a Monte-Carlo integral rather than a neural aggregator. In this regard, Equation (8), which represents the output computed by our model, is actually just a Monte Carlo approximation of the volumetric integral of Equation (6). 
This approach is simple, fast, makes training more stable, has good performance, better interpretability, and can handle sequences of arbitrary size. In particular, it implicitly encourages our model to compute meaningful visibility gains for each point since there is no asymmetry between the points. As we explained, we sample the 3D points in the camera field of view with probabilities proportional to their occupancy values, just as we expect with a MC integration.\n\nPlease note that the unknown variable $\\mu$ appearing in $g_c^H$ in Equation (6) is not explicitly fed to the networks, but is implicitly handled by the model; the only inputs to the full model are the partial point cloud gathered by the depth sensor and the camera poses.\n\nTo make the results reproducible, we will release code and also add details in the supplementary material about the neural design, in particular about the number and size of the layers in the different MLP and SA units.",
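As an illustration of the spherical-harmonics encoding of per-point visibility gains, here is a minimal sketch with hand-coded real spherical harmonics up to degree 1 (the coefficients and the degree are stand-ins, not the model's actual values):

```python
import numpy as np

def sh_basis_deg1(d):
    """Real spherical harmonics Y_0^0, Y_1^{-1}, Y_1^0, Y_1^1 at unit vector d."""
    x, y, z = d
    c = np.sqrt(3.0 / (4.0 * np.pi))
    return np.array([0.5 * np.sqrt(1.0 / np.pi), c * y, c * z, c * x])

# The network outputs one coefficient vector per point; evaluating the
# visibility gain for a camera direction is then a simple dot product,
# so the same prediction can be reused for several candidate cameras.
coeffs = np.array([0.8, 0.1, -0.3, 0.2])  # stand-in for a network output
direction = np.array([0.0, 0.0, 1.0])     # unit vector from point to camera
print(coeffs @ sh_basis_deg1(direction))
```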
" We thank the reviewer for their valuable comments and suggestions, and would like to answer their questions.\n\n**Q1: Isn't the surface point could be obtained using ray intersection from camera in line 100? Or am I missing something?**\n\nThis would indeed work, but only if the occupancy map was known perfectly. In our case, we estimate the occupancy map iteratively, and we only have access to the occupancy probability distribution. \n\nThis is why we compute a volumetric integral of visibility gains instead of extracting surface points from our predicted occupancy maps. In 3D, a surface acts as a very concentrated set (with zero-measure), and requires high confidence to give meaningful results. On the contrary, our model outputs a gradient of probabilities that helps to identify the areas that should have high or low density, as well as the areas where incertitude relies. Extracting surface points from such a probabilistic occupancy map gives results that can differ a lot from the true surface. Instead, we found that a volumetric integration of visibility gain on the whole occupancy distribution was more efficient to make accurate predictions about geometry and NBVs. We will update the paper to clarify this point.\n\n**Q2: It's not clear if the point cloud has a noise how it impact the visibility gain and occupancy function? Though authors mention this in the limitation, but the discussion on sensitivity of noise could have been good.**\n\nSuch a discussion on sensitivity of noise has been conducted by Zeng *et al.* [33] on their model PC-NBV: They added a small Gaussian noise to the coordinates of the gathered partial point cloud for ShapeNet models, and found out that the model was quite resistant to noise. We actually conducted the same experiment, that led to similar results.\n\nIn the paper, we were referring to the noise and imperfections that exist in depth maps captured by real depth sensors. In other words, we wonder how our model would handle the domain gap between synthetic and real data. Unfortunately, we were not able to make experiments with a real UAV.\n\n**Q3: If the ablation is given with the reconstruction quality then it will be more clear to the reader about the significance of the numbers.**\n\nIn the supplementary material, we provide in figure 2 the evolution of surface coverage throughout reconstruction of small scale objects by our model SCONE as well as several other methods. We provide similar curves for 13 large 3D scenes in figure 5 of the supplementary material, as well as examples of reconstructions of the same 13 scenes in figure 4.\n\n**Q4: How the authors create the ground truth of the surface coverage gain?**\n\nThe ground truth of surface coverage gain is computed from ground truth surface points following Equation (12) in the supplementary material. To compute ground truth surface points, we sample points on the ground truth mesh triangles and make sure that the sampling follows an uniform distribution on the whole surface.\n\n**Q6.1: Is the method tried in indoor?**\n\nWe provide an image and a video in the supplementary material of the trajectory retrieved by our method for an indoor scene (the London Natural History Museum).\n\n**Q6.2: How this method is positioned w.r.t https://arxiv.org/abs/1805.07794**\n\nThe reference mentioned by the reviewer is very interesting, but specializes to indoor scenes where the objects belong to categories already known by the system. 
By contrast, our approach is much more general, as it does not need a data bank of object models and it is not object-centered.\n\n**Q7: Is it possible to use PointNet features for encoding the 3D point and its neighbourhood in Fig 2?**\n\nThe most important part of the occupancy prediction module is the computation of local neighbourhood features, and the type of encoder used to process the points has little importance. Therefore, the reviewer is entirely right: PointNet features could be used. Actually, we tried PointNet features trained from scratch to encode the 3D points during our development, but the self-attention units turned out to be slightly more efficient.\n\n**Additional Question: In table 1 the numbers are very close. What is the sensitivity of these numbers w.r.t. the reconstruction?**\n\nFor this experiment, we constrain the camera to stay on a sphere centered on small-scale objects. Thanks to this experiment, we can compare our model with previous methods trained for the very specific case of dense object reconstruction with camera motion constrained on a sphere. \n\nThe point of this experiment was to prove that, even if our model is designed to handle entire scene reconstruction with free camera motion, it is still able to beat other methods trained for this specific case. \n\nTherefore, the fact that the numbers in table 1 are close is not a problem, since the main strength of our model is its scalability to free camera motion and arbitrarily large 3D scenes.",
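A minimal sketch of the area-weighted, uniform-over-the-surface sampling mentioned in the answer to Q4 above (hypothetical names; the exact procedure follows Equation (12) of the supplementary material):

```python
import numpy as np

def sample_surface_uniform(vertices, faces, n_points, rng=None):
    # Pick triangles with probability proportional to their area, then sample
    # barycentric coordinates, so points are uniform over the whole surface.
    rng = rng or np.random.default_rng()
    tri = vertices[faces]  # (F, 3, 3): three vertices per face
    areas = 0.5 * np.linalg.norm(
        np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0]), axis=1)
    idx = rng.choice(len(faces), size=n_points, p=areas / areas.sum())
    u, v = rng.random((2, n_points))
    flip = u + v > 1.0  # reflect samples outside the triangle back inside
    u[flip], v[flip] = 1.0 - u[flip], 1.0 - v[flip]
    t = tri[idx]
    return t[:, 0] + u[:, None] * (t[:, 1] - t[:, 0]) \
                   + v[:, None] * (t[:, 2] - t[:, 0])
```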
" We thank the reviewer for their valuable comments and suggestions, and would like to answer their questions.\n\n**Q1: L89: the expression $(1- \\lambda) c_{pos} + \\lambda x$ seems to be interpolating between the camera position and the point, but it doesn't take the camera's viewing direction into account?**\n\nA1: Actually, the complete expression $\\mathbb{1}_{\\chi_c}(x) \\cdot \\mathbb{1}\\left(\\sigma\\left(\\{(1-\\lambda) c_\\text{pos} + \\lambda x \\text{ such that } \\lambda\\in[0,1)\\}\\right)=\\{0\\}\\right)$ does take the camera's viewing direction into account, thanks to the term $\\chi_c$: $\\chi_c$ is the set of all occupied points in the field of view of camera $c$. For a given camera $c$, a 3D point participates to Integral (1) if and only if it is in the field of view of camera $c$ and if there is no other point that occludes it.\n\nWe agree however the current notation can be misleading. We propose to move the indicator function directly in the definition of the visibility $v_c$ to avoid misunderstandings; we will change the definition of $v_c$ accordingly, to $v_c:x \\mapsto \\mathbb{1}_{\\chi_c}(x) \\cdot \\mathbb{1}\\left(\\sigma\\left(\\{(1-\\lambda) c_\\text{pos} + \\lambda x \\text{ such that } \\lambda\\in[0,1)\\}\\right)=\\{0\\}\\right)$.\n\n**Q2: L101: \"tubular\" -- is this the right word? or should it be \"spherical\"? I couldn't figure out why the neighborhood region would be tubular. Based on the expression in L105 it seems to be spherical (all points within a distance of $\\mu_0$ from x**\n\nA2: The reviewer is right, \"spherical\" fits well the expression in L105. We used the word \"tubular\" because it was used in particular in [10] (Gilbarg *et al*.), where Equation (4) comes from.\n\nTubular neighborhoods are specific neighborhoods of submanifolds resembling the normal bundle. Equation (4) actually only applies to tubular neighborhoods; However, the spherical neighborhoods of $C^2$ watertight surfaces also are tubular neighborhoods. Since we use both properties of tubular and spherical neighborhoods in our proof but wanted to keep the definitions as simple as possible, we used the definition of spherical neighborhoods. \n\nTo make the proof less confusing, we propose to follow the reviewer's suggestion and change the word \"tubular\" to \"spherical\". As a consequence, we will add a comment in the supplementary material to explain that the spherical neighborhoods we use also are tubular.\n",
" This paper describes a method for next-best view prediction for reconstructing large-scale 3D scenes with a depth sensor. They derive a formula to estimate the surface coverage gain for any potential camera pose given a camera pose history and a probabilistic occupancy map. They use one neural network to predict probabilistic occupancy map based on a point cloud input, and a second network to predict the visibility gain, which is used in the calculation of surface coverage gain. The first network is trained to match the ground truth occupancy map, and the second is trained to match the ground-truth surface coverage gain. Extensive experiments on synthetic datasets demonstrates an improvement in performance over SOTA. They provide a thorough derivation an approximation of surface coverage gain that they prove asymptotically approaches the true value. This formulation is novel to the best of my knowledge.\n\nThe structure of the neural network and the loss functions is interesting. They use separate networks to predict the probabilistic occupancy map and the visibility gain functions. The loss function for the visibility gain network is not visibility gain itself but the surface coverage gain. They also use attention mechanisms to model occlusion effects.\n\nThey provide a thorough set of experiments to demonstrate their method leads to an improvement on ShapeNet following a standard protocol.\n\nThey also provide some ablation studies to establish the usefulness of various aspects of their proposed approach.\n\nOverall, they have some interesting new ideas which lead to an increase in performance for NBV selection, and their work also can lead to new research such as handling noise in the depth map and selecting optimal paths rather than single viewpoints.\n\n\n\n\n\n L89: the expression (1- \\lambda) c_pos + \\lambda x seems to be interpolating between the camera position and the point, but it doesn't take the camera's viewing direction into account?\nL101: \"tubular\" -- is this the right word? or should it be \"spherical\"? I couldn't figure out why the neighborhood region would be tubular. Based on the expression in L105 it seems to be spherical (all points within a distance of \\mu_o from x.\n\n Limitations are discussed but not potential negative societal impacts.",
" This paper aims to solve the problem of next best view from partial point cloud towards a complete reconstruction. The method uses the principle of SDF and define a way of computing the maximum coverage gain if a camera is at position \"c\" given the history of camera position, the visibility of the surface on those historical position. This in turn expected to maximise the total coverage. The authors derive the relations of the incremental coverage gain mathematically and use a neural network to realise those relation to produce the coverage gain given a camera position \"c\". They show results on various dataset including large scale reconstruction. Strengths:\nThe paper has a strong mathematical foundation on defining the coverage gain. The use of spherical harmonics for visibility gain is nice, though SH is being used in recent volume rendering work which authors also referred. Supplementary method provides all the derivation required for the proof of the relations used in the main paper. The paper shows interesting results.\n\nWeaknesses\nThe paper is well grounded with the theoretical formulations. But the readability of the paper is not very good. It is relatively difficult to relate the equations with the neural architecture for a reader. The concepts are philosophically mapped but the derivation is not clearly mapped with the neural method or in the inductive bias design. The paragraph from line 132 to136 attempted to give some clarity but this needs to be expanded for a general reader. Its not clear if there were neural aggregator instead of Monte Carlo, then what could happened? In table 1 the numbers are very close. What is the sensitivity of these numbers w.r.t the reconstruction?\n 1. Isn't the surface point could be obtained using ray intersection from camera in line 100? Or am I missing something?\n2. Its not clear if the point cloud has a noise how it impact the visibility gain and occupancy function? Though authors mention this in the limitation, but the discussion on sensitivity of noise could have been good.\n3. If the ablation is given with the reconstruction quality then it will be more clear to the reader about the significance of the numbers.\n4. How the authors create the ground truth of the surface coverage gain?\n5. If the explanation from Eq. 6 to the neural design is more lucidly written, then the readability will increase. \n6. Is the method tried in indoor? How this method is positioned w.r.t https://arxiv.org/abs/1805.07794\n7. Is it possible to use the Pointnet features for encoding the 3D point and its neighbourhood in Fig 2?\n8. How the occlusion is handled? The limitation of this method is narrated by authors and some of the insight we can get from the above questions. For reproducibility, the crucial information regarding the training needs to be there in the main paper. ",
" The paper proposes a method for next-best-view prediction during three-dimensional surface reconstruction using depth sensors. A characteristic feature of the method is its scalability: most of the state-of-the-art methods are able to work only with artificial models from the ShapeNet dataset of very limited dimensions, while the proposed method is aiming to tackle large-scale structure-from-motion reconstructions such as the one of Colosseum. The paper introduces a formal theoretical framework explaining that it is possible to estimate surface visibility metrics by sampling points in the volume around the surface, not necessarily strictly on the surface. The proposed method build on this framework. It is evaluated on ShapeNet and on a proposed dataset of large-scale models. The paper develops a new theoretical framework, that explains an approach chosen by the authors to sample points for surface visibility prediction. In particular, the paper shows that it is possible to sample points in the volume around the surface rather than on the surface directly. \n\nThe paper proposes to use two models: one for occupancy prediction, and another one for point visibility estimation for a particular camera viewpoint. The first model for occupancy prediction is elegant and scalable, reminding of fully convolutional models for images, but working with point clouds. It shows an approach to build spatially localized models for occupancy prediction effectively working with local sub-sets of input points.\n\nThe approach with decomposing the problem into two models seems to be the key to achieving scalability of prediction. It is a distinct step forward in this field, because previous models assumed small object-like structures and simply predicted probabilities of choosing the next view from a pre-defined set of viewpoints evenly distributed on a sphere. The new method is significantly more powerful both in tackling arbitrary geometry and large scale models.\n\n\nOne of interesting questions possibly weakening the paper is that it is rather uncommon to use depth sensors for reconstructing large-scale objects. Usually people focus on Structure-from-motion or photogrammetry tools in these cases, and these tools rely on stereo algorithms rather than on the depth sensors. However, one may argue that a stereo method applied to a pair of images can be understood as a depth sensor.\n\nIn general, the theoretical framework developed in the paper is concerned with point sampling. However, when the method itself is explained, a particular approach to point sampling used in the method is not described well. This way, the theory leaves an impression of being rather disconnected from practice in this particular paper.\n\nI see a conceptual difficulty embedded in a formula (8). However, the formula (8) defines dependency of point visibility prediction on point-camera direction. First of all, camera orientation does not enter the formula at all. It is unclear how to find the best orientation of the camera. Next, an infinite number of camera viewpoints lying on a ray have the same camera-point direction. It is unclear which viewpoint should one choose. Importantly, occupancy status of the point with respect to the cameras lying on this ray may change. One may move along the ray further away, and if the ray intersects some surface, then the point will become occluded starting from that intersection. This seems not to be addressed by the current model. 
The method, as I see it, is based on a significant simplification: it downscales the space of camera viewpoints from the 5-dimensional space down to two dimensions. It would be very interesting to clarify this point.\n\nIt remains unclear, what kind of input does the method expect. Sometimes, we see a reference to a ‘depth sensor’. In the ShapeNet experiment, it remains unclear, what is given as input. I would really like to see some clarification in the paper, what particular type of depth sensor should to be used with this method. L67: Could you please explain the meaning of “volumetric methods are less efficient ..because they dilute information in the 3D space”\n\nL75 this maps is not -> this map could be\n\nRephrase introduction and abstract to emphasize the input precisely (e.g., ‘a method taking a point cloud of points observed so far as input’). Right now the introduction only tells about a probabilistic occupancy map, referring that the perfect map is not known at run-time, but not describing, what is exactly considered to be known.\n\nL78 ‘predict unknown geometry’ - What do these words refer to? \n\nL93: Maybe rename ‘knowledge factor’ to ‘knowledge indicator’ as it can only be equal to 0 or 1.\n\nIn (4) dx -> dx_0\n\nL122: the volume is supposed to be opaque - what property exactly is expected here, in more precise terms?\n\nL177: How does the model sample 3D points using the predicted occupancy probability function?\n\nIn experiment 3.1, how are the partial point clouds generated?\n\nL256-257: how can ray tracing be used to render a point cloud? What is done to make this rendering approximate specifically the LIDAR sensor, but not time-of-flight or active stereo depth sensors?\n\nL292: ‘Model suffers ... to compute coverage gain’ - does it mean that the second model suffers to predict coverage gain from such a set of points? The prediction of coverage gain is done in the 2D camera viewpoint space rather than in the 5D space\n\nThe input to the method is not clearly formalized\n\nIt is difficult to judge about the limitations of the current method in terms of accuracy. Would be nice to illustrate some cases when the method does not perform well, and reason about why it happens so.\n\nWould be nice to see how accurate the occupancy prediction model is, how far can it extrapolate geometry, as I think it is the main property this model should have. The second model should decide where can it expect to see more geometry that was previously unseen. Essentially it means that the first model should do occupancy map completion. How well does the model perform in this taskm is a good question for an additional ablation study.\n\n"
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"4UZ-O36ErK3",
"VrvROZ7Mt7c",
"VrvROZ7Mt7c",
"VrvROZ7Mt7c",
"yuBntlB-rcc",
"yuBntlB-rcc",
"V40GNEqt3r1",
"nips_2022_foNVYPnQbhk",
"nips_2022_foNVYPnQbhk",
"nips_2022_foNVYPnQbhk"
] |
nips_2022_lxdWr1jN8-h | Integrating Symmetry into Differentiable Planning | We study how group symmetry helps improve data efficiency and generalization for end-to-end differentiable planning algorithms, specifically on 2D robotic path planning problems: navigation and manipulation. We first formalize the idea from Value Iteration Networks (VINs) on using convolutional networks for path planning, because it avoids explicitly constructing equivalence classes and enables end-to-end planning. We then show that value iteration can always be represented as some convolutional form for (2D) path planning, and name the resulting paradigm Symmetric Planner (SymPlan). In implementation, we use steerable convolution networks to incorporate symmetry. Our algorithms on navigation and manipulation, with given or learned maps, improve training efficiency and generalization performance by large margins over non-equivariant counterparts, VIN and GPPN. | Reject | The paper addresses path planning with RGB inputs by leveraging the workspace symmetry. To that end, the authors propose an end-to-end differentiable planner that builds on top of VINs and evaluate the method on several 2D-grid planning tasks.
The reviewers recognized that the method presents a performance improvement compared to VIN-like methods, but raised questions around the accessibility, scalability, and overall benefits of the method. During the rebuttal, the authors added new experiments to show the method's efficiency in larger environments, and reorganized the manuscript for better accessibility.
I have read the final version of the manuscript. Based on the current state of the manuscript and the reviewers' feedback, I do not believe that it is ready for publication. My main concerns are around the positioning of the paper and its accessibility.
Positioning of the work -- the authors present the work as addressing robot path planning. The environments and evaluation tasks, even with the new experiments, are toy problems for robotics. The 2D C-space with image observations and no robot dynamics is not a suitable robotics problem. See PRM-RL [Faust et al., ICRA 2018], RL-RRT [Chiang et al., RA-L 2019], Critical PRMs [Ichter et al., ICRA 2020], and optimal control w/ visual navigation [Bansal et al., CORL 2020] for methods that combine motion planning, controls, and perception (ego sensors, motion planning, and non-trivial robot dynamics and geometry). Granted, they are not differentiable planning, but they solve path planning in more realistic and complex settings. (In the rebuttal the authors comment that differentiable planning is capable of jointly training perception with the transition model, which is intractable for RRT or A*. However, the transition model is trivial here -- there are no kinodynamic constraints or complicated geometry.) Perhaps a better framing for the presented work is as incorporating symmetry into latent planning, instead of framing it around robotics.
It is not clear what problem the paper is seeking to solve. Please add a clear definition of the path planning problem. Are the policies goal-conditioned? Is the generalization over the workspaces, the initial configurations, or both? What is in the training set? Are the connections between the planning points known or not? And are there any other constraints on the transition function (beyond the workspace constraints)?
Accessibility -- Even after the rebuttal, Sections 3 and 4 are not clear. The symmetry is not introduced well for a non-expert. Some questions -- If I understand correctly, the symmetric NNs map inputs to equivalent states. How is that different from latent spaces? Is the proposed method too specific to CNNs, which are rapidly becoming obsolete in favor of newer models? How would the method compare to a VAE? I suggest that the authors use an intuitive example of the symmetry (for example, we expect the planner to learn that when it sees a wall in a given direction, the transition in that direction is not possible. The same holds for left, right, top, or bottom. So we hope that by exploiting symmetry, we can speed up learning, since the agent would need to learn only on a single instance of the equivalence class and generalize to the others.) Lastly, the paper would be stronger with a more in-depth analysis of the method. Where and how exactly did the symmetry help?
Overall, the exploitation of workspace symmetry in E2E differentiable planning has merit. But the framing around robotics, VINs, and CNNs is too specific, yielding results whose significance is not clear. With a more generalized framing rooted in current ML trends, this paper can make a strong and valuable contribution. | train | [
"0VVFxi3E5p7",
"-py5oH4mgs6",
"k786IkHKBhC",
"fqtMB7jIJgt",
"mpILW2Ar6bq",
"zQgRMQ00pdC",
"aT_jLsUgJH",
"9N3ySU9xux",
"u7thZdG07nV",
"EWe-2DSsXUo",
"phH-_8k8DKu",
"FCQZ4JEdkg0",
"ggbLYM9CKNO",
"PoRrJJt9VjR",
"BkMT4Vef6nz",
"s-zxjhwEIAI"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" As a kind reminder, additional to the first revision on adding pseudocode and a new experiment, we just add a new intuitive version of the technical sections (method + framework) in appendix Section D, which is written from scratch and contains minimal terminology for equivariant networks / steerable CNNs. We hope this addresses your concern on the writing side.\n\nWe also add a new figure for the generalization experiment (see Figure 8 (right) in **the appendix of the latest supplementary material**), which shows even larger gap between ConvGPPN and SymGPPN.\n\nWe would appreciate for any feedback or comments on our response, new sections, and added results.",
" As a kind reminder, additional to the first revision on adding pseudocode and a new experiment, we update a new revision of the paper with new intuitive version of the technical sections (method + framework) in appendix Section D. \n\nWe also add a new figure for the generalization experiment (see Figure 8 (right) in **the appendix of the latest supplementary material**), which shows even larger gap between ConvGPPN and SymGPPN. Our SymVIN even surpasses ConvGPPN. This figure may further address your concern on the performance of ConvGPPN.\n\nWe would appreciate for any feedback or comments on our response, new sections, and added results.",
" We uploaded a new revision of the paper. It includes a new intuitive version of the technical sections (method + framework) in appendix Section D, which is written from scratch and contains minimal terminology for equivariant networks / steerable CNNs. We hope this addresses your concern on the writing side.\n\nWe would appreciate any further feedback, and will continue on revising it.",
" Based on the concerns to writing of the technical sections and the discussion with Reviewer qXsg, we wrote a completely new section on the simplified version of the SymPlan method and framework (**Section D in the appendix of the latest supplementary material**). We hope this version provides more intuition and high-level idea. We'll respond to reviewers individually since this has been requested by two different reviewers.\n\nAdditionally, we also add a new figure for the generalization experiment (**Figure 8 (right) in the appendix of the latest supplementary material**), which shows even larger gap between ConvGPPN and SymGPPN. Our SymVIN even surpasses ConvGPPN.",
" We appreciate your comments on the presentation of Sections 4 and 5. We are currently writing a more intuitive/approachable version of those sections, and plan to add that to the supplementary material in the next few days. We will post another comment when that is done and would highly appreciate your further feedback on that.\n\nOur current plan is include this simplified version at the beginning of the supplementary material, with a clear note in the main text at the beginning of Section 3 or 4 along the lines of:\n\"We recommend that readers who are unfamiliar with group equivariance first read the alternative intuitive exposition included in Appendix B, and refer to Sections 4 and 5 on a second read for technical details.\"\n\nWe have debated whether to replace Sections 4 and 5 in the main text with this simplified version, but we also see that some readers (such as reviewer duzU) find the paper \"well-written\" and \"largely self-contained\", so we are also wary of making changes that will compromise technical accuracy and precision. We would appreciate your thoughts on this approach.",
" Thank you for your response and the additional supplementary material.\n\n**Accessibility of the writing**\n\nThe pytorch-code shows that your method is easy to implement within the current pytorch ecosystem, which is a definite plus. However, it does not solve the issue that Sections 4 and 5 are hard to follow for a reader unexperienced in steerable convolutions due to how high-level it is. \n\n**Comparison to ConvGPPN**\n\nThank you for adding more experiments which show that SymGPPN generalises better than ConvGPPN to larger environments.\n\nCurrently, my main concern is the presentation of the method, which is not solved by the new pytorch section. If you can make improvements here, I would consider raising my score.\n",
" - We thank all reviewers for thoughtful and detailed reviews! We found two common concerns from the reviewers: (1) accessibility of technical sections, and (2) benefits of SymGPPN over ConvGPPN.\n- We address other concerns and provide further details in individual responses.\n- Beyond response, we updated the **(the appendix in) supplementary material** and summarize the modifications to the paper below.\n - (1) A new section on new results and figure.\n - First, we show more concrete visualization to the visual navigation and workspace manipulation tasks, in order to highlight that our SymPlan framework can make use of differentiability to handle sensory input.\n - Second, to address the second concern, we run new experiment on generalization performance of all methods, including ConvGPPN and SymGPPN.\n - (2) A new section on more concrete explanation of SymVIN.\n - This is used to address the first concern. We explain two key steps (defining steerable convolution layer and symmetric VI) of the actual implementation SymVIN with about ~25 lines of code and compare line-by-line with VIN.\n - We also moved the additional experiment/result section from the end of the appendix to Section D now.",
" **Question — simpler way of incorporating symmetry?**\n- Yes, we have proved what symmetry exists in the (2D) path planning problem and what operation that is (steerable convolution), so we can manually choose the equivariant network to use (steerable CNNs).\n- There are three ways to consider symmetry here: (1) equivariant networks, (2) data augmentation, and (3) “canonicalization” / aggregation over the symmetry group.\n- The method proposed by the reviewer falls into the third category. There are some drawbacks for canonicalization-based methods.\n - It is hard to consider local symmetry, since for every iteration it needs to apply every transformation in the group, so the total cost is exponential in the planning horizon.\n - In other words, equivariant methods implicitly plan in a more abstract space/MDP that already quotients out symmetry.\n - Thus, it increases cost by the scale of the group size (8x for $D_4$), instead of saving computation by planning in a smaller MDP.\n - It also cannot generalize to continuous case.\n- Additionally, this can be mainly used at inference time, and at training time, applying transformations is equivalent to data augmentation.\n- For data augmentation method, it (1) cannot guarantee 0 equivariance error, and (2) since (2D) maps don’t have canonical orientation, augmentation with rotations/reflections is just effectively increase dataset size.\n- Therefore, equivariance is the best way to consider symmetry in our case.\n\n**Question — generalization to continuous actions?**\n- Yes.\n- One way is to use stochastic transition. We can define that an agent has transition probability proportional to the angle between the continuous action and one of four directions.\n- if we really consider continuous 2D actions, it is possible to work on 2D plane $\\mathbb{R}^2$, which has isometries $E(2) \\simeq \\mathbb{R}^2 \\rtimes SO(2)$. There is existing work in $SO(2)$-equivariant networks. For this case, the major issue is to choose the best action in value iteration, which needs to optimize over continuous functions.\n- References\n - Dian et al., $SO ( 2 )$-Equivariant Reinforcement Learning, ICLR 2022.\n - Walters et al., Trajectory Prediction using Equivariant Continuous Convolution, ICLR 2021.",
" **Question — extension to other rotations?**\n- Yes.\n- Although we have showed that the 2D grid has $D_4$ symmetry, we have tried other symmetry groups in the intermediate layers for the partially equivariant model (since input and output layer must have $D_4$ symmetry, while other layers can be customized). The results are in Figure 13 in L1056 in Appendix Section G.5.\n- It is also possible to extend to the continuous case (on $\\mathbb{R}^2$), which is symmetric under rotation group $SO(2)$.\n\n**Question — expecting improvements over data augmentation?**\n- Yes.\n- For our case, 2D maps have no canonical orientation in data generation. Thus, even if we apply data augmentation by random rotations/reflection, a rotated map is still in the distribution of the training data, and the only difference is that the model may see more maps in the same amount of gradient steps.\n- However, if we augment the dataset with all rotations/reflections, this effectively increase the training data size by 8x since non-equivariant models won’t relate them, but this does not contribute to our central goal: computational efficiency. Instead, equivariant methods, such as SymVIN, allow to implicitly plan in a smaller MDP.\n- Furthermore, even if we train on all rotations/reflections of a map, it is not guaranteed to have 0 equivariance error, while injecting equivariance to translation/rotation/reflection to a model can assure this.\n- There are also papers that compare data augmentation methods with equivariance methods. Equivariance methods are always better. Reference:\n - Wang et al. Data Augmentation vs. Equivariant Networks: A Theory of Generalization on Dynamics Forecasting. arXiv 2022.\n - Zhu et al. Sample Efficient Grasp Learning Using Equivariant Models. RSS 2022.\n- We also ran new experiments on data augmentation with random rotations/reflections, but didn’t observe significant difference.",
" We appreciate the reviewer for the time and effort spent on reviewing our work.\nWe address the concerns by individual responses, a new section in the appendix on explaining with PyTorch-style pseudocode step-by-step, as well as a new experiment section on generalization to larger maps to demonstrate the significant gap between VIN vs SymVIN and ConvGPPN vs SymGPPN. We uploaded them to the **supplementary material**.\n\nWe hope the new pseudocode section can help the reviewer understand from another perspective with minimal prerequisite of equivariant steerable CNNs. We are also open to provide a more intuitive section of the technical section in the next few days if helpful.\n\n**Concern — writing of the technical section. It is hard to understand some concepts, notations and jargons.**\n- Thank you for the feedback on the paper writing. We generally agree that the technical part is not easily accessible and realize this concern is shared with another reviewer.\n- We authors prefer different versions of the technical content (Sec 4+5), and provided a concise version in the main text and a more detailed version in the supplementary material. We wished to provide a more intuitive version for broader audience, while it is hard to do all in the main paper.\n- As a step to solve this, we write a section on explaining the SymVIN method with PyTorch-style pseudocode, since it directly corresponds to what we propose in Section 4 and 5. We try to relate (1) existing concepts with VIN, (2) what we propose in Section 4 and 5 for SymVIN, and (3) actual PyTorch implementation of VIN and SymVIN aligned line-by-line based on semantic correspondence.\n- Thanks to equivariant network community and e2cnn package, the actual implementation of SymVIN is painless and has close relationship with their non-equivariant counterpart. We show two snippets of SymVIN and compare with VIN: the definition of a steerable convolution layer in ~10 lines, and the symmetric value iteration procedure in ~15 lines.\n- We hope this new section can help make terminology more concrete in Section 4 and 5 and demonstrate what actual implementation looks like. We are happy to make the paper more accessible in the future and consider to swap some content in this section with the main text based on further feedback.\n- We will consider to have another short section on intuitively explaining our Symmetric Planning framework and practical considerations in the next few days.\n\n**Concern — ConvGPPN seems good enough. Tasks do seem challenging enough; unknown if algorithms are scalable to them.**\n- **To address this concern, we did new experiment on generalization to larger maps, but we would like to emphasize a few points before going into that.**\n - We have shown experiments on larger maps in the Section D in appendix (additional result section, moved above, originally at the end). The learning curves of training and validation success rate of SymGPPN and ConvGPPN showed gap between them.\n - We have done four tasks, all from prior work (VIN, GPPN, SPT and other work along this line [35-39]): (1) 2D path planning (used in VIN, GPPN, SPT, etc), (2) 2DoF C-space manipulation (used in SPT [37]), (3) visual navigation (used in GPPN, SPT [37], etc), (4) workspace manipulation (used in SPT [37]).\n - For the latter two tasks, since differentiable planning is able to jointly train the transition model with perception module, there is no need for known kinematics/dynamics. 
This would be intractable for path planning algorithms such as RRT or A*.\n - We want to highlight that the main algorithm we are studying is SymVIN (vs. VIN), as we use most Section 4 and 5 to explain it. In all experiments, SymVIN clearly outperforms VIN by a large gap. GPPN only empirically does computation of value iteration without theoretical justification, and we developed ConvGPPN and SymGPPN only for completeness. Even though SymGPPN empirically performed the best, it is unclear if the performance gain is due to symmetry in value iteration at all.\n - Additionally, as we will address for the next question, we already experimented on 50x50, which is larger than VIN and GPPN on 28x28 and match SPT (known for scalability using Transformers) also on 50x50.\n- **To better demonstrate the empirical difference, we conduct new experiment on generalization to larger maps. We hope this can alleviate some concern on (1) scalability and (2) performance gap between SymGPPN and ConvGPPN.**\n - We experiment all methods on map size 15x15 through 99x99, averaging over 3 seeds (3 model checkpoints, **all trained on 15x15 with K=30**) for each method and 1000 maps for each size. Between 15x15 and 49x49 we use all odd-size maps, and between 51x51 and 99x99 we use interval of 4 (51x51 → 55x55 …).\n - We keep number of iterations to be K=30 and kernel size F=3 for all methods.\n - The figure has been added to the new Section B (Figure 8) in the updated appendix in the supplementary material.",
" **Concern — inherent limitations of VIN-based methods.**\n\nIn general, we agree that VIN has its shortcomings, and the scalability is exactly the major one we were considering. However, we would like to provide more background on prior work along this line and the reasons behind our choice on VIN despite the scalability concern.\n\n- **\"Is this (using low-dimensional problem & small size) because value iteration scales poorly with problem size, and that it is a core component of the approach?\"**\n - We choose this because we follow the prior work (as pointed out, in VIN, GPPN, SPT, and so on). Specifically, VIN mainly experimented on 15x15, and GPPN mainly used 15x15 and tried 28x28 as scalability experiment. SPT advertised to be much better scalable with Transformers and used up to 50x50.\n - We think integrating symmetry into differentiable planning is an orthogonal topic with the scalability of differentiable planning algorithm, although symmetry could potentially help on scalability.\n - Also, the new experiment shows that the model can generalizes to larger maps, which unveils potential of scalability. This has not been done in prior work along this line.\n- **Why we choose differentiable planning, specifically VIN, to incorporate symmetry?**\n - We implement based on value iteration network (VIN) for some reasons.\n - (1) The expected value operation in value iteration $\\sum_{s'} P(s'|s, a) V(s')$ is linear in value function. Since we also proved that value iteration for (2D) path planning is equivariant, this means the Bellman operator is a linear equivariant operator. According to Cohen et al. (2020) [12], any linear equivariant operator has one-to-one correspondence to a (group equivariant) convolution operator.\n - (2) Value iteration, or Bellman (optimality) operator $V_{k+1}(s) = \\max_a R^a(s) + \\gamma \\times \\left[ {P}^a \\star V_k \\right] (s)$, only relies on operating on fields (“images”) over $\\mathbb{Z}^2$, such as value function, reward function, and transition functions.\n - This enables to inject symmetry (8 states are symmetric under $D_4$) by enforcing same value (after transformation, $D_4$-equivariance), which avoids to find if a new state is symmetric to any existing state.\n - For the above reasons, we find VIN is empirically the simplest differentiable planning algorithm that satisfies both desiderata.\n - Additionally, equivariant network community developed techniques to apply convolution networks on non-Euclidean spaces, such as spheres (e.g. spherical CNNs) or even general manifold (gauge equivariant CNNs). It is possible to extend our framework to those cases, which may enable decision-theoretic planning\n - For example, it is possible to consider planning under uncertainty on a torus formed by a 2-joint arm, which is our experiment on 2DoF C-space/workspace manipulation.",
" We appreciate the reviewer for the time and effort spent on reviewing our work.\nWe address the concerns by individual responses on why we choose VIN despite of its known scalability issue and also a new experiment section on generalization to larger maps to demonstrate the significant gap between VIN vs SymVIN and ConvGPPN vs SymGPPN.\n\n**Concern — ConvGPPN seems good enough. Tasks do seem challenging enough; unknown if algorithms are scalable to them.**\n- **To address this concern, we did new experiment on generalization to larger maps, but we would like to emphasize a few points before going into that.**\n - We have shown experiments on larger maps in the Section D in appendix (additional result section, moved up, originally at the end). The learning curves of training and validation success rate of SymGPPN and ConvGPPN showed gap between them.\n - We have done four tasks, all from prior work (VIN, GPPN, SPT and other work along this line [35-39]): (1) 2D path planning (used in VIN, GPPN, SPT, etc), (2) 2DoF C-space manipulation (used in SPT [37]), (3) visual navigation (used in GPPN, SPT [37], etc), (4) workspace manipulation (used in SPT [37]).\n - For the latter two tasks, since differentiable planning is able to jointly train the transition model with perception module, there is no need for known kinematics/dynamics. This would be intractable for path planning algorithms such as RRT or A*.\n - We want to highlight that the main algorithm we are studying is SymVIN (vs. VIN), as we use most Section 4 and 5 to explain it. In all experiments, SymVIN clearly outperforms VIN by a large gap. GPPN only empirically does computation of value iteration without theoretical justification, and we developed ConvGPPN and SymGPPN only for completeness. Even though SymGPPN empirically performed the best, it is unclear if the performance gain is due to symmetry in value iteration at all.\n - Additionally, as we will address for the next question, we already experimented on 50x50, which is larger than VIN and GPPN on 28x28 and match SPT (known for scalability using Transformers) also on 50x50.\n- **To better demonstrate the empirical difference, we conduct new experiment on generalization to larger maps. We hope this can alleviate some concern on (1) scalability and (2) performance gap between SymGPPN and ConvGPPN.**\n - We experiment all methods on map size 15x15 through 99x99, averaging over 3 seeds (3 model checkpoints, **all trained on 15x15 with K=30**) for each method and 1000 maps for each size. Between 15x15 and 49x49 we use all odd-size maps, and between 51x51 and 99x99 we use interval of 4 (51x51 → 55x55 …).\n - We keep number of iterations to be K=30 and kernel size F=3 for all methods.\n - The figure has been added to the new Section B (Figure 8) in the updated appendix in the supplementary material.",
" We appreciate the reviewer for the time and effort spent on reviewing our work.\nWe address the concerns by individual responses and also a new section in the appendix on explaining with PyTorch-style implementation step-by-step.\nWe hope the new section can help the reviewer understand from another more concrete perspective.\nWe are also open to provide a more intuitive section of the technical section in the next few days if useful.\nWe uploaded them to the **supplementary material**.\n\n**Concern — writing of the technical section. It is hard to understand some concepts, notations and jargons.**\n- Thank you for the feedback on the paper writing. We generally agree that the technical part is not easily accessible and realize this concern is shared with another reviewer.\n- We authors prefer different versions of the technical content (Sec 4+5), and provided a concise version in the main text and a more detailed version in the supplementary material. We wished to provide a more intuitive version for broader audience, while it is hard to do all in the main paper.\n- As a step to solve this, we write a section on explaining the SymVIN method with PyTorch-style pseudocode, since it directly corresponds to what we propose in Section 4 and 5. We try to relate (1) existing concepts with VIN, (2) what we propose in Section 4 and 5 for SymVIN, and (3) actual PyTorch implementation of VIN and SymVIN aligned line-by-line based on semantic correspondence.\n- Thanks to equivariant network community and e2cnn package, the actual implementation of SymVIN is painless and has close relationship with their non-equivariant counterpart. We show two snippets of SymVIN and compare with VIN: the definition of a steerable convolution layer in ~10 lines, and the symmetric value iteration procedure in ~15 lines.\n- We hope this new section can help make terminology more concrete in Section 4 and 5 and demonstrate what actual implementation looks like. We are happy to make the paper more accessible in the future and consider to swap some content in this section with the main text based on further feedback.\n- We will consider to have another short section on intuitively explaining our Symmetric Planning framework and practical considerations in the next few days.\n\n**Concern — Why differentiable planning is potentially useful for robotics people? (”The benefit of differentiable planning may not be well known in the robotics community”)**\n- We would like to emphasize that we have done four tasks, all from prior work (VIN, GPPN, SPT and other work along this line [35-39]): (1) 2D path planning (used in VIN, GPPN, SPT, etc), (2) 2DoF C-space manipulation (used in SPT [37]), (3) visual navigation (used in GPPN, SPT [37], etc), (4) workspace manipulation (used in SPT [37]).\n - We will also edit the paper to make our tasks demonstrated more clear.\n- For the latter two tasks, since differentiable planning is able to jointly train the transition model with perception module, there is no need for known kinematics/dynamics. This would be intractable for path planning algorithms such as RRT or A*.\n - Concretely, for visual navigation, the input is a collection of 4 egocentric RGB images facing 4 directions (north, east, south, west) in every location, while the workspace manipulation has topdown pixel input.\n - They are all not typical input to RRT or A* and not trivial to handle. However, they can be easily processed by a perception network first (e.g. a mapper module). 
Differentiable planning is a known way to be compatible with that, since the planning module and the perception module can be trained together, as shown in our paper and SPT [37].\n- As Reviewer duzU points out, differentiable planning is of interest to the ML community (including NeurIPS), where the above strength (end-to-end differentiability) is one potential reason.",
" This paper presents an approach to leverage problem-domain symmetry for planning problems defined over small 2D lattices. The key idea is to extend value iteration networks by using steerable convolutions to exploit symmetric structure (e.g., translational/rotational equivariance). Evaluation is carried out on three problem domains, and the presented approach performs better than VIN (value iteration networks) and GPPN (gated path planning networks). Strengths\n=========\n\n**S1** The paper is very well-written. Differentiable path planning is an area of great interest to the Neurips(/ICLR/ICML) community, as evidenced by several similar papers in the past (e.g., VIN, GPPN, [35-38]).\n\n**S2** The paper is largely self-contained. As a reader who was not an expert in symmetries and VIN, I appreciated the pointers to relevant readings. This section helped establish context for the technical contributions sections.\n\n**S3** At a technical level, the paper looks well-executed. The core hypothesis is sound -- the 2D gridworld domain (with the considered transition) exhibits symmetries that can be leveraged by learning-based planners to both perform well on the planning problem and also to generalize better to novel problem instances. Building on VINs, which reformulate value iteration as a series of convolutional operators, the proposed approach additionally leverages group convolutions (specifically steerable CNNs) to induce symmetry. The formulation is sound, and achieves better performance compared to variants that do not explicitly assume symmetric structure in the problem domains.\n\n---\n\nWeaknesses\n==========\n\n**W1** *Experiments*: Currently, analysis is only carried out on three 2D problem domains (if counting the C-space and workspace manipulation environments as distinct). In table 1, ConvGPPN seems to already achieve stellar performance on all tasks (89.88 success rate % on the workspace manipulation env, >97% on all other envs). The gains due to SymGPPN, while consistent, do not appear to be significant. This might well be because of the inherent task complexity (ConvGPPNs essentially seem to 'solve' the task); to better investigate the benefits of SymGPPN either larger problem instances or more complex problem domains are necessary.\n\n**W2** *Inherent limitations*: One potential reason for choosing low-dimensional (2D) problem domains and further, small problem sizes (largest problem involves 50 x 50 grid) is the (apparent) poor scalability of the approach. Is this because value iteration scales poorly with problem size, and that it is a core component of the approach? Would(n't) this also impact scalability to more complex problem domains (e.g., 3D environments involving larger action spaces, for example)? (While I appreciate the fact that the manuscript lists this as an avenue for future work, I also believe this shortcoming greatly limits the problems this technique can be applied to, thereby affecting the perceived impact of this work).\n\n---\n\nIn summary, I think this work tackles an exciting direction; but I am of the opinion that it needs more experimental analysis and some strategies to mitigate the inherent limitations it brings along. I would like to see **W1** and **W2** discussed please see **W2** (while authors explicitly list a limitation as an avenue for future work, I believe that limits the scope of the current submission -- this is factored into my eventual score)",
" The paper identifies the symmetry in 2D path planning problem and proposes a framework for incoporating the symmetry into an end-to-end differentiable planning framework. This is done mainly by extending the CNN in value iteration network to steerable CNN. The paper demonstrates the advantages of such approaches in two 2D path planning domains. To say it upfront, with a background in robotic manipulation and reinforcement learning, I do not have background in steerable feature fields and consider the paper outside my area of expertise. I find it difficult to understand the technical details of the paper. As such, I can only provide some high-level feedback to the paper and hope the AC and other reviewers can provide more detailed evaluations.\n\nHere are some suggestions on writing:\n1. The explaination of steerable CNN in Sec. 3 heavily references [14, 15, 16], making it hard for me to understand the concept from reading this paragraph alone. Some notations and jargons are also not explained. Figure 2 does not help much as multiple concepts are squeezed into one figure. \n2. The benefit of differentiable planning may not be well known in the robotics community. The two tasks done in the paper seems almost trivial and can be easily solved using other path planning techniques like RRT or A*. The paper could benefit from a better motivation on why working on differentiable planning. 1. My understanding is that the symmetry group is manually defined instead of being learned through data. In such case, is there a simpler way of incoporating the symmetry? For example, one way could be applying each transformation in the group, run the planner and returns the plan with the lowest cost.\n\n2. Can the approach generalize to continuous actions? Yes",
" The paper proposes using steerable convolutions in the value iteration networks framework to incorporate equivariance under rotations and reflections. ### Strengths\n\n* The proposed approach brings quite a bit of improvement over a regular VIN.\n* Equivariance under rotation and reflection are valuable inductive biases.\n\n### Weaknesses\n\n* I believe the paper has an accessibility issue. The main audience for the paper will be researchers interested in VINs. I would expect these to generally not be too familiar with steerible CNNs and the theory that goes into them. Yet, the paper is quite heavy with mathematical notation and terminology from this area. I understand that the authors need to use the math that goes into steerible CNNs to be able to explain their ideas. Though at the moment specificity is getting in the way of clarity. Sections 4 and 5 currently contain too much jargon that is spread throughout the text in a way that makes it genuinely hard to follow the narrative through-line. I would like to give some passages as examples, just to make it clear, what exactly I mean: lines 104-115, 123-128, 171-175, 194-198, 243-249. I think it is in the best interest of the authors to rethink the presentation of sections 4 and 5, and present their ideas in a way that requires a minimum of knowledge about steerible CNNs. Otherwise, you are limiting your reach by creating a hurdle of mathematical prerequisites. Of course, the reader needs to eventually familiarize themselves with the math to truly understand the paper, but the first time reader shouldn't be completely lost either. Again, most people who read your paper won't be familiar with this math, and currently I find it unlikely they will have an easy time.\n* The gains in performance over ConvGPPN are somewhat marginal. This would be less of an issue if it weren't for my previous point. If I was a researcher or practitioner interested in using VINs, I believe Table 1 might convince me to use ConvGPPN and accept a slight drop in performance in exchange for the relative algorithmic and theoretical simplicity. ### Questions\n\n* Does the method extend to rotations that are not $90^\\circ$?\n\n* Do you think your method will bring improvements over data augmentation techniques that could extend the data set with rotated/reflected versions of the normal environments?\n These points are addressed well."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
1,
2
] | [
"BkMT4Vef6nz",
"PoRrJJt9VjR",
"zQgRMQ00pdC",
"aT_jLsUgJH",
"zQgRMQ00pdC",
"EWe-2DSsXUo",
"nips_2022_lxdWr1jN8-h",
"BkMT4Vef6nz",
"s-zxjhwEIAI",
"s-zxjhwEIAI",
"PoRrJJt9VjR",
"PoRrJJt9VjR",
"BkMT4Vef6nz",
"nips_2022_lxdWr1jN8-h",
"nips_2022_lxdWr1jN8-h",
"nips_2022_lxdWr1jN8-h"
] |
nips_2022_jQR9YF2-Jhg | Respecting Transfer Gap in Knowledge Distillation | Knowledge distillation (KD) is essentially a process of transferring a teacher model's behavior, e.g., network response, to a student model. The network response serves as additional supervision to formulate the machine domain, which uses the data collected from the human domain as a transfer set. Traditional KD methods hold an underlying assumption that the data collected in both the human domain and the machine domain are independent and identically distributed (IID). We point out that this naive assumption is unrealistic and there is indeed a transfer gap between the two domains. Although the gap offers the student model external knowledge from the machine domain, the imbalanced teacher knowledge would make us incorrectly estimate how much to transfer from teacher to student per sample on the non-IID transfer set. To tackle this challenge, we propose Inverse Probability Weighting Distillation (IPWD) that estimates the propensity of a training sample belonging to the machine domain, and assigns its inverse amount to compensate for under-represented samples. Experiments on CIFAR-100 and ImageNet demonstrate the effectiveness of IPWD for both two-stage distillation and one-stage self-distillation. | Accept | This paper analyzes the way in which most previous knowledge distillation methods violate IID assumptions and it aims to address the drop in performance on student models through this analysis. The paper proposes an Inverse Probability Weighting Distillation (IPWD) technique, derived in part through a causal analysis of the distillation setting. Results are mainly presented for CIFAR-100, but some ImageNet results are given and these results show that the proposed approach does indeed outperform a wide variety of prior work for distillation. The review scores for this paper place it right at the borderline of acceptance, with two weak accepts and one weak reject.
Given the paper was at the borderline of numerical acceptance and the signals from reviews and subsequent discussions were not conclusive, the Area Chair also read this paper and found the underlying idea to be quite interesting and novel. The application of causal analysis to the problem in this way does a nice job of bringing together an important branch of machine learning (causal analysis) with deep learning and knowledge distillation. The AC also judged that the experimental work in this paper was substantial. Given that the method also yields better results than many other prior methods, the AC recommends accepting this paper.
| train | [
"LI5cWy6Bi0x",
"MZi-KlZqo9i",
"yMbQ1tnXxA",
"pVcIn0Z7GS",
"JT9Yjei1eDd",
"AhklVs9q4xe",
"6Jrp6ou7CTU",
"-c8Bbfr-sN",
"hjqzjrnpXUe",
"UjsqdMlWrw",
"lLuoU59rfLJ",
"sJYLD78tLv",
"YldNQ3NtwQ",
"vgTzGNqb8mz",
"toqKWM5xUyy",
"clergAPJ-04",
"USK2rArPAaD",
"z-bNO_kiBJJ"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the acknowledgement of our responses and for upgrading the rating! For your remaining concerns, we would like to summarize the motivation, contributions (especially the **technical contribution** of propensity score estimation), and empirical performance on ImageNet as follows:\n\n* **Motivation**: We proposed to revisit KD from the perspective of the transfer gap. We found that (1) the teacher's knowledge is imbalanced on the transfer set (Figure 1 and Lines 51-61 in the revised version), and (2) the distillation performance on the under-represented classes is the bottleneck (Response to Reviewer 6yEk's W1). We hope to compensate for the under-weighted training samples.\n\n* **Analytical contributions**: We proposed IPWD inspired by the success of inverse probability weighting (IPW) in causal inference. We **interpret the transfer gap in KD from the perspective of causal inference**, and point out that the transfer set brings in a confounding effect (Lines 148-169 in the revised paper). \n\n> The method of IPW has been well-established before. The technical contribution is limited.\n\n* **Technical contributions**: Although IPW is widely used in causal inference, how to apply IPW to KD remains challenging and has not been explored. The key challenges are: (1) how to implement the idea of intervention and IPW, and (2) how to estimate the propensity score when there are no annotations. To tackle the first challenge, we **proposed a weighted distillation loss** (Eq. (6)) to compensate for the under-weighted training samples and overcome the confounding effect. To tackle the second challenge, we **proposed a new propensity score estimation strategy** to obtain the sample weights from data automatically. Specifically and technically, we (1) added an extra classification-trained (CLS-trained) head (Figure 3(b)) and compared the outputs of the CLS-trained head and the KD-trained head (Eq. (5)), and (2) normalized the logits for stable training. We believe that our implementation of propensity score estimation (Line 202, Line 209, Eq. (5) in the revised paper) is novel and detailed, and the ablation studies in Table 5 validated the effect of our technical designs.\n\n> Empirical performance (e.g. on ImageNet) is not so significant.\n\n* **Performance on ImageNet**: We believe that the results on ImageNet validate the motivation, hypothesis, and effectiveness of our IPWD well. The results on ImageNet have three parts. First, the results with the **same architecture style for two-stage distillation**. Our IPWD **slightly underperforms WSLD by 0.16\\%** (71.88\\% vs. 72.04\\%) **but outperforms other baselines**, including the recent logit-based distillation method DKD published in CVPR'22. Second, the results with **different architecture styles for two-stage distillation**. Our IPWD **outperforms WSLD by 1.03\\%** (72.65\\% vs. 71.52\\%), **and outperforms the recent DKD by 0.6\\%** (72.65\\% vs. 72.05\\%). We believe **the results with different architecture styles**, i.e., where the performance gap between teacher and student is larger (3.56\\% vs. 7.29\\%), **are more convincing for validating our motivation and hypothesis about the transfer gap**. Similar results can be observed on CIFAR-100. Third, the results for **one-stage distillation**. Our IPWD **consistently promotes PS-KD on various metrics** (Table 6). In addition to increasing the accuracies, our IPWD significantly lowers the ECE (\\%) and AURC (×10$^3$) metrics (lower is better), which indicate the calibration and separation of predictions. For example, for ResNet-101, IPWD lowers the ECE of PS-KD from 6.92 to 3.19, and lowers the AURC from 49.01 to 43.82.\n\n---\n\nThank you again for your effort and time in improving our work! We hope these further responses address your remaining concerns. Please let us know if there is anything unclear.",
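A minimal PyTorch-style sketch of the two-head propensity estimation described in the response above. This is our illustrative reading of the stated procedure (compare a CLS-trained head against a KD-trained head, normalize logits for stability), not a verbatim copy of the paper's Eq. (5); all function and variable names are our own.

```python
import torch
import torch.nn.functional as F

def propensity_weights(cls_logits, kd_logits, labels):
    """Estimate per-sample weights from a CLS-trained and a KD-trained head.

    Samples the KD-trained head fits worse than the CLS-trained head are taken
    to be under-represented in the transfer set and receive larger weights.
    """
    cls_logits = F.normalize(cls_logits, dim=1)   # logit normalization for stability (assumed form)
    kd_logits = F.normalize(kd_logits, dim=1)
    ce_cls = F.cross_entropy(cls_logits, labels, reduction="none")
    ce_kd = F.cross_entropy(kd_logits, labels, reduction="none")
    w = torch.exp(ce_kd - ce_cls)                 # inverse-propensity-style weight
    return (w / w.mean()).detach()                # normalized; no gradient to the heads
```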
" I would like to thank the authors for the detailed response to my earlier questions. The response clarifies my concerns over independence and confounding effect. Overall, I think the technical contribution and empirical performance (e.g. on ImageNet) are not so significant. I would like to increase my rating by 1. ",
" Thank you for the further comments!\n\n> Does this mean that we simply use feature-based KD instead of logits-KD?\n\nWe did not mean that. Logit-based strategies and feature-based strategies are two different research directions for KD. Our main focus is how to promote **logit-based distillation** methods (e.g., KD in Tables 2 and 4, SSKD in Table 3, PS-KD in Table 6 in our main paper) from the perspective of the transfer gap, especially the imbalanced teacher's **logit knowledge**. As our analyses are all at the logit level, there is no guarantee that our analyses and conclusions at the logit level can be directly adopted at the feature level, and we will not over-claim the generalization of our conclusions.\n\n > I wonder whether the weight for the feature-based KD term is well set.\n\nWe would like to provide more details of implementing IPW on ReviewKD. The weights for the feature-based KD term are calculated in the same way as for logit-based KD, i.e., Line 212 and Eq. (6) in our revised paper. Note that the weight calculation in our IPWD is based on the outputs of a KD-trained classifier (Line 197) and a CLS-trained classifier (classification-trained classifier, Lines 196, 204-205). Since the classifier of ReviewKD is updated using the cross-entropy loss, we added an extra classifier trained with the KD loss, and the gradients of the KD loss are not back-propagated to the visual backbone. The calculated sample weights are applied to the feature-based KD term for each sample. During the test stage, the extra classifier is discarded.\n\nA narrower and more precise conclusion is that the weight calculation in our IPWD is not applicable to feature-based methods. We think a reweighting strategy for feature-based KD would be an interesting but less explored direction, which is out of the scope of our paper. Inspired by the discussion with the reviewer, we would like to explore other reweighting strategies for feature-based KD in the future.\n\n-------\n\nPlease let us know if you have any further questions, concerns, or suggestions!",
" I really appreciate the response from the authors. However, I am concerned with your response to W3. First, the authors claim that \"although the logit knowledge of label is imbalanced, the representation knowledge of sample might be relatively balanced\". Does this mean that we simply use feature-based KD instead of logits-KD? Besides, I am not convinced by the results of the combined feature-based KD method. I wonder whether the weight for the feature-based KD term is well set. ",
" Dear Reviewer ADzo,\n\nJust a kind reminder that the author-reviewer discussion will end soon on Tuesday, August 9. Reviewer 6yEk has acknowledged our rebuttal and raised the rating. If you have any further questions or concerns, please don't hesitate to let us know! We are looking forward to your follow-up feedback.\n\nBest,\n\nPaper 2438 Authors",
" Thank you for the follow-up comments and suggestions! We have included the discussion about W1 in the revised paper. We will follow your suggestions and are working on the application to more SOTA KD methods, more results on ImageNet, and a discussion of strong teachers. We will keep updating once we have new results.",
" Thanks for your detailed responses.\n\n**W1**: The results are convincing to me. I hope the authors can add this to the revised paper to better clarify the motivation.\n\n**W2**: Maybe a combination of IPWD and recent state-of-the-art KD methods can achieve better accuracy; it would be nice to see that in a future revision.\n\n**Q1**: I suggest the authors provide experiments on ImageNet in a future revision. Also, Reviewer T8CL raises an interesting topic about stronger teachers; the authors could add more experiments and discussions on it to show the superiority of IPWD.",
" Dear Reviewer ADzo,\n\nWe sincerely thank you for your efforts and time in our work. We tried our best to address all the concerns and questions. We have also updated the main paper and appendix following your comments. Please feel free to let us know if you have any further concerns or questions to discuss.\n\nBest,\n\nPaper 2438 Authors",
" Dear Reviewer 6yEk,\n\nWe sincerely thank you for your efforts and time in our work. We tried our best to address all the concerns and questions. We have also updated the main paper and appendix following your comments. Please feel free to let us know if you have any further concerns or questions to discuss.\n\nBest,\n\nPaper 2438 Authors",
" Dear Reviewer T8CL,\n\nWe sincerely thank you for your efforts and time in our work. We tried our best to address all the concerns and questions. We have also updated the main paper and appendix following your comments. Please feel free to let us know if you have any further concerns or questions to discuss.\n\nBest,\n\nPaper 2438 Authors",
" First of all, we gratefully thank all the reviewers for their thoughtful comments and feedback. \n\nWe are encouraged that the reviewers find our proposed domain-transfer perspective of KD interesting (Reviewer T8CL, Reviewer 6yEk), and reasonable and novel (Reviewer ADzo). We are glad that the reviewers find our paper well structured and well written (Reviewer T8CL), our proposed method helpful (Reviewer ADzo) and achieving significant improvements over the KD baseline (Reviewer 6yEk), and our experimental results comprehensive (Reviewer T8CL).\n\nWe tried to address all the concerns and questions in detail. In particular, following the suggestions and comments of the reviewers, we further provide (1) detailed explanations of propensity and the confounding effect (Reviewer T8CL, Reviewer ADzo), (2) a discussion about applying long-tailed techniques to KD (Reviewer 6yEk), (3) a discussion about applying our method to feature distillation (Reviewer 6yEk), (4) an analysis of KD's performance on under-represented classes (Reviewer 6yEk), and (5) the effect of training the teacher model with label smoothing on our method (Reviewer 6yEk).\n\nWe hope that our responses answer the questions.",
" **Q4: Any other published baselines?**\n\nA: For fair comparisons, we compare our method with baseline methods using the same teacher model parameters, regardless of whether the teacher model may be the bottleneck of the student model's performance. One of the state-of-the-art methods, SSKD, achieves high performance with a better pre-trained teacher (e.g., improving the teacher model WRN-40-2 from 75.61 to 76.46). In the main paper, we have also reimplemented SSKD and applied our IPWD on top of SSKD using the same teacher parameters. As shown in Table 2 in the main paper, our IPWD can improve SSKD with various architecture styles. \n\nAnother recent logit-based state-of-the-art method, DKD (CVPR'22), was officially published after the NeurIPS submission deadline. \nWe reimplemented DKD (denoted as DKD$^*$) and IPWD using their released code and the same teacher model for fair comparisons. \n\nAs shown in the tables below, our IPWD achieves comparable performance on CIFAR-100, and outperforms DKD on ImageNet. In particular, IPWD outperforms DKD by 0.6\\% with ResNet-50 as teacher and MobileNet-v1 as student on ImageNet, i.e., when teacher and student have different architecture styles. These comparisons demonstrate the effectiveness of our IPWD.\n\n*Table: Results on CIFAR-100*\n\n| Teacher | resnet50 | resnet32x4 | resnet32x4 | WRN-40-2 | vgg13 |\n|-------------------|----------------|----------------|----------------|----------------|----------------|\n| Student | MobileNetV2 | ShuffleNetV1 | ShuffleNetV2 | ShuffleNetV1 | MobileNetV2 |\n| DKD | **70.35** | 76.45 | **77.07** | 76.70 | 69.71 |\n| DKD$^*$ | 70.27 | 76.03 | 76.99 | 76.49 | 69.02 |\n| IPWD | 69.78 | **76.61** | 76.72 | **76.92** | **69.81** |\n\n*Table: Results on ImageNet*\n\n| Teacher | ResNet-34 | | ResNet-50 | |\n|----|------|----------|--------|-------|\n| Student | ResNet-18 | | MobileNet-v1 | |\n| | Top-1 | Top-5 | Top-1 | Top-5 |\n| DKD | 71.70 | 90.41 | 72.05 | 91.05 |\n| IPWD | **71.88** | **90.50** | **72.65** | **91.08** |\n\n[SSKD] Knowledge Distillation Meets Self-Supervision. ECCV'20.\n\n[DKD] Decoupled Knowledge Distillation. CVPR'22.\n",
" **Q1: Given the parameter $\\theta^t$, are $(x, y^t)$ independent from each other? If so, the justification of the use of IPW is weakened.**\n\nA: No, $(x,y^t)$ are not independent of each other. Note that the hard targets $y$ only provide context-invariant class-specific information (Line 39), while $y^t$ further contains context information. As the teacher model is trained on the training set $\\mathcal{D}$, the parameter $\\theta^t$ is conditioned on all the training samples $(x,y)$. As $y^t$ is obtained by $y^t=f(x;\\theta^t)$, $y^t$ is conditioned on $\\theta^t$, and further conditioned on the other training pairs $\\mathcal{D}\\setminus${$(x,y)$} (Lines 47-49). In other words, $y^t$ is obtained by seeing other training samples. Therefore, $(x, y^t)$ are not independent of each other, and samples with frequent context information are more likely to be observed in the transfer set.\n\n---\n\n**Q2: What is the confounding effect brought by the transfer gap, and how does it occur in the dataset of the teacher model?**\n\nA: We are sorry for the confusion. We provide detailed explanations of the confounder, the confounding effect, and the effect of IPW in the following.\n\nThe training data $\\mathcal{D}=${$(x,y)$} and the teacher model $\\theta^t$ jointly act as the confounder of the image $x$ and the teacher prediction $y^t$ in the transfer set. First, the training set $\\mathcal{D}$ and the transfer set of the teacher model $\\mathcal{D}^t=${$(x,y^t)$} share the same image set, and $x$ is sampled from the image set of $\\mathcal{D}$, i.e., $\\mathcal{D}$ serves as the cause of $x$. Second, the teacher $\\theta^t$ is trained on $\\mathcal{D}$, and $y^t$ is calculated based on $\\theta^t$ and $x$, i.e., $y^t=f(x;\\theta^t)$. Therefore, $x$ and $\\theta^t$ are the causes of $y^t$. Note that the transfer set is constructed based on the images in $\\mathcal{D}$ and the teacher model $\\theta^t$. Therefore, we regard $\\mathcal{D}^t$, the joint of $\\mathcal{D}$ and $\\theta^t$, as the confounder of $x$ and $y^t$.\n\nAlthough $\\mathcal{D}$ is balanced when considering the context-invariant class-specific information, the context information (e.g., attributes) is overlooked, which makes $\\mathcal{D}$ imbalanced in context. Such an imbalanced context leads to an imbalanced transfer set $\\mathcal{D}^t$ (as shown in Figure 1 in the main paper), and further affects the distillation of the teacher's knowledge.\n\nTo overcome the above confounding effect, a commonly used technique is intervention via $P(y^t|do(x))$ instead of $P(y^t|x)$, which is formulated as $P(y^t|do(x))=\\sum_{\\mathcal{D}^t} P(y^t|x,\\mathcal{D}^t)P(\\mathcal{D}^t)=\\sum_{\\mathcal{D}^t} \\frac{P(x,y^t,\\mathcal{D}^t)}{P(x|\\mathcal{D}^t)}$. This transformation suggests that we can use the inverse of the propensity score $P(x|\\mathcal{D}^t)$ (i.e., $P(x|machine)$ in the main paper) as the sample weight to implement the intervention and overcome the confounding effect. \n\n---\n\n**Q3: Why is the label distribution of ImageNet a straight line? How is the average probability on the y-axis calculated for labels or predictions from a teacher model?**\n\nA: We agree and have noticed that the training set of ImageNet is not perfectly balanced. However, 895 out of 1000 classes have 1300 images, and only 36 classes have fewer than 1100 images. Therefore, we think the training set is still relatively balanced. For simplicity, we use a straight line to denote the relative balance of the training set and highlight the imbalance of the teacher predictions. We will update the figure to avoid misleading readers.\n\nFor calculating the average probability from a teacher model, we first obtain the teacher's predicted probability distribution for each training sample, and then sum up the probabilities over all the training samples. The simplification of the training distribution on ImageNet does not affect the correctness and sharpness of the teacher's prediction distribution.",
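A minimal PyTorch-style sketch of the inverse-propensity-weighted distillation loss implied by the $P(y^t|do(x))$ derivation above. This is our illustration rather than the paper's exact Eq. (6); the propensity estimates $P(x|\mathcal{D}^t)$ are assumed to be given, and the normalization choice is ours.

```python
import torch
import torch.nn.functional as F

def ipw_kd_loss(student_logits, teacher_logits, propensity, tau=4.0):
    """Per-sample KD loss re-weighted by inverse propensity scores.

    propensity: (B,) estimates of P(x | D^t); under-represented samples
    have small propensity and therefore receive large weights.
    """
    p_t = F.softmax(teacher_logits / tau, dim=1)
    log_p_s = F.log_softmax(student_logits / tau, dim=1)
    kd = F.kl_div(log_p_s, p_t, reduction="none").sum(dim=1)  # (B,) per-sample KL
    w = 1.0 / propensity.clamp_min(1e-8)
    w = w / w.mean()                                          # keep the loss scale stable
    return (w * kd).mean() * tau ** 2                         # standard temperature scaling
```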
" **W1: What is the impact of imbalanced teacher knowledge in conventional KD methods, e.g., comparing the predictions of under-represented classes of a student trained with or without KD?**\n\nA: Thanks for pointing this out. Following the reviewer's suggestion, we rank and divide the 100 classes of CIFAR-100 into 4 groups according to the averaged predicted probability of the teacher model on the training set. Following the long-tail recognition task, which also groups classes according to their numbers of samples, we take the macro-average recall as the metric. We report the improvement of KD compared to vanilla training (i.e., a student trained without KD) in the table below. \n\nCompared to vanilla training, KD achieves better performance in all the subgroups. However, going deeper into the improvement for each subgroup, we found that the increase in the top 25 classes (i.e., over-represented) is much higher than in the last 25 classes (i.e., under-represented), i.e., 5.14 vs. 0.85 on average. This observation verifies our hypothesis that the effectiveness of KD on the under-represented samples is the bottleneck of KD, which is an interesting but overlooked issue in existing works.\n\n| Teacher->Student | Top 1-25 | Top 26-50 | Top 51-75 | Top 76-100 |\n|-------------------|----------------|----------------|----------------|----------------|\n| ResNet50 -> MobileNetV2 | +4.96 | +5.92 | +1.76 | +1.20 |\n| resnet32x4 -> ShuffleNetV1 | +5.80 | +2.68 | +2.52 | +0.84 |\n| resnet32x4 -> ShuffleNetV2 | +4.72 | +1.92 | +2.24 | +0.76 |\n| WRN-40-2 -> ShuffleNetV1 | +5.08 | +7.20 | +4.48 | +0.60 |\n\n---\n\n**W2: In Table 3, IPWD performs worse than WSLD on the ResNet-18 student (71.88\\% vs. 72.04\\%) on ImageNet.**\n\nA: Actually, our IPWD performs better than WSLD in most cases on CIFAR-100 and ImageNet. \n\nOn CIFAR-100, IPWD outperforms WSLD with various teacher and student architectures, especially when their architecture styles are different, i.e., when the gap between teacher and student is large. On ImageNet, although our IPWD underperforms WSLD by 0.16\\% when the teacher and student have the same architecture style (ResNet-34 -> ResNet-18) and their performance gap is relatively small (top-1 accuracy gap: 3.56\\%), IPWD outperforms WSLD by 1.13\\% when their architecture styles are different (ResNet-50 -> MobileNet-v1) and their performance gap is relatively large (top-1 accuracy gap: 7.09\\%). \n\nThese comparisons demonstrate the effectiveness of IPWD in bridging the transfer gap, especially when the gap between teacher and student models is large. We believe this setting is more practical and general in real-world applications. (Lines 253-267)\n\n---\n\n**Q1: Can IPWD work with distribution shifts of teacher models caused by label smoothing?**\n\nA: Similar to KD, the performance of IPWD drops when the teacher model is trained with label smoothing, but it still outperforms KD. However, we found that the improvement of IPWD over KD also decreases with label smoothing. \n\nFor example, on CIFAR-100, given ResNet50 as teacher and MobileNetV2 as student, IPWD outperforms KD by 1.12\\% (69.67\\% vs. 68.55\\%) without label smoothing, but the improvement drops to 0.56\\% (66.79\\% vs. 66.23\\%) with label smoothing. Given resnet32x4 as teacher and ShuffleNetV1 as student, IPWD outperforms KD by 1.52\\% (75.79\\% vs. 74.27\\%) without label smoothing, but the improvement drops to 0.53\\% (73.27\\% vs. 72.74\\%) with label smoothing. \n\nWe observed that a teacher trained with label smoothing produces more balanced predictions than one trained without it. Therefore, the results are consistent with our hypothesis and conclusion that IPWD helps to bridge the transfer gap, especially when the teacher's context information is imbalanced.\n",
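For reproducibility, the grouping-and-recall protocol described in W1 above can be sketched in a few lines of NumPy. The 4-group split matches the response; the function and variable names are our own.

```python
import numpy as np

def grouped_recall_gain(avg_teacher_prob, recall_kd, recall_vanilla, n_groups=4):
    """Rank classes by the teacher's average predicted probability and report
    the mean recall improvement of KD over vanilla training within each group."""
    order = np.argsort(-avg_teacher_prob)            # most over-represented classes first
    groups = np.array_split(order, n_groups)         # e.g. Top 1-25, 26-50, 51-75, 76-100
    return [float((recall_kd[g] - recall_vanilla[g]).mean()) for g in groups]
```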
" **W1: Can techniques for long-tailed classification fix the long-tailed property of teacher predictions?**\n\nA: Thanks for the insightful question! We select LA (ICLR'21) as a recent representative technique for long-tailed classification. LA proposed a logit-adjusted softmax cross-entropy loss that applies a class prior to each logit. LA does not require extra modules (compared to TDE, NeurIPS'20), post-hoc logit adjustment (compared to LADE, CVPR'21), or an ensemble of multiple models (compared to RIDE, ICLR'21).\n\nFollowing LA, we applied the class prior to the student output when calculating the KL-divergence distillation loss. We found that KD+LA underperforms KD by 0.5\\% on average on CIFAR-100. The possible reason is that the introduced prior indirectly breaks the teacher's knowledge for each training sample, which hurts the effectiveness of distillation. These results indicate that logit-adjustment-based long-tailed techniques are not applicable to this issue of KD. We will explore other types of long-tailed techniques in the future.\n\n[LA] Long-tail learning via logit adjustment. ICLR'21.\n\n[TDE] Long-Tailed Classification by Keeping the Good and Removing the Bad Momentum Causal Effect. NeurIPS'20.\n\n[LADE] Disentangling Label Distribution for Long-tailed Visual Recognition. CVPR'21.\n\n[RIDE] Long-Tailed Recognition by Routing Diverse Distribution-Aware Experts. ICLR'21.\n\n---\n\n**W2: How to understand propensity in a principled and mathematical way?**\n\nA: Let us first introduce the concept of the confounder in distilling the teacher's knowledge. The training data $\\mathcal{D}=${$(x,y)$} and the teacher model $\\theta^t$ jointly act as the confounder of the image $x$ and the teacher prediction $y^t$ in the transfer set. First, the training set $\\mathcal{D}$ and the transfer set of the teacher model $\\mathcal{D}^t=${$(x,y^t)$} share the same image set, and $x$ is sampled from the image set of $\\mathcal{D}$, i.e., $\\mathcal{D}$ serves as the cause of $x$. Second, the teacher $\\theta^t$ is trained on $\\mathcal{D}$, and $y^t$ is calculated based on $\\theta^t$ and $x$, i.e., $y^t=f(x;\\theta^t)$. Therefore, $x$ and $\\theta^t$ are the causes of $y^t$. Note that the transfer set is constructed based on the images in $\\mathcal{D}$ and the teacher model $\\theta^t$. Therefore, we regard $\\mathcal{D}^t$, the joint of $\\mathcal{D}$ and $\\theta^t$, as the confounder of $x$ and $y^t$.\n\nAlthough $\\mathcal{D}$ is balanced when considering the context-invariant class-specific information, the context information (e.g., attributes) is overlooked, which makes $\\mathcal{D}$ imbalanced in context. Such an imbalanced context leads to an imbalanced transfer set $\\mathcal{D}^t$ (as shown in Figure 1 in the main paper), and further affects the distillation performance of the teacher's knowledge.\n\nTo overcome such a confounding effect, a commonly used technique is intervention via $P(y^t|do(x))$ instead of $P(y^t|x)$, which is formulated as $P(y^t|do(x))=\\sum_{\\mathcal{D}^t} P(y^t|x,\\mathcal{D}^t)P(\\mathcal{D}^t)=\\sum_{\\mathcal{D}^t} \\frac{P(x,y^t,\\mathcal{D}^t)}{P(x|\\mathcal{D}^t)}$. This transformation suggests that we can use the inverse of the propensity score $P(x|\\mathcal{D}^t)$ (i.e., $P(x|machine)$ in the main paper) as the sample weight to implement the intervention and overcome the confounding effect.\n\n---\n\n**W3: Can IPWD work with feature-based distillation methods?**\n\nA: Perhaps not. ReviewKD is a recent representative feature-based distillation method. We applied IPWD to ReviewKD at the feature level. We found that IPWD slightly decreases the performance of ReviewKD in most cases, with marginal gaps, which indicates that IPWD is not applicable to feature-based distillation. \n\n| Teacher | WRN-40-2 | resnet56 | resnet110 | resnet32x4 | WRN-40-2 |\n|--------|------|----|-----|-----|-----|\n| Student | WRN-16-2 | resnet20 | resnet32 | ShuffleNetV2 | ShuffleNetV1 |\n| ReviewKD | 76.12 | **71.89** | **73.89** | **77.78** | **77.14** |\n| ReviewKD+IPWD | **76.25** | 71.51 | 73.79 | 77.74 | 77.06 |\n\nThe possible reasons are two-fold. First, although the logit knowledge of the label $y$ is imbalanced, the representation knowledge of the sample $x$ might be relatively balanced. Second, as pointed out by Decouple for long-tailed classification, \"data imbalance might not be an issue in learning high-quality representations\", which implies that the reweighting strategy is not compatible with the feature level. \n\nWe will include this discussion in the revised version and add it to the limitation discussion.\n\n[ReviewKD] Distilling Knowledge via Knowledge Review. CVPR'21.\n\n[Decouple] Decoupling Representation and Classifier for Long-Tailed Recognition. ICLR'20.\n\n---\n\n**W4: Can IPWD work with a huge or strong teacher?**\n\nA: Thanks for the interesting research question. Note that the reference is an arXiv preprint released after the NeurIPS submission deadline. We are happy to include this work in our future-work discussion. ",
" This paper investigates the knowledge distillation problem by formulating it as a transfer learning problem. It finds that even if the teacher is trained with balanced training data, the transferred information can be imbalanced, which hurts the performance of KD. Therefore, this paper proposes Inverse Probability Weighting Distillation (IPWD) as an improvement of the typical knowledge distillation loss, and extensive experimental results show the effectiveness of the proposed IPWD. + The paper investigates KD from a transfer learning perspective, which is interesting compared to the typical dark-knowledge view. \n+ The structure and writing are good. \n+ The experimental results are comprehensive. \n\n- Figure 1 shows that there seems to be a long-tailed property in the teacher predictions. I wonder whether it could be fixed by some techniques from long-tailed classification. \n\n- The formulation of propensity in Section 4.2 seems somewhat heuristic. I do not really get how to understand it in a principled and mathematical way. \n\n- IPWD is a logits-based KD method. Would it be better if combined with a feature-based method? Necessary discussions and experimental comparisons are needed. \n\n- This paper claims that knowledge distillation will work if there is a valid transfer gap. However, recent work argues that if the teacher becomes way too strong, KD seems to degrade. Will the proposed IPWD handle this issue? If so, experiments with a strong teacher should be included. A recent work, DIST, investigates a similar setting, which should be helpful if the authors want to verify the transfer gap issue in terms of a huge or strong teacher. \n\nKnowledge Distillation from A Stronger Teacher, https://arxiv.org/abs/2205.10536\n\n\n\n Please refer to the \"Weaknesses\". Yes.",
" This paper proposes a new KD method named IPWD. The authors state that the teacher's knowledge is imbalanced due to the imbalanced soft labels in the teacher's predictions, and propose to use inverse probability weighting (IPW) to balance the weight of each sample in the KD loss. Experiments on CIFAR-100 and ImageNet are provided to show the superiority of IPWD. Strengths:\n1. This paper gives an interesting domain-transfer perspective of KD, in which the distributions of ground-truth labels and the teacher's soft labels are different; this conflict makes KD less effective. Based on this, the authors propose a simple KD loss to balance the teacher knowledge using IPW, which gains significant improvements over the KD baseline.\n\nWeaknesses:\n1. The authors should discuss the impact of imbalanced teacher knowledge in conventional KD methods, to show the necessity of balanced knowledge. For example, compare the predictions of under-represented classes of a student trained with or without KD.\n2. In Table 3, IPWD performs worse than WSLD on the ResNet-18 student (71.88% vs. 72.04%). 1. The distribution of the teacher's soft labels can vary a lot under different training strategies. Can IPWD adapt well to these distribution shifts? For example, some works [1, 2] observed that KD performs poorly when combined with label smoothing.\n\nI will raise my rating if the authors can address my concerns well.\n\n\n[1] Müller, R., Kornblith, S. and Hinton, G.E., 2019. **When does label smoothing help?**. Advances in neural information processing systems, 32. \n[2] Shen, Z., Liu, Z., Xu, D., Chen, Z., Cheng, K.T. and Savvides, M., 2020, September. **Is Label Smoothing Truly Incompatible with Knowledge Distillation: An Empirical Study**. In International Conference on Learning Representations. Both limitations and potential negative societal impact were discussed.",
" This paper proposes a new loss function for teacher-student learning, motivated by the observation that the label predictions from a teacher model have a distribution shift compared with the ground-truth labels. It is pointed out that this distribution mismatch can cause a gap during transfer due to over-representation and under-representation of examples, e.g., the existence of context-equivariant and context-invariant information in some image datasets. Specifically, inverse probability weighting is used in the loss function to assign larger weights to the KL-divergence loss of under-represented examples. The weight is implemented by comparing the cross-entropies of both the classification outputs and the knowledge distillation outputs to the ground-truth label. Finally, empirical evaluations are conducted on some image datasets, and the results show that the proposed method can achieve improved performance over some baselines. Strengths: The motivation of solving the distribution mismatch problem of the teacher model is reasonable and novel. This observation is of practical importance in knowledge distillation, as this problem is expected to occur in many teacher-student learning datasets. The use of inverse probability weighting (IPW) for KD in the proposed method seems to be helpful for this problem according to the results of the experiments and ablation study.\n\nWeaknesses: Although the aforementioned problem is well clarified, the use of IPW is not well motivated or explained. Many explanations of IPW and its motivation relate to the problem of non-IID observations in the model's dataset. However, my concern is that this does not exactly fit or target the distribution mismatch problem. Please see the questions below for more details. Moreover, the method of IPW has been well-established before. The technical contribution is limited.\n Q1: It is claimed several times in the paper that the dataset generated by the teacher model is no longer IID (e.g. L142). I don't understand why this is the case. Once we have a teacher model, the dataset is generated from the distribution induced by this teacher model. Given the parameter $\\theta^t$, individual examples $(x,y^t)$ are also independent of each other. As introduced in Section 2, IPW is practically suited to non-IID observations. The resulting distribution is likely to be long-tailed, but the examples still seem to be independent, which weakens the justification of the use of IPW. Could the authors please explain more on this?\n\nQ2: L146: \"Thanks to the causality-based theory, we can use the IPW technique to overcome the confounding effect brought by the transfer gap\". In my opinion, this claim is not well supported by the current writing. Which part of the causality-based theory is relevant to the problem described in this paper? In this paragraph only references are mentioned, but a detailed explanation is missing. For the second half of the sentence, I don't understand what the confounding effect brought by the transfer gap is. How does the confounding effect occur in the dataset of the teacher model?\n\nQ3: In Fig. 1, the blue dashed line refers to the histogram of different labels. The ImageNet dataset (ILSVRC 2012) has 1,000 classes with a varying number of images per class, ranging from around 732 to 1,300. Its training set is imbalanced. I don't see why in the right subfigure the label distribution is a straight line. How is the average probability on the y-axis calculated for labels or predictions from a teacher model?\n\nQ4: In the experimental comparisons, the proposed method is compared with some baselines including CRD and WSLD. I'm not sure whether the reported results are SOTA on the knowledge distillation benchmarks. Are there any other published baselines?\n See weaknesses and questions."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
3
] | [
"MZi-KlZqo9i",
"YldNQ3NtwQ",
"pVcIn0Z7GS",
"toqKWM5xUyy",
"-c8Bbfr-sN",
"6Jrp6ou7CTU",
"vgTzGNqb8mz",
"z-bNO_kiBJJ",
"USK2rArPAaD",
"clergAPJ-04",
"nips_2022_jQR9YF2-Jhg",
"z-bNO_kiBJJ",
"z-bNO_kiBJJ",
"USK2rArPAaD",
"clergAPJ-04",
"nips_2022_jQR9YF2-Jhg",
"nips_2022_jQR9YF2-Jhg",
"nips_2022_jQR9YF2-Jhg"
] |
nips_2022_mfxq7BrMfga | Generalized One-shot Domain Adaptation of Generative Adversarial Networks | The adaptation of a Generative Adversarial Network (GAN) aims to transfer a pre-trained GAN to a target domain with limited training data. In this paper, we focus on the one-shot case, which is more challenging and rarely explored in previous works. We consider that the adaptation from a source domain to a target domain can be decoupled into two parts: the transfer of global style like texture and color, and the emergence of new entities that do not belong to the source domain. While previous works mainly focus on style transfer, we propose a novel and concise framework to address the \textit{generalized one-shot adaptation} task for both style and entity transfer, in which a reference image and its binary entity mask are provided. Our core idea is to constrain the gap between the internal distributions of the reference and syntheses by sliced Wasserstein distance. To better achieve it, style fixation is used at first to roughly obtain the exemplary style, and an auxiliary network is introduced to the generator to disentangle entity and style transfer. Besides, to realize cross-domain correspondence, we propose the variational Laplacian regularization to constrain the smoothness of the adapted generator. Both quantitative and qualitative experiments demonstrate the effectiveness of our method in various scenarios. Code is available at \url{https://github.com/zhangzc21/Generalized-One-shot-GAN-adaptation}. | Accept | This paper focuses on the one-shot domain adaption of GAN model. The idea of disentangling style and entity transfer is simple and effective. The meta-reviewer recommends acceptance of the paper, and the authors are encouraged to take the reviews into consideration when preparing a final version of the paper. | train | [
"hsiD26gpvdC",
"cAE_szxsbrJ",
"QeQy-Up1NEF",
"wj81F2gCTUN",
"Q407KHuMRTm",
"jqrXa4XzR6u",
"u_zgrZoJ53P",
"0jMWltjrp7e",
"HL6Z2D41woH",
"GNfMuIxERD",
"uzAn-WduQh6",
"E03tlvtrlRW",
"gQ3-gX4tK3m",
"ACwlD9gVT3C"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The authors addressed most of my concerns. Thus, I tend to raise my score.",
" Thank you for the answers and extra experiments. Most of my concerns were addressed, and I am raising the score accordingly. ",
" __Q4. Where can we see other methods produce artifacts when the entities are big?__\n\nA4. Please see the last three cases in Fig. 3 in the main paper. For the other methods, the black hat pollutes their synthesized hair. The Zelda ornaments introduce obvious artifacts to the synthesized faces, and so does the mask. More results can be seen in Fig. 22 to Fig. 31 of the revised supplementary materials.\n\n__Q5. How were the hyperparameters chosen?__\n\nA5. We provide the details of parameter selection in Sec. 8 of the revised supplementary materials.\n\n__Q6. Details on clarity__\n\nA6. Thanks for your careful review. We will continue to revise and polish our paper according to your advice.\n\n1. Task definition in l.106. \n A: We have modified the definition as \"With the knowledge stored in a generative model $G_s$ pre-trained on the source domain $\\mathcal{S}$, a generative model $G_{t}$ is learned from $\\boldsymbol{y}\\_{ref}$ and $\\boldsymbol{m}\\_{ref}$ to generate diverse images belonging to domain $\\mathcal{T}$.\"\n\n2. The methodology section focuses on one particular case. The methodology should be general. \n A: We strongly agree with your point that the methodology should be general. Actually, we follow previous works like MTG, Oneshot-CLIP, and JoJoGAN in taking the face domain as the main example, which is easy to understand for most readers.\n\n3. Figure 2 is not clear. \n A: We are sorry for this point, and we are working on ways to improve it. As stated in the caption, we guide readers to refer to Sec. 4.1. Although we do not describe the model in much detail in the caption, we think it is clearly stated in Sec. 4.1 and self-consistent. We will continue to revise it.\n\n4. About the UNet and Eq. (1). \n A: Please note that the text from l.123 to l.135 is used to describe the networks. The aux adopts a UNet architecture and is trained from scratch. Eq. (1) is right, since $m$ and $f_{ent}$ are predicted by the UNet in the aux. The architecture of the UNet is illustrated in Fig. 4 of the revised supplementary materials.\n\n5. The meaning of \"upsampled by m\". \n A: Since the size of $m$ is smaller than that of $y$, it should be upsampled to perform the Hadamard product.\n\n6. Images in l.241, no reference is provided. \n A: Thanks for your reminder. We have added citations for them.\n",
" Thanks for your careful review. We will explain your concerns in detail.\n\n__Q1. Averaged quantitative metrics over many source/target samples.__\n\nA1. Thanks for your advice. Since all the compared works (i.e., FSGA, MTG, JoJoGAN, and OSCLIP) only report the quantitative metrics on each source/target domain, we follow them and evaluate each source/target domain for a fair comparison. To reach a more general conclusion, we conduct experiments on 50 target images, including 25 images with entities and 25 images without entities. The average evaluation results are provided in the tables below.\n\n- The comparison of different methods. Here we do not evaluate the results of FSGA, since it suffers from serious over-fitting as depicted in Fig. 3 of the main paper. We can conclude that our model performs better than previous works. For the details and qualitative samples, please refer to Sec. 9 and Sec. 10 of the revised supplementary materials. Each value represents $mean_{std}^{ci}$, and $mean \\pm ci$ denotes the confidence interval at the 95\\% confidence level.\n\n | Metric | MTG | OSCLIP | JoJoGAN | Ours |\n | :------: | :-----: | :------: | :-------: | :----: |\n | NME$\\downarrow$ | $0.12^{0.01}\\_{0.03}$ | $0.17^{0.04}\\_{0.10}$ | $0.12^{0.03}\\_{0.10}$ | $0.10^{0.04}\\_{0.11}$ |\n | ID$\\uparrow$ | $0.16^{0.02}\\_{0.06}$ | $0.19^{0.01}\\_{0.03}$ | $0.17^{0.01}\\_{0.04}$ | $0.27^{0.02}\\_{0.05}$ |\n\n- We also provide the results for different weights of $\\mathcal{L}\\_{VLapR}$. The results prove that the regularization is effective in preserving the source contents. For qualitative results please refer to Fig. 10 of the revised supplementary materials.\n\n | Metric | $\\lambda\\_{4}=0$ | $\\lambda\\_{4}=0.5$ | $\\lambda\\_{4}=2$ | $\\lambda\\_{4}=10$ |\n | :------: | :----: | :------: | :-------: | :----: |\n | NME$\\downarrow$ | $0.11^{0.03}\\_{0.10}$ | $0.10^{0.04}\\_{0.11}$ | $0.10^{0.02}\\_{0.06}$ | $0.09^{0.02}\\_{0.06}$ |\n | ID$\\uparrow$ | $0.23^{0.02}\\_{0.05}$ | $0.27^{0.02}_{0.05}$ | $0.30^{0.02}_{0.05}$ | $0.34^{0.02}_{0.04}$ |\n\n__Q2. The related work is not well-redacted.__\n\nA2: Thanks for your advice. We have reorganized the related work in the revised paper to make it fit our article better.\n\n__Q3. About the sliced Wasserstein distance in l.176-177.__\n\nA3: In brief, SWD can completely capture the target distribution and can be computed quickly. The following explains why optimizing SWD is a more efficient and elegant choice in our framework.\n\n- Firstly, we claim that both style transfer and entity generation can be interpreted as learning the internal distribution of the example. As stated in l.171 to l.176, many works (_e.g._, SinGAN) adopt a patch GAN loss to learn the internal distribution and generate new images. In theory, GAN losses essentially correspond to divergences (vanilla GAN loss versus JS-divergence) or distances (WGAN-gp loss versus Wasserstein distance) between distributions. For style transfer, the commonly used style losses, like the Gram loss and moment loss, are proven to align the feature distributions of the stylized image and the example [1]. [2] also shows that the more precise the alignment, the more faithful the stylization.\n\n- Then, although optimizing a (patch) GAN loss is the most prevalent way to train generative models, some drawbacks make it unsuitable for our framework. For example, since the discriminator of StyleGAN is very large, FSGA (trained with a GAN loss) spends nearly 50 minutes per image and takes much more than 23 GiB of GPU memory. It also exhibits obvious mode collapse. Besides, since our core idea is to decouple the domain into style and entity, we can design specific losses for them.\n\n- Finally, SWD is very suitable for our framework in both theory and practice. As proved in [3,4], for distributions $p$ and $q$, $p = q \\Leftrightarrow SWD(p,q)=0$. This property distinguishes SWD from, and makes it superior to, the Gram loss and the moment matching loss, which cannot capture the complete distribution. In the experiment (see Sec. 2 in the revised supplementary materials), after careful adjustment, the Gram and moment style losses achieve performance similar to SWD for style adaptation. However, they cannot work for entity adaptation. Moreover, projecting the high-dimensional data into one-dimensional spaces makes SWD superior in speed and memory cost. Our framework takes no more than 13 GiB of GPU memory and learns quickly.\n\n [1] Li, Yanghao et al. “Demystifying Neural Style Transfer.” IJCAI 2017.\n\n [2] Kalischek, Nikolai et al. “In the light of feature distributions: moment matching for Neural Style Transfer.” CVPR 2021.\n\n [3] Pitié, François et al. “N-dimensional probability density function transfer and its application to color transfer.” ICCV 2005.\n\n [4] Kolouri, Soheil et al. “The Radon Cumulative Distribution Transform and Its Application to Image Classification.” IEEE Transactions on Image Processing 25 (2016): 920-934.\n\n",
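A minimal PyTorch sketch of the sliced Wasserstein distance discussed in the response above. It follows the generic random-projection definition of sliced Wasserstein-2 (project to 1-D, sort, compare) rather than the authors' exact implementation, and it assumes both feature sets contain the same number of samples.

```python
import torch

def sliced_wasserstein(feat_a, feat_b, n_proj=128):
    """Approximate SWD^2 between two sets of feature vectors of shape (N, C)."""
    proj = torch.randn(feat_a.shape[1], n_proj, device=feat_a.device)
    proj = proj / proj.norm(dim=0, keepdim=True)    # random unit directions
    a_sorted = (feat_a @ proj).sort(dim=0).values   # 1-D projections, sorted
    b_sorted = (feat_b @ proj).sort(dim=0).values
    # In 1-D, optimal transport matches sorted samples, so the cost is elementwise.
    return ((a_sorted - b_sorted) ** 2).mean()
```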
" Thanks for your careful review and appreciation of our work. We will explain your concern in detail.\n\n__Q1. The definition of style. The geometric feature is also part of the style, not just the color.__\n\nA1: We agree with your point that geometric features are also part of the style. We will give a specific explanation of the concept of style in this paper, and show that by adjusting the weight of the style loss, geometric features can be changed visually.\n\n- Since different people may have different perceptions of style, it is really hard to put the exact concept of style into words. Nonetheless, in the computer vision community, the style of an image is usually identified [1] with its __texture__, which can be represented by its internal statistics. By aligning the statistical features of the content image with those of the style image, the content image obtains the style. In our paper, we have also followed this convention and used the internal distributions to describe the texture.\n\n- We consider that a successful and practical adapted model should preserve the user's identity. Thus our algorithm not only transfers the style and entity knowledge, but also keeps the contents of the adapted syntheses faithful to those of the source image, _i.e._, cross-domain correspondence. As shown in the figures of our paper and the supplementary materials, for most images the adapted models generate pleasing results and can be applied to face editing. The user studies also support this point.\n\n- We provide an experiment in Fig. 9 of the revised supplementary materials, which shows that increasing the weight of the style loss can strengthen style effects involving geometric change. Therefore, our method can control the degree of stylization by adjusting the style loss weight. In our paper, we set it to a moderate value to preserve the content of the source image while adequately transferring the style. \n\n[1] Jing, Yongcheng et al. “Neural Style Transfer: A Review.” IEEE Transactions on Visualization and Computer Graphics 26 (2020): 3365-3385.",
" __Q1. The writing of this paper can be improved.__\n\nA1. Thanks for your careful review and advice on our work; we will polish our paper carefully. The last paragraph of the Introduction has been rewritten to highlight our contributions.\n\n__Q2. What is the reason to produce a mask using the UNet, since we can directly use the mask provided by users?__\n\nA2. Taking the StyleGAN pretrained on FFHQ as an example, we think the \"mask provided by users\" could mean two things: the reference mask provided by users, or (most likely) the masks for the face images that the users want to stylize. To avoid misunderstandings, we will explain both of them.\n\n- Utilizing the reference mask provided by users as the mask for all syntheses. This is infeasible, since the adapted generator should synthesize images with a reasonable entity that is in a proper location, of a proper shape, and of high-quality visual texture. Fixing the mask would place most entities in the wrong location and give them improper shapes.\n- Users provide the masks for their desired face images. This is a good idea for more accurate control and a viable solution for our limitations. However, in this paper, we focus on adapting the GAN. Since it is hard to manually obtain the masks for all latent codes, we need to predict the masks with the UNet. As you say, we think that masks provided by the user could be helpful when the results are not reasonable, or when the user wants to specify the shape and position of the entity. In Sec. 6 (Mask-guided transfer) of the revised supplementary materials, we provide an experiment to show the feasibility of this idea.\n\n__Q3. Comparison with other internal distribution distances like the MSE between Gram matrices.__\n\nA3. We will discuss it both theoretically and experimentally; the conclusion is that the Gram matrix loss can be used as the style loss, but cannot be used as the entity loss. \n\n- In theory. As proved in [1], the Gram matrix loss is equivalent to the maximum mean discrepancy. For two distributions $p$ and $q$, optimizing the Gram matrix loss matches the mean statistics of $p$ and $q$, but $GramLoss(p,q)=0 \\nRightarrow p = q$. Some improved losses like BN statistics matching and moment matching [2] try to capture the distribution more exactly, but still cannot capture the complete distribution. By contrast, [3,4] show that $SWD(p,q)=0 \\Leftrightarrow p = q$. Hence, in theory, both the Gram matrix loss and SWD can be applied to style transfer, but only SWD is suitable for the generation task, where the internal distribution needs to be fully captured. For example, [5] proves that SWD can be applied to simple texture synthesis by image-based optimization, while the Gram loss cannot. Our work also shows that, combined with pretrained generators, SWD can be used to learn more complex images.\n\n- In the experiment. After careful adjustment of the weights, we find that the Gram loss can be used as the style loss in our framework. With the same VGG features as used in SWD, and a weight of 2e-6 to balance the large Gram loss value, it obtains style-adapted results visually similar to SWD's. We also tried moment matching [2]; it works very similarly. But for entity adaptation, all of these style losses fail to generate the entities. Please refer to Fig. 2 of the revised supplementary materials.\n\n[1] Li, Yanghao et al. “Demystifying Neural Style Transfer.” IJCAI 2017.\n\n[2] Kalischek, Nikolai et al. “In the light of feature distributions: moment matching for Neural Style Transfer.” CVPR 2021.\n\n[3] Pitié, François et al. “N-dimensional probability density function transfer and its application to color transfer.” ICCV 2005.\n\n[4] Kolouri, Soheil et al. “The Radon Cumulative Distribution Transform and Its Application to Image Classification.” IEEE Transactions on Image Processing 25 (2016): 920-934.\n\n[5] Heitz, Eric et al. “A Sliced Wasserstein Loss for Neural Texture Synthesis.” CVPR 2021.\n\n__Q4. Why do you give the task of this paper a new name--generalized one-shot GAN adaptation?__\n\nA4. Firstly, in mathematics, a generalized problem is one that is more general, with the original problem as a special case. In our paper, the previous one-shot domain adaptation is a special case in which the mask is all-zero.\n\nSecondly, to the best of our knowledge, there has never been a study of entity transfer in either classic style transfer or the latest GAN adaptation. Although we only add an entity mask to the task setting, as shown in the paper, it brings more interesting applications for artistic creation. Most importantly, it may draw more insightful attention to how to better utilize the high-level knowledge stored in pre-trained generators, rather than continuing to focus on color or style transfer, which is relatively mature nowadays. Hence, we gave the task a new name.\n",
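For comparison with the SWD sketch above, here is a minimal PyTorch version of the Gram-matrix style loss discussed in Q3. This is the standard textbook formulation, assumed to match what the authors tried; the normalization constant is our own choice.

```python
import torch
import torch.nn.functional as F

def gram_matrix(feat):
    """Gram matrix of a (C, H, W) deep feature map, normalized by its size."""
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return f @ f.t() / (c * h * w)

def gram_style_loss(feat_a, feat_b):
    # Matches second-order feature statistics only, so GramLoss(p, q) = 0
    # does not imply p = q -- the reason it fails as an entity loss.
    return F.mse_loss(gram_matrix(feat_a), gram_matrix(feat_b))
```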
" __Q5. About the disentangled GAN structure and main contributions__\n\nA5. Although disentangled GAN structures have been well studied, we are the first to propose a meaningful disentanglement of style and entity for GAN adaptation. Our main contributions can be summarized in two aspects.\n\n1) We generalize the one-shot GAN adaptation (OSGA) task, which focuses on style transfer, with _entity adaptation_. To the best of our knowledge, there has never been a study of entity adaptation in either classic style transfer or the latest OSGA field. This task is much more challenging, since for each synthesis the entity should be in the right location, of a proper shape, and of high visual quality. We believe this task could lead to more breakthroughs in the field of generative models, and bring more help to artistic creation.\n\n2) Technically, we propose a novel and concise framework with disentangled style and entity losses, and a manifold regularization. The SWD for internal learning makes sense and speeds up the training tremendously. The regularization is effective in avoiding the distortion caused by SWD. As mentioned by other reviewers, our method makes sense (Reviewer AEH2) and is sound (Reviewer Wx24).\n\n__Q6. The experiments are insufficient. The evaluation size is too small.__\n\nA6. Since few-shot GAN adaptation works like FSGA report the metric results on each source/target domain, and the one-shot adaptation works MTG, JoJoGAN, and OSCLIP only report user study results on each source/target domain, we follow them and report the results on each source/target domain for a fair comparison. \nTo reach a more general conclusion, we conduct experiments on 50 target images. The average evaluation results are shown in the tables below.\n\n- The comparison of different methods. Here we do not evaluate the results of FSGA, since it suffers from serious over-fitting as illustrated in our paper. We can conclude that our model performs better than previous works. For the details and qualitative samples, please refer to Sec. 9 and Sec. 10 of the revised supplementary materials. Each value represents $mean_{std}^{ci}$, and $mean \\pm ci$ denotes the confidence interval at the 95\\% confidence level.\n\n | Metric | MTG | OSCLIP | JoJoGAN | Ours |\n | :------: | :-----: | :------: | :-------: | :----: |\n | NME$\\downarrow$ | $0.12^{0.01}\\_{0.03}$ | $0.17^{0.04}\\_{0.10}$ | $0.12^{0.03}\\_{0.10}$ | $0.10^{0.04}\\_{0.11}$ |\n | ID$\\uparrow$ | $0.16^{0.02}\\_{0.06}$ | $0.19^{0.01}\\_{0.03}$ | $0.17^{0.01}\\_{0.04}$ | $0.27^{0.02}\\_{0.05}$ |\n\n- We also provide the results for different weights of $\\mathcal{L}\\_{VLapR}$. The results prove that the proposed regularization is effective in preserving the source contents. Please refer to Fig. 10 of the revised supplementary materials for the qualitative results.\n\n | Metric | $\\lambda\\_{4}=0$ | $\\lambda\\_{4}=0.5$ | $\\lambda\\_{4}=2$ | $\\lambda\\_{4}=10$ |\n | :------: | :----: | :------: | :-------: | :----: |\n | NME$\\downarrow$ | $0.11^{0.03}\\_{0.10}$ | $0.10^{0.04}\\_{0.11}$ | $0.10^{0.02}\\_{0.06}$ | $0.09^{0.02}\\_{0.06}$ |\n | ID$\\uparrow$ | $0.23^{0.02}\\_{0.05}$ | $0.27^{0.02}_{0.05}$ | $0.30^{0.02}_{0.05}$ | $0.34^{0.02}_{0.04}$ |\n\n__Q7. Comparison to other manifold GANs.__\n\nA7. We do not make this comparison, since manifold GANs aim to fit the _real manifold_ of a large-scale dataset. In contrast, for the one-shot task, a single image cannot form a manifold, which means that there does not exist a real manifold to serve as the fitting target. Thus manifold concepts like the radius or center in MMGAN cannot be defined. Our method just utilizes the smoothness information of the source generator to keep the relative relations invariant. As stated in line 207, we have discussed and compared our $\\mathcal{L}\\_{VLapR}$ with $\\mathcal{L}\\_{CDC}$ proposed in FSGA, which is in essence a classic manifold regularization like Stochastic Neighborhood Embedding (SNE). The results show that our $\\mathcal{L}\\_{VLapR}$ has advantages in both theory and experiments for this task. \n",
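To make the smoothness idea in A7 tangible, here is a generic graph-Laplacian smoothness penalty in PyTorch. Note this is our own simplified stand-in, not the paper's exact variational Laplacian regularization $\mathcal{L}_{VLapR}$; the Gaussian affinity and the bandwidth `sigma` are our assumptions.

```python
import torch

def laplacian_smoothness(z, g_src, g_tgt, sigma=1.0):
    """Penalize the adapted generator for breaking neighborhoods of the source one.

    z: (B, D) batch of latent codes; g_src is the frozen source generator,
    g_tgt is the generator being adapted.
    """
    with torch.no_grad():
        f_src = g_src(z).flatten(1)                 # source outputs define the graph
        w = torch.exp(-torch.cdist(f_src, f_src) ** 2 / (2 * sigma ** 2))
    f_tgt = g_tgt(z).flatten(1)
    return (w * torch.cdist(f_tgt, f_tgt) ** 2).mean()
```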
" __Q3. Why use sliced Wasserstein distance?__\n\nA3: In brief, SWD can completely capture the target distribution and can be computed quickly. The following explains why optimizing SWD is a more efficient and elegant choice in our framework.\n\n- Firstly, we claim that both style transfer and entity generation can be interpreted as learning the internal distribution of an example. As stated in l.171 to l.176, many recent works (e.g., SinGAN) adopt a patch GAN loss to learn the internal distribution and generate new images. In theory, GAN losses essentially correspond to divergences (vanilla GAN loss versus JS-divergence) or distances (WGAN-gp loss versus Wasserstein distance) between distributions. For style transfer, the commonly used style losses, like the Gram loss and moment loss, are proven to align the feature distributions of the stylized image and the example [1]. [2] also shows that the more precise the alignment, the more faithful the stylization.\n\n- Then, although optimizing a (patch) GAN loss is the most prevalent way to train generative models, some drawbacks make it unsuitable for our framework. For example, since the discriminator of StyleGAN is very large, FSGA (trained with a GAN loss) spends nearly 50 minutes per image and takes much more than 23 GiB of GPU memory. It also exhibits obvious mode collapse. Besides, since our core idea is to decouple the domain into style and entity, we can design specific losses for them.\n\n- Finally, SWD is very suitable for our framework in both theory and practice. As proved in [3,4], for distributions $p$ and $q$, $p = q \\Leftrightarrow SWD(p,q)=0$. This property distinguishes SWD from, and makes it superior to, the Gram loss and the moment matching loss, which cannot capture the complete distribution. In the experiment (see Sec. 2 in the revised supplementary materials), after careful adjustment, the Gram and moment style losses achieve performance similar to SWD for style adaptation. However, they cannot work for entity adaptation. Moreover, projecting the high-dimensional data into one-dimensional spaces makes SWD superior in speed and memory cost. Our framework takes no more than 13 GiB of GPU memory and learns quickly.\n\n [1] Li, Yanghao et al. “Demystifying Neural Style Transfer.” IJCAI 2017.\n\n [2] Kalischek, Nikolai et al. “In the light of feature distributions: moment matching for Neural Style Transfer.” CVPR 2021.\n\n [3] Pitié, François et al. “N-dimensional probability density function transfer and its application to color transfer.” ICCV 2005.\n\n [4] Kolouri, Soheil et al. “The Radon Cumulative Distribution Transform and Its Application to Image Classification.” IEEE Transactions on Image Processing 25 (2016): 920-934.\n\n__Q4. How to get the style-fixed code?__\n\nA4: Please refer to Sec. 4.2. We replace the style part (the last $18-l$ vectors) of $\\boldsymbol{w}$ with that of $\\boldsymbol{w}\\_{ref}$ to get the style-fixed code $\\boldsymbol{w}^{\\sharp}$. The process is formulated by Eq. (3) in the main paper:\n\n$\\boldsymbol{w}^{\\sharp} = \\mathrm{diag}(\\boldsymbol{\\alpha})\\, \\boldsymbol{w} + \\mathrm{diag}(\\boldsymbol{1}-\\boldsymbol{\\alpha})\\, \\boldsymbol{w}\\_{ref},\\ \\alpha\\_{i}=\\mathbf{1}\\_{i\\le l}(i), \\ i = 1,\\dots,18. $\n\n$\\mathbf{1}$ is the indicator function, $\\mathrm{diag}$ is the diagonalization operator, and $l$ is a hyperparameter controlling the trade-off between content and style. Visualization results are provided in Fig. 2 in the main paper, in which the yellow arrow brings the style part of $\\boldsymbol{w}\\_{ref}$ to $\\boldsymbol{w}^{\\sharp}$, and the syntheses obtain the style of the reference.\n",
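Eq. (3) above amounts to row-wise mixing of two W+ latent codes. A minimal PyTorch sketch follows; the (18, 512) shape is the standard StyleGAN W+ layout, and the default value of `l` is our placeholder, not the paper's setting.

```python
import torch

def style_fixation(w, w_ref, l=7):
    """Keep the first l (content) vectors of w and take the remaining
    18 - l (style) vectors from the reference code w_ref.

    w, w_ref: (18, 512) W+ latent codes; l controls the content/style trade-off.
    """
    # arange(18) < l keeps the first l rows, i.e. 1-based indices i <= l.
    alpha = (torch.arange(w.shape[0]) < l).float().unsqueeze(1)
    return alpha * w + (1.0 - alpha) * w_ref
```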
" __Q1. About the problem setting: the clear binary entity mask does not always exist for the general one-shot domain adaptation setting.__\n\nA1: Thanks for your careful review. Because there may be different interpretations of the words _clear_ and _exist_, we will explain our generalized one-shot GAN task and the entity mask in more detail.\n\n- About the task setting. Our task generalizes the previous one-shot GAN adaptation (OSGA) task. It not only covers (a) traditional style adaptation like OSGA, but also (b) imports the entity from the target image into the syntheses. \nTo achieve this goal, we introduce an extra mask to bring the two cases into a unified framework. When the user does not provide any mask, an all-zero mask is computed automatically and the problem is solved as Case (a). Otherwise, both the style and the located entity are transferred into the syntheses (Case (b)), which has never been studied or realized in previous works.\n\n- About the clear mask. We consider the term “style” a global concept describing the distribution of color and texture in the target image, while the “entity” (like the hat) is a local concept about a specific object in the target image, which usually has explicit boundaries (e.g., the green lines in our paper). However, our method is robust to the mask, and there is no need to provide a precise segmentation mask for the entity. A rough mask will still accomplish our purpose. For example, we locate the hat by a rough polygon, and the syntheses also look good (see Fig. 38 in the supplementary materials).\n\n__Q2. How to get the mask?__\n\nA2: Since our target dataset contains only one image, obtaining its mask annotation is very efficient. In our paper, for the image containing the entity, we manually annotate the mask with the open-source tool LabelMe. We enclose the entity with lines that meet end to end, and the mask is extracted automatically. The annotation usually takes no more than 1 minute. We think the increased labor cost is negligible compared with that of creating or looking for the desired target image. Moreover, the mask of an entity (e.g., a hat) can also be obtained with pre-trained segmentation models.\n",
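The polygon-to-mask step described in Q2 is a one-liner with PIL. The sketch below is our illustration of that workflow (LabelMe itself exports such polygons as JSON), not code from the paper.

```python
import numpy as np
from PIL import Image, ImageDraw

def polygon_to_mask(points, height, width):
    """Rasterize a list of (x, y) polygon vertices into a binary entity mask."""
    canvas = Image.new("L", (width, height), 0)
    ImageDraw.Draw(canvas).polygon([tuple(p) for p in points], outline=1, fill=1)
    return np.array(canvas, dtype=np.uint8)   # (height, width) array of 0/1
```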
" We thank all the reviewers for their thoughtful reviews and constructive comments on our work! We are encouraged and glad to hear the feedback from the reviewers that:\n\n1. Our work is well motivated (Wx24), well written, and easy to follow (5BYZ).\n2. Our proposed __task__, _i.e._, generalized one-shot domain adaptation, is a novel contribution (Wx24) and interesting (fun1). It targets a useful application for the community, and will be helpful to content creators (Wx24) and artistic creation (5BYZ).\n3. Our proposed __idea__ of decoupling the domain adaptation into style and entity transfer is straightforward (fun1).\n4. Our __method__ using sliced Wasserstein distance and the variational Laplacian regularization makes sense (AEH2). It is also sound (Wx24) and faster (AEH2).\n5. Our proposed __metrics__ are reasonable and a good addition to the existing user studies (Wx24).\n6. The __results__ look promising (AEH2) and interesting, and show sufficient visual advantages over the competition (5BYZ).\n\nAs the advantages span various aspects, the reviewers have raised different questions, which are quite helpful for improving the paper and digging deeper into the task. We have addressed these questions with additional experiments and clarifications, which have been added to the updated paper and supplementary materials. In response to the feedback, we provide individual responses below to address each reviewer's concerns.",
" This paper considers the one-shot domain adaption problem, it decoupled adaptation into two parts: style and entity transfer. Unlike most previous works that mainly focus on style transfer, the proposed method utilises binary entity mask with concise manifold regularized GAN design. The author modifies the architecture of the original generator to decouple the adaption of style and entity, and proposes the variant Laplacian regularization to smooth the network. Extensive experiments are conducted on various references with and without entities.\n This paper focuses on an interesting one-shot domain adaption problem, the idea of disentangling style and entity transfer is straightforward.\n\nThe motivation of this paper is to solve one-shot domain adaption problem. However, I have some concerns as follows:\n1. About the problem setting: the paper only explores the domain adaption with a clear binary entity\nmask on the target image, which does not always exist for the general one-shot domain adaption setting. \n\n2. The disentangled GAN structure has been well studied by many existing works, it is hard for the reviewer to identify the real contributions of this paper.\n\n3. The experiments are insufficient. The evaluation size is too small, the main result Tabel.1, only uses Fig. 3's images for quantitative evaluation.\n\n4. Since the proposed method focuses on manifold regularization, some related Manifold GAN methods [1][2][3] are missing in comparison.\n[1]MR-GAN: Manifold Regularized Generative Adversarial Networks\n[2]MMGAN: Manifold-Matching Generative Adversarial Network\n[3]Manifold-preserved GANs How to get binary entity masks? The author just mentioned it in the abstract and fig.1, should add descriptions in the experiment part.\n\nWhy did this paper chose sliced Wasserstein distance? Need more insight and theoretical depth.\n\nIn L139 'each w will be transformed into the style-fixed code...', how to do transform? Need more details. \n The authors adequately addressed the limitations.",
" This paper performs one-shot domain adaption of StyleGAN model and the model additionally supports adding new entities compared to previous methods. The proposed method is able to do so thanks to a handful of points. Among them, two techniques are interesting: 1) using sliced Wasserstein distance to compute the internal distribution distance between two images; 2) adding variant Laplacian regularization to alleviate cross-domain correspondence distortion. Strengths:\n\nThe experimental results are promising. Using sliced Wasserstein distance and variant Laplacian regularization make sense. The authors provide code and additional materials so the method seems reproducible. The learning speed of this approach is faster than previous methods.\n\n\nWeaknesses:\n\nThe writing of this paper can be improved. From the abstract and introduction, it is hard to identify the contributions of the proposed approach. Fig. 2 is hard to follow. It seems weird to have Fig. 1 below the title. What is the reason to produce a mask using UNet since we can directly use the mask provided by users (if I understand correctly)? The underlying motivation is unclear. Plus, the authors should conduct an experiment to explain the benefit of doing it.\n\nThe paper missed comparison with other internal distribution distances like mse between gram matrices. To me, entity loss and style loss can be directly implemented with MSE between the gram matrices of deep features from two images. The authors are advised to try these loss functions to show the advantage of using sliced Wasserstein distance.\n\nWhy do you give the task of this paper a new name--generalized one-shot GAN adaption? From my understanding, the method additionally can add entities but that doesn't mean it is generalized. The authors provide a section to talk about the limitations of this work. I agree with the authors about these limitations. I think one viable solution for the entity position problem is to inquire the users for an extra mask representing the desired location of the entity in the synthesized image.",
" This paper proposes a manifold regularized GAN adaption framework to deal with the generalized one-shot GAN adaption problem (i.e. the target domain to transfer contains both artistic styles and entities). To tackle this novel task, the paper modifies the architecture of the original generator by adding an additional auxiliary network to facilitate entity generation. To reduce the domain gap, they employ the sliced Wasserstein distance to minimize the divergence of the internal distributions between the exemplar and synthesis. Besides, they propose to use the variational Laplacian regularization $L_VlapR$ to mitigate the content distortion during training by preserving the geometric structure of the source manifold.\n The paper is well written and easy to follow. Both quantitative and qualitative experiments show that the proposed method has sufficient visual advantages over the competition. In addition, their image processing results look interesting, which I believe will be helpful for artistic creation.\n\nMy only concern is the definition of style. In my opinion, the geometric feature is also part of the style and not just the color. I feel the results for Disney and Zelda are not so successful. Domain adaptation should also include some geometric changes, but the Zelda results in the paper don't look like CG images, and the Disney results cannot keep the big eyes and exaggerated expressions.\n See Weakness Adequate",
" This paper presents a method for one-shot domain adaptation, where a pre-trained GAN is leveraged to fine-tune the generator on one target image. First, given a source image, a latent vector is obtained by GAN inversion. Then, the generator is trained using a reconstruction loss, style and entity losses and a Laplacian regularizer, in order to transfer the style content of the target image into the source image while retaining the entity of the latter. Moreover, the proposed method admits a binary mask to select parts of the target image and to blend them into the source image, which is a novel contribution. Quantitative and qualitative results are presented when transferring from a few source images to a few target images from AAHQ dataset, as well as qualitative results for a few source and target images on AFHQ dog dataset and some source and target images on LSUN church dataset, showing a successful one-shot style transfer with optionally transferring objects from the target image. \n **Strengths**\n\nThe work is well motivated and the introduction of a binary mask to optionally import objects from the target image is a novel contribution. It targets a useful application and can be of use to the community and for content creators. \n\nThe usage of quantitative metrics to evaluate the results is a good addition to the user studies. Moreover, the two metrics used are reasonable in the context of face to face translation. \n\nThe methodology is sound and the claims are adjusted to the results presented in the experimental section. \n\n**Weaknesses**\n\nThe paper lacks clarity and misses important information (more details in \"Questions\"). The related work is not well redacted and fails to situate the proposed method within other methods in the literature. \n\nI would have expected the quantitative metrics to be computed and averaged across multiple samples for the target and source domains. Instead, those are only computed on hand-picked source-target images and risks providing only a biased positive view of the results. \nThe experimental setup is lacking to properly quantitatively evaluate the method with respect to the baselines. Instead of computing them on a handful of samples, they should be computed over a dataset (or a randomly chosen subset of data) for the source and target domains in order to draw more robust conclusions.\n As it was written above, I would like to see averaged quantitative metrics over many source/target samples.\n\n\nA sentence in l.176-177 claims that slice Wasserstein distance “leads to the same destination but with greater efficiency”, compared to the GAN loss. There is no reference cited nor I have seen an ablation experiment in the paper. Could the authors elaborate on this with empirical evidence or otherwise, remove the sentence altogether?\n\nIn l.35 it is mentioned that other methods produce artifacts when the entities are big. Could you point out where can this be observed in the figures?\n\nHow were the hyper parameters chosen? No discussion is provided with respect to that.\n\n\n\n*Details on clarity*\n* The task definition in l.106 is not clear and also fails to mention one of the most important details: the generator is fine-tuned on only one image from the target domain. As it stands, it appears as if a collection of target images were used instead. \n* The methodology section focuses on one particular case, pretrained on FFHQ. 
Instead, the methodology should be general, introducing the datasets and particular examples only in the experimental section, with Figure 2 as an illustration. \n* Figure 2 is not clear and misses notation to quickly cross-reference it with the text. Moreover, the figure is not self-contained as there is some notation that is not explained in the caption. Additionally, this figure is referenced in the text before anything about the model is explained, which generates confusion. \n* The explanation of the auxiliary net is lacking, especially regarding how the feature map $f_{ent}$ and mask $m$ are obtained. Is the mask predicted on the feature maps from the StyleGAN architecture? And if so, did you use a pre-trained U-Net or did you train one from scratch? How did you adapt a pre-trained U-Net in this case?\n* Equation (1) misses $f_{ent}$ and $m$ as inputs to $aux$.\n* l.187: “layers from pre-trained LPIPS” this is an experimental detail that does not belong in the methodology section.\n* l.187: “upsampled by m” what does that mean?\n* The paragraph in l.158-164 would be better suited to the experimental section.\n\n* Where is the source of the images “Sketch, Disney and Arcane” that are mentioned in l.241? No reference is provided. \n\n The limitations were discussed and the broader impact briefly touched upon."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
3
] | [
"u_zgrZoJ53P",
"QeQy-Up1NEF",
"wj81F2gCTUN",
"ACwlD9gVT3C",
"gQ3-gX4tK3m",
"E03tlvtrlRW",
"0jMWltjrp7e",
"HL6Z2D41woH",
"uzAn-WduQh6",
"nips_2022_mfxq7BrMfga",
"nips_2022_mfxq7BrMfga",
"nips_2022_mfxq7BrMfga",
"nips_2022_mfxq7BrMfga",
"nips_2022_mfxq7BrMfga"
] |
nips_2022_htUvh7xPoa | Random Sharpness-Aware Minimization | Recently, Sharpness-Aware Minimization (SAM) was proposed to seek parameters that lie in a flat region so as to improve generalization when training neural networks. In particular, a minimax optimization objective is defined to find the maximum loss value centered on the weight, with the purpose of simultaneously minimizing loss value and loss sharpness. For the sake of simplicity, SAM applies one-step gradient ascent to approximate the solution of the inner maximization. However, one-step gradient ascent may not be sufficient, and multi-step gradient ascent incurs additional training cost. Based on this observation, we propose a novel random smoothing based SAM (R-SAM) algorithm. To be specific, R-SAM essentially smooths the loss landscape, based on which we are able to apply the one-step gradient ascent on the smoothed weights to improve the approximation of the inner maximization. Further, we evaluate our proposed R-SAM on the CIFAR and ImageNet datasets. The experimental results illustrate that R-SAM can consistently improve the performance on ResNet and Vision Transformer (ViT) training. | Accept | All reviewers except one agreed that this paper should be accepted because of the strong author response during the rebuttal phase. Specifically, the reviewers appreciated the significance of the problem being addressed, the clarity of the paper, the simplicity of the method, and the analysis. Authors: please carefully revise the manuscript based on the suggestions by the reviewers: they made many careful suggestions to improve the work and stressed that the paper should only be accepted once these changes are implemented. Once these are done the paper will be a nice addition to the conference! | train | [
"z1LZuatxMOD",
"EPc2L6vvxbh",
"poNL1Gg5BX8G",
"Jzga_pj59p6",
"51hzizhZq_sv",
"vDGFc9xkDmCM",
"msihoDDRAD",
"GXFt8247X_D",
"vl1GRud_nmJ",
"hLhcnFC65sR",
"zpsvtTxl37",
"2E2zVUc1Yi2",
"K-5dAOXnViD",
"dIGDYo8l07Z",
"SMdKl8qeio",
"BJxQxxDvN9j",
"RACcRH0dqV7"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewers! Thank you so much for your time on this paper so far.\n\nThe authors have written a detailed response to your concerns. How does this change your review?\n\nPlease engage with the authors in the way that you would like reviewers to engage your submitted papers: critically and open to changing your mind. Thank you Reviewers tXqv and Dv1G for your initial engagement!\n\nLooking forward to the discussion!\n",
" I thank the authors for a detailed response. The additional experimental analysis including the comparison with LPF-SGD, Hessian Eigenvalue analysis (Q5 of uTRr) and the sensitivity analysis of $\\gamma$ strengthen the paper. I have increased my score accordingly.\n",
" Thank you for your detailed responses.\n\nMy concerns have been mostly addressed, and I raised my score. ",
" ### **Q12:**\n\n> It is not entirely clear why performing the inner maximization gradient ascent step on a smoothed loss function necessarily leads to a more \"accurate\" solution or exactly what that means.} Following the same argument, we would expect that we want to apply SGD (solving a minimization objective) on a smoothed loss function too. Yet, this is not standard practice. Works like [1] recently proposed to explore such smoothening; however, with the motivation of finding flat minima (akin to SAM), not \"gradient stability\" or more \"accurate\" solutions. It would be great if the authors could clarify why \"gradient stability\" is important for inner maximization.\n\n \n\nThanks for your comments. As shown in the previous answer about Q1, let us restate the motivation of R-SAM. \n\nConsider the inner maximization problem of SAM within a region \\rhoρ. When the gradient is a constant within this region, then one-step gradient ascent can get the exact maximum. However, when the objective is very non-smooth – which means gradient changes quickly within the region, one-step gradient ascent becomes a bad approximation. Therefore, when we smooth the inner maximization problem, the gradient will be more stable (change slower) within the region, and the one-step gradient ascent will obtain a better approximated solution for inner maximization. Based on that, in this paper, we try to firstly smooth the loss function and obtain the approximated solution on the smoothed loss function to simplify the above problem. We prove the smoothing effect in Theorem 1, and our algorithm is equivalent to conduct a gradient descent on the randomly smoothed function.\n\nThe hyper-parameter $\\gamma$ which controls the level of randomness is important. When $\\gamma$ is small, the function is closer to the original one but the gradient is less smooth. When $\\gamma$ is too large, the function will be very smooth but will be not close to the original function.",
" ### **Q9:**\n\n> I appreciate the additional R-GSAM experiment, but again, I find the number of tasks (which is only one here) too limited. It would be great to see how R-SAM compares against GSAM, ASAM, SWA+SAM and on more tasks, and even if they reach similar performances; it would then be interesting e.g. to combine these and see if they are complementary.\n\n \n\nThanks for your comments. Due to the limited time, we provide the experimental results about R-GSAM on ViT in this table. We will report more results in the revision. \n\n |Model |AdamQ | SAM | GSAM | R-SAM | R-GSAM|\n|---------|------|---------|------|-------|-------|\n|ViT-B-16 | 74.7 | 79.8 | 80.8 | 80.7 | 81.2 | \n|ViT-S-16 | 74.9 | 77.9 | 78.8 | 78.7 | 79.1 |\n\nFrom this table, we can find that R-SAM can achieve competitive accuracy compared with GSAM. In addition, R-GSAM can obtain better performance than GSAM. \n\n### **Q10:**\n\n> This method introduces another hyper-parameter , which may incur additional tuning overhead for practictioners. While Figure 3c) shows that it is rather stable for three different values on one particular dataset, it would be nice to see a more thorough sensitivity analysis with e.g. more values or a default value used on a few more datasets/architectures (like SAM's rho=0.05).\n\n \n\n\nThanks for your comments. We try to provide more thorough analysis about $\\rho$ and $\\lambda$ in the tables: \n\n - Sensitivity Analysis about $\\rho$\n\n|Model | 0.8 | 1.0 | 1.2 | 1.3 | 1.4 | 1.5 | 1.6 | 1.8 | 2.0 |\n|---------|------|-----|------|-----|-----|------|------|-----|------|\n|ViT-B-16 | 80.2 | 80.3| 80.3 | 80.6| 80.7| 80.6 | 80.5 | 80.4| 80.1 | \n\n - Sensitivity Analysis about $\\lambda$\n \n|Model | 0.5 | 1.0 | 1.5 | 2.0 | 2.5 | 3.0 | \n|---------|------|-----|------|-----|-----|------|\n|ViT-B-16 | 80.5 | 80.6 | 80.7| 80.7| 80.7| 80.7 |\n\nWe can find that R-SAM shows a stable pattern for these parameters. We will update these results in the revision. \n\n### **Q11:**\n\n> The paper includes thorough image classification results; however, it lacks empirical analyses on other tasks such as NLP or graph learning [2,3]. All empirical results seem to be the result of one training run. It would great to see averaged results across at least three random seeds std.\n\n \n\nThanks for your comments. The reported experimental results are the mean accuracy with three random seed. 
We will also provide error bars in the tables:\n\n- CIFAR-10:\n\n\n\n|Model |SGD |SGD+M | RMSProp | AdamW | LPF-SGD |SAM | R-SAM |\n|---------|------|-------|---------|-------|---------|------|-------|\n|ResNet-18| 95.4 $\pm0.1$ | 95.6 $\pm0.2$ | 95.4 $\pm0.2$ | 95.1 $\pm0.1$ | 95.9 $\pm0.1$ | 96.4 $\pm0.2$ | 96.5 $\pm0.1$ |\n|ResNet-50| 95.8 $\pm0.1$ | 95.7 $\pm0.1$ | 95.7 $\pm0.1$ | 96.0 $\pm0.1$ | 96.3 $\pm0.2$ | 96.7 $\pm0.1$ | 96.9 $\pm0.1$ |\n|WRN-28-10| 96.4 $\pm0.1$ | 96.5 $\pm0.1$ | 96.4 $\pm0.2$ | 96.0 $\pm0.1$ | 96.8 $\pm0.1$ | 97.3 $\pm0.1$ | 97.5 $\pm0.1$ |\n\n \n\n- CIFAR-100\n\n \n\n|Model |SGD |SGD+M | RMSProp | AdamW | LPF-SGD |SAM | R-SAM |\n|---------|------|-------|---------|-------|---------|------|-------|\n|ResNet-18| 78.0 $\pm0.1$ | 78.9 $\pm0.2$ | 79.4 $\pm0.1$ | 77.7 $\pm0.1$ | 80.2 $\pm0.1$ | 80.9 $\pm0.2$ | 81.4 $\pm0.2$ |\n|ResNet-50| 80.9 $\pm0.2$ | 81.4 $\pm0.2$ | 81.4 $\pm0.1$ | 80.8 $\pm0.1$ | 82.1 $\pm0.2$ | 83.3 $\pm0.1$ | 84.0 $\pm0.1$ |\n|WRN-28-10| 81.1 $\pm0.1$ | 81.7 $\pm0.1$ | 81.7 $\pm0.1$ | 80.1 $\pm0.2$ | 82.6 $\pm0.1$ | 84.6 $\pm0.1$ | 85.2 $\pm0.1$ |\n\n \n\n- ImageNet\n\n \n\n|Model |AdamW | LPF-SGD |SAM | R-SAM |\n|---------|------|---------|------|-------|\n|ViT-B-16 | 74.7 $\pm0.1$ | 75.9 $\pm0.2$ | 79.8 $\pm0.1$ | 80.7 $\pm0.1$ |\n|ViT-S-16 | 74.9 $\pm0.2$ | 75.8 $\pm0.1$ | 77.9 $\pm0.2$ | 78.7 $\pm0.2$ |\n\n \n \n\nIn addition, due to the limited time, we leave further exploration of R-SAM on NLP and graph learning to future work.\n\n\n",
" ### **Q4:**\n\n> Figure 2 shows that the RS loss function increases faster in loss when rho is increased. Again, it is unclear why reaching higher losses by perturbing the weights further and further with increasing standard deviations is strictly better or important for ultimately reaching flatter (and therefore, better generalizing) minima.\n\n \n\nThanks for your comments. Increasing standard deviations can smooth the loss landscape based on Theorem 1. We would like to make sure $\\frac{\\alpha}{\\gamma} < \\beta$ to obtain smoother loss function. SAM uses one-step gradient ascent to solve the inner maximization problem to help the model converge to a flat region. we would like to use Figure 2 to show that random smoothing can obtain a larger loss value and obtain a better approximation for the inner maximization problem when $\\rho$ is fixed. That may better help the model converge to a flat region.\n\n \n\n### **Q5:**\n\n> The authors claim that Vanilla can not reach loss values as high as RS reaches (although this is not fully proven as rho's range ends at $\\rho=1.0$). However, first perturbing the weights before approximating the sharpest loss in their neighborhood effectively moves the neighborhood ball in a worse-performing region -- it is not centered around $w$ but at $w+\\delta_{0}$. Since $\\delta_{0} \\neq \\nabla L(w)$, almost surely $L(w) \\leq L(w+\\delta_{0})$. Therefore, it is not surprising that an ascent step starting from $L(w+\\delta_{0})$ will reach even higher losses.\n\n \n\nThanks for your comments. Vanilla SAM use one-step gradient ascent to approximate the solution of inner maximization. We find that one-step gradient ascent maybe difficult to solve the inner maximization problem when the model lies in a not very flat region. That motivates us to firstly smooth the loss landscape and then obtain the approximated solution in the smoothed loss function. We agree with you that random smoothing is easy to cause $L(w) \\leq L(w+\\delta_{0})$. However, our experimental results illustrate that the loss difference is minimal, especially compared with the difference between $L(w+\\rho\\frac{g(w)}{||g(w)||})$ and $L(w+\\delta_{0}+\\rho\\frac{g(w)}{||g(w)||})$.\n\n### **Q6:**\n\n> Further, we may note that the authors generated Figure 2 using the checkpoint of the 200th epoch on CIFAR-100. Using a decent learning rate/schedule, 200 epochs are enough for training a Resnet-18/WRN to (almost) convergence on CIFAR100. Hence, it is not clear why we want to reach higher-loss regions at this point during training.\n\n \n\nThanks for your comments. The main reason that we use the checkpoint of 200th epoch is that related work [1] illustrates that the last epochs are more important for the performance gain. So we try to evaluate and analyze the smoothness at the last epoch.\n\n[1] Maksym Andriushchenko, Nicolas Flammarion. \"Towards Understanding Sharpness-Aware Minimization\". ICML 2022. \n\n### **Q7:**\n\n> Lastly, the loss function/y-axis of Figure 2 is not precisely specified. Assuming that it is sth like the cross-entropy loss and based on my experience, my guess would be that parameterizations with losses above 3 (which is what vanilla still reaches) resemble the performance of randomly initialized networks or within the first few optimization steps epochs during training. That is when the parameters are far from fruitful valleys and hence, not where we want the gradient ascent step jumping towards.\n\n \n\nThanks for your comments. 
The loss function in Figure 2 is the cross-entropy loss.\n\nTo obtain a better analysis and make the phenomenon easier to understand, we expand the range of the $\rho$ value.\n\n### **Q8:**\n\n> While the empirical performance gains over SAM are consistent across the shown experiments, they are less than 1\% on average, which resembles the performance gains of other SAM variants, such as GSAM (cited in the paper), ASAM [1], SAM+SWA[3]. However, only for one task (model/dataset combination), the authors compare against GSAM, in which their performance differs by 0.1\% and is identical (as the authors say themselves).\n\n \n \n\nThanks for your comments. Our main contribution is to illustrate that random smoothing can also improve the performance and provide a new direction for SAM, which may encourage more people to investigate it further. We also combine our method with G-SAM for ViT on ImageNet, and the improvement is about 0.4\%.\n\n\n",
" Thank you for your constructive and positive feedback, we carefully address your concerns below. \n\n### **Q1:**\n\n> randomly smoothening (RS) the loss function significantly improves over a non-smoothed loss function for approximating the gradient ascent step -- is true. RS seems very heuristical to me; hence, I'd appreciate either more clarification on why it always improves (if so) or more discussion on when it may not hold and what its trade-offs are.\n\n \n\nThanks for your comments. Let us restate the motivation of R-SAM.\n\n\nConsider the inner maximization problem of SAM within a region $\\rho$. When the gradient is a constant within this region, then one-step gradient ascent can get the exact maximum. However, when the objective is very non-smooth -- which means gradient changes quickly within the region, one-step gradient ascent becomes a bad approximation. Therefore, when we smooth the inner maximization problem, the gradient will be more stable (change slower) within the region, and the one-step gradient ascent will obtain a better approximated solution for inner maximization. Based on that, in this paper, we try to firstly smooth the loss function and obtain the approximated solution on the smoothed loss function to simplify the above problem. We prove the smoothing effect in Theorem 1, and our algorithm is equivalent to conduct a gradient descent on the randomly smoothed function.\n\nThe hyper-parameter $\\gamma$ which controls the level of randomness is important. When $\\gamma$ is small, the function is closer to the original one but the gradient is less smooth. When $\\gamma$ is too large, the function will be very smooth but will be not close to the original function. For example, we try to provide the experimental results about the effects of $\\gamma$ for the accuracy. We can find that it exists a trade-off and $\\gamma$ is very important for the performance gain.\n\n|Model |2e-3 | 1.5e-3 |1e-3 | 5e-4 | 1e-4 | 5e-5 |\n|---------|------|---------|------|------|------|------|\n|WRN-28-10| 84.7 | 85.1 | 85.3 | 85.4 | 85.1 | 84.5 |\n|ViT-B-16 | 79.7 | 80.2 | 80.7 | 80.7 | 80.4 | 79.7 |\n|ViT-S-16 | 77.3 | 77.9 | 78.5 | 78.2 | 77.9 | 77.8 |\n\n### **Q2:**\n\n> SAM's inner objective's gradient ascent step involves the same gradient computation as standard SGD. Yet, when using standard SGD, we typically do not additionally smooth the loss function. For example, [4] recently proposed doing so, however, with the motivation of finding flatter minima (akin to SAM) and by sampling multiple perturbations instead of just one. If RS leads to more \"stable\" or \"accurate\" gradients, why not apply it to the descent step too? For example, in Algorithm 1, \"Compute SAM gradient\", why do we not additionally perturb $w_{adv}$ as well if such smoothening is always beneficial?\n\n \n\nThanks for your comments. As you mentioned, [4] tries to use random noise to improve the performance in the gradient descent step. We think gradient ascent step and gradient descent step focus on different aspects. In the gradient ascent step, we do not need to consider the generalization problem and we will pay more attention to the solution of the maximization problem in Equation (1). However, for the gradient descent step, we do not only consider minimizing the loss value but also consider some other problems like the generalization problem.\n\n### **Q3:**\n\n> Figure 1 shows Vanilla's cosine similarity between the standard and perturbed gradient decays faster than when RS gets applied. 
However, it is not clear to me why this matters and can't be simply resolved by re-scaling rho.\n\n \n\nThanks for your comments. Vanilla SAM uses one-step gradient ascent to solve the inner maximization, which means that it uses the gradient $g(w)$ to approximate the gradient at the weights that lie on the trajectory from $w$ to $w_{adv} = w + \rho \frac{g(w)}{||g(w)||}$. Gradient stability can be briefly defined as $\frac{||g(w) - g(w+\epsilon)||}{||\epsilon||}$, where $||\epsilon|| \neq 0$. We use the gradient similarity (the cosine similarity between $g(w)$ and $g(w+\rho\frac{g(w)}{||g(w)||})$) to approximate this quantity when $\rho$ is fixed: $\frac{g(w) \cdot g(w+\rho\frac{g(w)}{||g(w)||})}{||g(w)|| \, ||g(w+\rho\frac{g(w)}{||g(w)||})||}$. Therefore, if the gradient is more stable along the trajectory from $w$ to $w_{adv}$, we can obtain a better approximate solution in the gradient ascent step.\n\n \n\nThe reason why this cannot simply be resolved by re-scaling is that SAM is sensitive to the selection of the $\rho$ value. Therefore, we would like to obtain a better solution when $\rho$ is fixed. If $\rho$ is too large, the accuracy may be hurt. If $\rho$ is too low, the generalization will be hurt and it is difficult to converge to a flat region. \n\n\n",
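For reference, this cosine-similarity proxy can be computed in a few lines of PyTorch; the toy loss below stands in for the network loss and is purely an assumption:

```python
import torch

torch.manual_seed(0)

# Assumed stand-in for the network loss; returns the gradient g(w).
def grad_of_loss(w):
    w = w.detach().requires_grad_(True)
    (torch.sin(5.0 * w) + 0.5 * w ** 2).sum().backward()
    return w.grad

def gradient_similarity(w, rho):
    """Cosine similarity between g(w) and g(w + rho * g(w)/||g(w)||)."""
    g = grad_of_loss(w)
    g_adv = grad_of_loss(w + rho * g / g.norm())
    return torch.dot(g, g_adv) / (g.norm() * g_adv.norm())

w = torch.randn(10)
for rho in (0.05, 0.1, 0.3):
    print(rho, gradient_similarity(w, rho).item())
```

Lower similarity for larger $\rho$ indicates a less stable gradient along the ascent trajectory, matching the intuition above.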
" ### **Q5:**\n\n> The experimental details of the CIFAR-100 experiments, including the $\\rho$ value, seem to be missing. A note comparing the value with SAM and R-SAM can be added. It is to be observed whether training with high $\\rho$ is also stable for ResNets. With ViTs (Table 6), the optimal value is higher for R-SAM.\n\n \n\nThanks for your comments. Let me briefly introduce the experimental details of CIFAR-100 training.\n\n|Model |Batch Size |Epoch | LR | Weight Decay | $\\rho$ |$\\lambda$|\n|-----------------|-------------|-------|---------|--------------|---------|------|\n|WRN-28-10 + SAM | 128 | 200 | 0.1 | 5E-3 | 0.1 | / |\n|WRN-28-10 + R-SAM| 128 | 200 | 0.1 | 5E-3 | 0.3 | 1 |\n|ResNet-50 + SAM | 128 | 200 | 0.1 | 5E-3 | 0.1 | / |\n|ResNet-50 + R-SAM| 128 | 200 | 0.1 | 5E-3 | 0.3 | 1 |\n\n\n\nWe will provide more details in the supplement.\n\n### **Q6:**\n\n> The details regarding the value of $\\gamma$ have not been mentioned. A sensitivity analysis of the value of $\\gamma$ can also be added.\n\n \n\nThanks for your comments. We will provide the analysis about $\\gamma$ on ViT training in the revision paper. We can briefly introduce the results:\n\n \n\n|Model |2e-3 | 1.5e-3 |1e-3 | 5e-4 |\n|---------|------|---------|------|-------|\n|WRN-28-10| 84.7 | 85.1 | 85.3 | 85.4 |\n|ViT-B-16 | 79.7 | 80.2 | 80.7 | 80.7 |\n|ViT-S-16 | 77.3 | 77.9 | 78.5 | 78.2 |\n\n \n\nWe can find that $\\gamma$ shows a relatively stable pattern. For most cases, R-SAM can achieve a better performance compared with vanilla SAM. \n\n### **Q7:**\n\n> Question 3c of the checklist is answered as, \"It would be too computationally expensive for us. We conduct all the experiments at least three times and report the average value.\" This is a bit confusing. I just wanted to clarify whether the authors have run all the experiments three times. This is significant because the increase with R-SAM is small in many experiments and within the error bound (as mentioned in SAM).\n\n \n\nThanks for your comments. We have run the experiments for 3 times and report the mean value in the tables of the paper. 
We will update this in the revision and also provide error bars in the tables:\n\n \n\n- CIFAR-10:\n\n\n\n|Model |SGD |SGD+M | RMSProp | AdamW | LPF-SGD |SAM | R-SAM |\n|---------|------|-------|---------|-------|---------|------|-------|\n|ResNet-18| 95.4 $\pm0.1$ | 95.6 $\pm0.2$ | 95.4 $\pm0.2$ | 95.1 $\pm0.1$ | 95.9 $\pm0.1$ | 96.4 $\pm0.2$ | 96.5 $\pm0.1$ |\n|ResNet-50| 95.8 $\pm0.1$ | 95.7 $\pm0.1$ | 95.7 $\pm0.1$ | 96.0 $\pm0.1$ | 96.3 $\pm0.2$ | 96.7 $\pm0.1$ | 96.9 $\pm0.1$ |\n|WRN-28-10| 96.4 $\pm0.1$ | 96.5 $\pm0.1$ | 96.4 $\pm0.2$ | 96.0 $\pm0.1$ | 96.8 $\pm0.1$ | 97.3 $\pm0.1$ | 97.5 $\pm0.1$ |\n\n \n\n- CIFAR-100\n\n \n\n|Model |SGD |SGD+M | RMSProp | AdamW | LPF-SGD |SAM | R-SAM |\n|---------|------|-------|---------|-------|---------|------|-------|\n|ResNet-18| 78.0 $\pm0.1$ | 78.9 $\pm0.2$ | 79.4 $\pm0.1$ | 77.7 $\pm0.1$ | 80.2 $\pm0.1$ | 80.9 $\pm0.2$ | 81.4 $\pm0.2$ |\n|ResNet-50| 80.9 $\pm0.2$ | 81.4 $\pm0.2$ | 81.4 $\pm0.1$ | 80.8 $\pm0.1$ | 82.1 $\pm0.2$ | 83.3 $\pm0.1$ | 84.0 $\pm0.1$ |\n|WRN-28-10| 81.1 $\pm0.1$ | 81.7 $\pm0.1$ | 81.7 $\pm0.1$ | 80.1 $\pm0.2$ | 82.6 $\pm0.1$ | 84.6 $\pm0.1$ | 85.2 $\pm0.1$ |\n\n \n\n- ImageNet\n\n \n\n|Model |AdamW | LPF-SGD |SAM | R-SAM |\n|---------|------|---------|------|-------|\n|ViT-B-16 | 74.7 $\pm0.1$ | 75.9 $\pm0.2$ | 79.8 $\pm0.1$ | 80.7 $\pm0.1$ |\n|ViT-S-16 | 74.9 $\pm0.2$ | 75.8 $\pm0.1$ | 77.9 $\pm0.2$ | 78.7 $\pm0.2$ |\n\n### **Q8:**\n\n> In the results section, it is unclear why the authors are discussing the improvement of the original SAM over SGD+M and AdamW (L233-238, L246-250, L261-264). These results have been discussed in the SAM work.\n\n \n\nThanks for your comments. We will rewrite that part of the experiments section.\n\n### **Q9:**\n\n> The method proposed by Chaudhari et al. is called Entropy-SGD and not \"Energy-SGD,\" as mentioned by the authors. (L53)\n\n \n\nThanks for your comments. We have corrected it in the revision.\n\n",
" Thanks for your insightful comments, we carefully address your concerns below. \n\n### **Q1:**\n\n> Relationship between stable gradient and approximation of inner maximization term is not very clear. Also, the term stable gradient is not well-defined. If a stable gradient were the goal, then $\\rho = 0$ (SGD) or a low $\\rho$ would lead to the most stable gradient according to the cosine definition. However, it is observed that a very low $\\rho$ value does not lead to a significant gain in accuracy over SGD.\n\nThanks for your comments. By gradient stability, we mean the Lipchitz constant of gradient, i.e., $\\frac{||g(w)-g(w+\\epsilon)||}{||\\epsilon||}$, which measures how a perturbation on $w$ will change the gradient. When this value equals to $0$ within distance $\\rho$, the gradient is a constant within the region so one-step gradient ascent can get the maximum. However, when the value is large (gradient is unstable), since the gradient at $w$ can be very different from other points within the region, it is harder to solve the inner maximization problem. Computing this constant over the whole region is difficult, so we use $||g(w) - g(w+\\rho\\frac{g(w)}{||g(w)||})||/\\rho$ as the proxy.\n\nIn this paper, we discuss gradient stability in the context of a fixed $\\rho$ (SAM's parameter) so it is related to how good can we solve it with one-step gradient ascent, as mentioned above. When $\\rho$ is small, we can solve the inner maximization problem better but also SAM will encounter worse performance.\n\n### **Q2:**\n\n> LPF-SGD [1] is a recently proposed method that aims to reach flat optima. The algorithm of LPF-SGD also adds Gaussian Noise to the parameters with the variance equal to the norm of filter weights, similar to R-SAM.\n\n \n\nThanks for your comments. LPF-SGD try to use gaussian noise to directly perturb the weight in the gradient descent step. However, we try to use gaussian noise to smooth the loss landscape in the gradient ascent step. We also try to compare our proposed R-SAM with LPF-SGD and the experimental result is shown in the Table 1 and Table 2 of the revision.\n\n - CIFAR-100\n\n|Model |SGD |SGD+M | RMSProp | AdamW | LPF-SGD |SAM | R-SAM |\n|---------|------|-------|---------|-------|---------|------|-------|\n|ResNet-18| 78.0 | 78.9 | 79.4 | 77.7 | 80.2 | 80.9 | 81.4 |\n|ResNet-50| 80.9 | 81.4 | 81.4 | 80.8 | 82.1 | 83.3 | 84.0 |\n|WRN-28-10| 81.1 | 81.7 | 81.7 | 80.1 | 82.6 | 84.6 | 85.2 |\n\n \n - ImageNet\n\n|Model |AdamW | LPF-SGD |SAM | R-SAM |\n|---------|------|---------|------|-------|\n|ViT-B-16 | 74.7 | 75.9 | 79.8 | 80.7 |\n|ViT-S-16 | 74.9 | 75.8 | 77.9 | 78.7 |\n\n### **Q3:**\n\n> The increase in accuracy of R-SAM in comparison to SAM is minimal in most settings (Table 1, Table 2, Figure 3a).\n\n \n\nThanks for your comments. SAM has achieved a great accuracy (about 97\\%) on CIFAR-10. So there is relatively small room for improvements. On CIFAR-100, SAM already achieve great performance but we noticed that R-SAM can consistently achieve about 0.5\\% accuracy improvement over SAM.\n\nChen et al. illustrates that SAM can obtain a great performance on ViT and therefore our main experimental results focus on ViT (Table 3, Table 4 and Figure 3). We can find that R-SAM can achieve about 0.9\\% improvement for ViT-B-16.\n\n \n\n[1] Chen, Xiangning, Cho-Jui Hsieh, and Boqing Gong. 'When vision transformers outperform ResNets without pre-training or strong data augmentations.' 
ICLR 2022.\n\n### **Q4:**\n\n> In Figures 1 and 2, it would be clearer if the authors also mentioned the model's accuracy in a Table. For instance, $\rho$ = 1.0 leads to the max loss value in Figure 2. Is that observed in the performance as well? (Also, why is the scale of the rho value in both the figures very different?)\n\n \n\nSorry for the confusion. It is not true that a larger value in Figure 2 implies better performance. In SAM's paper, it has been shown that a larger $\rho$ will over-regularize the model, which leads to degraded prediction accuracy. Therefore, for a larger $\rho$ such as $\rho=1$, even if we can solve the inner maximization very well, it is not helpful for getting a better model.\n\n \n\nDue to the limited time, we mainly focus on the model's accuracy when $\rho=0.1, 0.2$ and $0.3$. We will provide more experimental results for Figure 2 in the revision.\n\n \n\n|Model |$\rho=0.1$ |$\rho=0.2$ |$\rho=0.3$ |\n|---------|-----------|-----------|-----------|\n|WRN-28-10| 84.3 | 84.9 | 85.2 | \n\n",
" Thanks for your constructive and positive feedback, we carefully address your concerns below.\n\n### **Q1:**\n\n> In Fig. 1, the \"flatness\" of w is measured by the cosine value between g(w) and $g(w+ \\rho g(w)/||g(w)||$). But is it a suitable measure for all w? If w is close to a local minimum and g(w) is close to zero, then directions of both g(w) and $g(w+ \\rho g(w)/||g(w)||)$ are \"random\", and the cosine value could be small even in a very flat region. The detailed settings of experiments in Fig. 1 are not provided. If the w is obtained after training, we need to consider this case.\n\n \n\nSorry for the confusion. We use the weight $w$ after training for Figure 1. For this experiment, we observe the gradient is not very small during the training process. For example, we find that the gradient norm varies between 2 and 4.\n\n### **Q2:**\n\n> There are some typos in figures. In Fig. 1, $\\hat g(w)$ should be $g(\\hat w)$. In Fig. 2, $L(w+\\rho g(w)/g(w))$ should be $g(w+\\rho g(w)/||g(w)||)$.\n\n \n\nSorry for the confusion. We have corrected them in the revision. \n\n### **Q3:**\n\n> Do authors miss the mathematical expectation notation somewhere? For example, in Theorem 1, $L_s(w)=L(w+\\delta_0)$ or $E[L(w+\\delta_0)]$? (If it is the former one, we cannot say that $L_s(w)$ must have a smaller Lipschitz constant.)\n \n\nSorry for the confusion. We have clarified the description in the revision. For Theorem 1, $L_{S}(\\boldsymbol{ w}) = L(\\boldsymbol{ w} + \\boldsymbol{ \\delta_{0}})$ is more smooth than the original loss function $L(\\boldsymbol{w})$ when $\\frac{\\alpha}{\\gamma} \\leq \\beta$.\n\n### **Q4:**\n\n> Baseline methods include SGD (and modified SGD) without SAM and existing SAM methods. Certainly it is impossible to compare the proposed method with all regularization techniques, but it would be better to consider some advanced non-SAM strategies here.\n\n \n\nThanks for your comments. We try to use weight decay, label smoothing, dropout and data augmentation for the experiments in Table 3 and Table 4. We have provided these details in the revision. In addition, we try to provide more experimental results about non-SAM method. Due to the limited time, we firstly try to compare our proposed method with LPF-SGD in the revision.\n\n \n - CIFAR-100\n \n|Model |SGD |SGD+M | RMSProp | AdamW | LPF-SGD |SAM | R-SAM |\n|---------|------|-------|---------|-------|---------|------|-------|\n|ResNet-18| 78.0 | 78.9 | 79.4 | 77.7 | 80.2 | 80.9 | 81.4 |\n|ResNet-50| 80.9 | 81.4 | 81.4 | 80.8 | 82.1 | 83.3 | 84.0 |\n|WRN-28-10| 81.1 | 81.7 | 81.7 | 80.1 | 82.6 | 84.6 | 85.2 |\n\n \n - ImageNet\n\n|Model |AdamW | LPF-SGD |SAM | R-SAM |\n|---------|------|---------|------|-------|\n|ViT-B-16 | 74.7 | 75.9 | 79.8 | 80.7 |\n|ViT-S-16 | 74.9 | 75.8 | 77.9 | 78.7 |\n\n \n\nFrom the above experimental results, we can find that although the non-sam method LPF-SGD can achieve a great performance compared with traditional optimizers (SGD+M, AdamW), It still exists a performance gap between LPF-SGD and SAM.\n\n### **Q5:**\n\n> As stated by authors, \"deep learning is usually sensitive to hyper-parameters\". So the small improvement of the accuracy cannot strongly prove that RSAM significantly outperforms the other SAM methods. I suggest to add some numerical results to show that RSAM can find \"flatter areas\", which is more meaningful than 0.1% improvement in accuracy.\n\n \n\nThanks for your constructive comments. 
We use the dominant Hessian eigenvalue to analyze the flatness of the regions that SAM and R-SAM find in ViT. Due to the limited time, we mainly focus on the analysis of ViT-B-16. We will provide more analysis in the revision. The results are shown in the table: \n\n|Model |AdamW |SAM | R-SAM |\n|---------|------|------|-------|\n|ViT-B-16 | 727.1 | 21.2 | 17.4 |\n|ViT-S-16 | 571.2 | 20.7 | 11.1 |\n\nFrom this table, we can find that SAM leads to a lower dominant Hessian eigenvalue and therefore promotes convergence to a flat region. In addition, R-SAM can further reduce the dominant Hessian eigenvalue and converge to a flatter region. We will explore this further in the future. \n\n[1] Xiangning Chen, Cho-Jui Hsieh, Boqing Gong. "When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations" ICLR 2022. \n",
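As a side note, the dominant Hessian eigenvalue reported in such tables can be estimated without ever forming the Hessian, via power iteration on Hessian-vector products. The sketch below uses a toy loss and a fixed iteration count as assumptions; it is a generic recipe, not the authors' exact evaluation code.

```python
import torch

def dominant_hessian_eigenvalue(loss_fn, w, iters=50):
    """Power iteration using Hessian-vector products (Rayleigh quotient)."""
    v = torch.randn_like(w)
    v = v / v.norm()
    eig = torch.tensor(0.0)
    for _ in range(iters):
        w_ = w.detach().requires_grad_(True)
        g = torch.autograd.grad(loss_fn(w_), w_, create_graph=True)[0]
        hv = torch.autograd.grad(torch.dot(g, v), w_)[0]  # Hessian-vector product Hv
        eig = torch.dot(v, hv)                            # v^T H v with ||v|| = 1
        v = hv / hv.norm()
    return eig

torch.manual_seed(0)
w = torch.randn(20)
loss_fn = lambda w: (torch.sin(3.0 * w) ** 2).sum() + 0.1 * torch.dot(w, w)
print(dominant_hessian_eigenvalue(loss_fn, w).item())
```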
" ### **Q7:**\n\n> Was a grid search done for SAM (e.g., Table 1 and 2)? And how are the hyperparameters tuned for each dataset for the proposed method, R-SAM?\n\n \n\nThanks for your comments. We use grid search for all the experimental results about SAM. The comparison between our reported results and the results in the paper of vanilla SAM are is shown in the Table:\n\n \n\n|Model |CIFAR-10 |CIFAR-100 |\n|--------------|----------|-----------|\n|Vanilla SAM | 97.3 | 83.5 |\n|Reported SAM | 97.3 | 84.6 |\n\n \n\nWe can find that our reported result is the same as the result in vanilla SAM in CIFAR-10 and our reported accuracy is higher than vanilla SAM in CIFAR-100. ",
" ### **Improvement 5:**\n\n> In addition, the role of the noise covariance is not sufficiently discussed. It’s unclear whether it’s crucial or not. For such a simple proposed idea, I’d expect much more rigorous ablation study for the key elements of the proposed method.\n\n \n\nThanks for your comments. We will provide the sensitivity analysis of gaussian noise in the revision. More specially, we would like to analyze the effects of $\\gamma$ for accuracy. The results are shown in the table:\n\n \n\n|Model |2e-3 | 1.5e-3 |1e-3 | 5e-4 |\n|---------|------|---------|------|-------|\n|WRN-28-10| 84.7 | 85.1 | 85.3 | 85.4 |\n|ViT-B-16 | 79.7 | 80.2 | 80.8 | 80.8 |\n|ViT-S-16 | 77.3 | 77.9 | 78.5 | 78.2 |\n\n \n\nFrom the above table, we can find that the noise plays an important role in the performance gain of R-SAM. There's a tradeoff but in general it's not hard to choose a good $\\gamma$.\n\n### **Improvement 6:**\n\n> The original SAM paper already pointed out that multiple steps of projected gradient ascent aren’t helpful (see Table 11 in https://arxiv.org/abs/2010.01412). That would be a more direct way to more accurately solve the inner maximization.\n\n \n\nThanks for your comments. This is a great question to further explore. We guess that multi-step also can obtain a better solution for the inner maximization problem, but the solution is different from the random smoothing based solution. For example, it is possible that there's a particular sharp region (or direction) where loss increases quickly; and minimizing this worst-case loss may not benefit too much to model's generalization. Instead, our method won't find those points due to the introduction of randomness. However, this is just our educational guess based on the experimental results, and we will further explore it in our future work.\n\n### **Q1:**\n\n \n\n> 'However, the model usually locates in the sharp minima [31] where the unstable gradient, to a large extent, makes one-step gradient ascent performs poorly.' – This statement is surprising given that SAM aims to minimize sharpness and converge to a flat region. Moreover, this sentence leads to the impression that [31] demonstrated that this is somehow the case, although [31] has nothing to do with this.\n\n \n\nThanks for your comments. We have removed this statement in the paper. We were intended to say that the inner maximization may not be well approximated with one-step gradient ascent in SAM. (and then you can talk about [31])\n\nAlthough [31] doesn't demonstrate the relationship between SAM and flatness, we still find that the loss value will be increased when we further perturb the weights of SAM. Based on that, we think SAM can converge to a more flat region than a traditional first-order optimizer, but still not very flat.\n\n### **Q2:**\n\n> \"Therefore, how to improve the approximation of the inner maximization procedure and obtain a more aggressive weight is an important problem” – What is meant by the “aggressive weight\"?\n\n \n\nSorry for the confusion. We have rewritten this sentence in the revision. \n\n### **Q3:**\n\n> \"However, the loss landscape of neural network is usually sharp and non-linear [30, 34].\" – Surprising to see statements like this as the loss landscape of a linear model is also non-linear and may be considered sharp depending on the data covariance.\n\n\nSorry for the confusion. 
We have rewritten this statement in the revision.\n\n### **Q4:**\n\n> \"we observe that models can still locate in a sharp region, leading to poor performance on one-step gradient ascent.\" – This statement isn’t complemented by sharpness measurements.\n\n \n\nSorry for the confusion. We have rewritten this part in the revision. \n\n### **Q5:**\n\n> \"we are able to apply the one-step gradient ascent on the smoothed weights for a much more accurate measurement of the inner maximization\" – There is no evidence for this in the paper. Fig. 2 is done for a different algorithm.\n\n \n\nThanks for your comments. As shown in the previous answers, Equation (6) and Equation (7) share a similar form, except for a different scaling constant. Equation (6) can be represented as Equation (7) when we tune the values of $\lambda$ and $\rho_{2}$ to make sure $\rho_{2} = \|\delta_{0}+\lambda g(\hat{w})\|$ and $\lambda = \frac{\rho_{1}}{\|g(w+\delta_{0})\|}$. If not, we can tune the values of $\rho_{2}$ and $\lambda$ to make sure that $\rho_{2} \frac{\lambda}{\|\delta_{0} + \lambda g(\hat{w})\|} = \frac{\rho_{1}}{\|g(w+\delta_{0})\|}$. In this way, the main difference between Equation (6) and Equation (7) will be the term involving the random noise $\delta_{0}$: $\delta_{0}$ in Equation (6) and $\frac{\rho_{2}}{\|\delta_{0} + \lambda g(\hat{w})\|}\delta_{0}$ in Equation (7). Therefore, Equation (7) can be seen as introducing additional random noise compared with Equation (6).\n\n### **Q6:**\n\n> Eq. (2): there should be a division by the gradient norm, not multiplication.\n\n \n\nSorry for the confusion. We have corrected Equation (2) in the revision.\n\n\n",
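Putting the pieces of these answers together, below is a single-tensor sketch of one R-SAM update following Equation (7). `loss_fn` and all hyper-parameter values are illustrative assumptions; isotropic noise is used here for simplicity, whereas the noise covariance discussed above would replace it in practice.

```python
import torch

def rsam_step(w, loss_fn, rho=0.1, gamma=0.05, lam=1.0, lr=0.01):
    """One R-SAM update on a flat parameter tensor, following Eq. (7).
    Isotropic Gaussian noise is an assumption made for this sketch."""
    delta0 = gamma * torch.randn_like(w)                   # smoothing noise
    w_s = (w + delta0).detach().requires_grad_(True)       # smoothed weights
    g_hat = torch.autograd.grad(loss_fn(w_s), w_s)[0]      # gradient g(w + delta0)
    d = delta0 + lam * g_hat                               # combined ascent direction
    w_adv = (w + rho * d / d.norm()).detach().requires_grad_(True)
    g_sam = torch.autograd.grad(loss_fn(w_adv), w_adv)[0]  # gradient at w_adv
    return w - lr * g_sam                                  # descent step applied at w

torch.manual_seed(0)
w = torch.randn(10)
loss_fn = lambda w: (torch.sin(4.0 * w) ** 2).sum() + 0.05 * torch.dot(w, w)
for _ in range(100):
    w = rsam_step(w, loss_fn)
print(loss_fn(w).item())
```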
" Thanks for your constructive feedback, we carefully address your concerns below.\n\n \n\n### **Improvement 1:**\n\n> A crucial baseline, Bisla et al. (AISTATS’22), is cited but isn’t compared to or discussed in sufficient depth. A comparison to their method (which is not using worst-case perturbations) would be beneficial, especially since they leverage random perturbations with the same covariance as suggested in this paper. I wonder if it’s key to the performance of this method.\n\n \n\nThanks for your constructive suggestion. We will provide more discussion about LPF-SGD in the revision. More specially, we try to compare R-SAM with LPF-SGD and the experimental results are shown in the following tables:\n\n- CIFAR-100\n\n|Model |SGD |SGD+M | RMSProp | AdamW | LPF-SGD |SAM | R-SAM |\n|---------|------|-------|---------|-------|---------|------|-------|\n|ResNet-18| 78.0 | 78.9 | 79.4 | 77.7 | 80.2 | 80.9 | 81.4 |\n|ResNet-50| 80.9 | 81.4 | 81.4 | 80.8 | 82.1 | 83.3 | 84.0 |\n|WRN-28-10| 81.1 | 81.7 | 81.7 | 80.1 | 82.6 | 84.6 | 85.2 |\n\n \n - ImageNet\n\n|Model |AdamW | LPF-SGD |SAM | R-SAM |\n|---------|------|---------|------|-------|\n|ViT-B-16 | 74.7 | 75.9 | 79.8 | 80.7 |\n|ViT-S-16 | 74.9 | 75.8 | 77.9 | 78.7 |\n\n \n\nFrom these tables we can find that our method is better than LPF-SGD, so integrating randomness in inner gradient ascent is important to achieve better performance.\n\n \n\n### **Improvement 2**:\n\n> 'That means the gradient of smoothed weight can better maximize the loss value in a large region.' – But do we really need such large weight perturbations? For $\\rho$=1.0, the training loss is increased to $\\approx$ 6.0 (Fig. 2) which is even higher than the loss of a random classifier ($\\approx$ ln(100) $\\approx$ 4.6).\n\n \n\nThanks for your comments. We have modified this in the revision. In order to demonstrate the empirical intuition more clearly, we use a larger $\\rho$ value to illustrate the long-term trend. Actually, from the figures, we can find that a stronger Gaussian perturbation as the initialization can obtain a larger loss value for the inner maximization when $\\rho$ is 0.3 for most cases, which is close to the $\\rho$ values we used in practice.\n\n \n\n### **Improvement 3:**\n\n> Improvement: Fig. 1 and 2: is it done for test-time BatchNorm? If yes, then it should be remade for training-time BatchNorm or for a network trained with a different normalization method. In practice, the difference between sharpness estimates of test-time and training-time BatchNorm is very significant (see, e.g., https://arxiv.org/abs/2206.06232)\n\n\nThanks for your comments and reference. We use training-time BatchNorm for Figure 1 and Figure 2 in the original submission. Based on the conclusion in your provided reference, we have clarified it in the revision. \n\n### **Improvement 4:**\n\n> The loss values in Fig. 2 are checked for the algorithm described in Eq. (6) while the proposed algorithm of R-SAM presented in Eq. (7) differs from Eq. (6) by the presence of the projection and a special covariance matrix of the noise. Intuitively, the projection step can make a lot of difference and I’m not sure if the algorithm in Eq. 
(7) will lead to a more accurate maximizer compared to the vanilla SAM.\n\n \n\nEquation (6):\n\n $\boldsymbol{w_{adv}} = \boldsymbol{\hat{w}} + \rho_{1} \frac{\boldsymbol{g(\hat{w})}}{\|\boldsymbol{g(\hat{w})}\|} = \boldsymbol{w} + \boldsymbol{\delta_{0}} + \rho_{1} \frac{\boldsymbol{g(w+\delta_{0})}}{\|\boldsymbol{g(w+\delta_{0})}\|}$\n\n \n\nEquation (7):\n\n\n\n$\boldsymbol{w_{adv}} = \boldsymbol{w} + \rho_{2} \frac{\boldsymbol{\delta_{0}}+\lambda\boldsymbol{g(\hat{w})}}{||\boldsymbol{\delta_{0}} + \lambda \boldsymbol{g(\hat{w})}||}$\n\n \n\nThanks for your comments. In fact, we can show that Equation (6) and Equation (7) share a similar form, except for a different scaling constant.\n\n \n\nFirst, we analyze Equation (7):\n\n \n\n$\boldsymbol{w_{adv}} = \boldsymbol{w} + \rho_{2} \frac{\boldsymbol{\delta_{0}}+\lambda \boldsymbol{g(\hat{w})}}{||\boldsymbol{\delta_{0}} + \lambda \boldsymbol{g(\hat{w})}||}\n= \boldsymbol{w} + \rho_{2} \frac{\boldsymbol{\delta_{0}}}{\|\boldsymbol{\delta_{0}} + \lambda \boldsymbol{g(\hat{w})}\|} + \rho_{2} \frac{\lambda \boldsymbol{g(\hat{w})}}{\|\boldsymbol{\delta_0}+\lambda\boldsymbol{g(\hat{w})}\|}$\n\n\n\nIf $\rho_{2} = \|\delta_{0}+\lambda g(\hat{w})\|$ and $\lambda = \frac{\rho_{1}}{\|g(w+\delta_{0})\|}$, we can find that Equation (6) is equal to Equation (7). If not, we can tune the values of $\rho_{2}$ and $\lambda$ to make sure that $\rho_{2} \frac{\lambda}{\|\delta_{0} + \lambda g(\hat{w})\|} = \frac{\rho_{1}}{\|g(w+\delta_{0})\|}$. Based on that, the main difference between Equation (6) and Equation (7) will be the term involving the random noise $\delta_{0}$: $\delta_{0}$ in Equation (6) and $\frac{\rho_{2}}{\|\delta_{0} + \lambda g(\hat{w})\|}\delta_{0}$ in Equation (7). Therefore, Equation (7) can be seen as introducing additional random noise compared with Equation (6).\n\n ",
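The algebra above is easy to check numerically; in the sketch below the random vectors and the choice $\rho_1 = 0.1$ are arbitrary assumptions:

```python
import torch

torch.manual_seed(0)
delta0, g_hat = torch.randn(5), torch.randn(5)   # noise and smoothed-point gradient

def eq6(delta0, g_hat, rho1):                    # w_adv - w from Equation (6)
    return delta0 + rho1 * g_hat / g_hat.norm()

def eq7(delta0, g_hat, rho2, lam):               # w_adv - w from Equation (7)
    d = delta0 + lam * g_hat
    return rho2 * d / d.norm()

rho1 = 0.1
lam = rho1 / g_hat.norm()                        # lambda = rho1 / ||g(w + delta0)||
rho2 = (delta0 + lam * g_hat).norm()             # rho2 = ||delta0 + lambda * g_hat||
print(torch.allclose(eq6(delta0, g_hat, rho1), eq7(delta0, g_hat, rho2, lam)))  # True
```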
" The paper proposes to modify the Sharpness-Aware Minimization (SAM) algorithm to include a random perturbation before taking the gradient ascept step. The paper presents some justifications to why it could be beneficial, based on the smoothing argument. The experimental results show the effectiveness of the proposed method.\n - **Originality.** Low. The idea of adding noise before taking the gradient ascent step of SAM is very simple.\n- **Quality.** Medium. The experimental evaluation of R-SAM is quite convincing. The findings that SAM/R-SAM can quite noticeably boost the OOD performance (ImageNet-R, ImageNet-C, etc) is also nice. On the other hand, the motivation of the proposed method requires significant improvements (see below).\n- **Clarity.** Medium. Most of the content is clear.\n- **Significance.** Overall, I appreciate the empirical effectiveness of the proposed method which could be of interest to practitioners. However, the paper requires multiple improvements listed below.\n\nThings that need improvement:\n- A crucial baseline, Bisla et al. (AISTATS’22), is cited but isn’t compared to or discussed in sufficient depth. A comparison to their method (which is **not** using worst-case perturbations) would be beneficial, especially since they leverage random perturbations with the same covariance as suggested in this paper. I wonder if it’s key to the performance of this method.\n- *“That means the gradient of smoothed weight $w + \\delta_0$ can better maximize the loss value in a large region.”* – But do we really need such large weight perturbations? For $\\rho=1.0$, the training loss is increased to $\\approx 6$ (Fig. 2) which is even higher than the loss of a random classifier ($\\approx ln(100) \\approx 4.6$).\n- Fig. 1 and 2: is it done for test-time BatchNorm? If yes, then it should be remade for training-time BatchNorm or for a network trained with a different normalization method. In practice, the difference between sharpness estimates of test-time and training-time BatchNorm is very significant (see, e.g., https://arxiv.org/abs/2206.06232).\n- The loss values in Fig. 2 are checked for the algorithm described in Eq. (6) while the proposed algorithm of R-SAM presented in Eq. (7) differs from Eq. (6) by the presence of the projection and a special covariance matrix of the noise. In order to properly motivate R-SAM (and not some other possible variation), one has to perform the experiment in Fig. 2 specifically for R-SAM. Intuitively, the projection step can make a lot of difference and I’m not sure if the algorithm in Eq. (7) will lead to a more accurate maximizer compared to the vanilla SAM.\n- In addition, the role of the noise covariance is not sufficiently discussed. It’s unclear whether it’s crucial or not. For such a simple proposed idea, I’d expect much more rigorous ablation study for the key elements of the proposed method.\n- The original SAM paper already pointed out that multiple steps of projected gradient ascent aren’t helpful (see Table 11 in https://arxiv.org/abs/2010.01412). 
That would be a more direct way to more accurately solve the inner maximization.\n- Error bars are necessary to claim that the observed improvement is indeed significant.\n\n\nIn multiple places throughout the paper, the writing is not precise and some claims don’t seem to be justified:\n- “However, the model usually locates in the sharp minima [31] where the unstable gradient, to a large extent, makes one-step gradient ascent performs poorly.” – This statement is surprising given that SAM aims to minimize sharpness and converge to a flat region. Moreover, this sentence leads to the impression that [31] demonstrated that this is somehow the case, although [31] has nothing to do with this.\n- “Therefore, how to improve the approximation of the inner maximization procedure and obtain a more aggressive weight is an important problem” – What is meant by the “aggressive weight”?\n- “However, the loss landscape of neural network is usually sharp and non-linear [30, 34].” – Surprising to see statements like this as the loss landscape of a **linear** model is also non-linear and may be considered sharp depending on the data covariance.\n- “we observe that models can still locate in a sharp region, leading to poor performance on one-step gradient ascent.” – This statement isn’t complemented by sharpness measurements.\n- “we are able to apply the one-step gradient ascent on the smoothed weights for a much more accurate measurement of the inner maximization” – There is no evidence for this in the paper. Fig. 2 is done for a different algorithm (see my concern about Eq. (6) vs. Eq. (7)).\n- Eq. (2): there should be a division by the gradient norm, not multiplication.\n - Was a grid search done for SAM (e.g., Tables 1 and 2)? And how are the hyperparameters tuned for each dataset for the proposed method, R-SAM?\n The proposed method has three hyperparameters: the perturbation radius $\rho$, $\gamma$ for the standard deviation of the noise step, and $\lambda$ for the step size of the gradient. In comparison, SAM has only one hyperparameter. The hyperparameter sensitivity presented in **4.6 Sensitivity Analysis** is not very convincing as it shows only a one-dimensional grid search (while keeping the other two dimensions fixed to the optimal values) and, for example, the range of $\rho$ presented in Fig. 3 is in fact quite narrow (from 1.2 to 1.6).\n\n---\n\n**Update after the rebuttal**\n*Overall, I'd say that the improvements coming from R-SAM are consistent across multiple settings, which makes the proposed method practically useful. However, the core idea of simply adding random noise (although with specific covariance) before the ascent step of SAM is extremely simple and lacks novelty. Moreover, I think the paper could've been implemented more carefully in various aspects (mentioned above) such as careful ablation studies and better motivation of the proposed method. I strongly feel that since the proposed method is so simple, the paper has to compensate for it with very careful and rigorous empirical validations. I increase my score to 4 in light of the improvements of the paper presented during the rebuttal but still feel that the paper is below the acceptance bar.*",
" Sharpness-Aware Minimization (SAM) is an emerging deep learning technology in recent years, which aims to find parameters lie in a \"flat\" region of the energy landscape. Its purpose can be formulated as a minimax problem as shown in (4): A parameter vector w_adv is \"optimal\" in the sense of SAM if the loss of worst solution in the neighborhood of w_adv is minimized. The vanilla SAM solved this problem by approximating the landscape around a solution w as a linear function. This paper investigates this approximation thoroughly by experimental and theoretical analysis, and shows that this approximation is possibly poor in applications. Inspired by recent works in the field of adversarial training, authors improve the vanilla by introducing noise wo weight parameters.\n Strengths:\n\nBoth theoretical and numerical analysis is thorough. The idea is simple and easy-to-use. The paper is clearly written and the contribution is significant.\n\nWeakness:\n\nSee Questions. 1) In Fig. 1, the \"flatness\" of w is measured by the cosine value between g(w) and g(w+rho*g(w)/||g(w)||). But is it a suitable measure for all w? If w is close to a local minimum and g(w) is close to zero, then directions of both g(w) and g(w+rho*g(w)/||g(w)||) are \"random\", and the cosine value could be small even in a very flat region. The detailed settings of experiments in Fig. 1 are not provided. If the w is obtained after training, we need to consider this case. \n\n2) There are some typos in figures. In Fig. 1, \\hat g(w) should be g(\\hat w). In Fig. 2, L(w+rho*g(w)/g(w)) should be g(w+rho*g(w)/||g(w)||).\n\n3) Do authors miss the mathematical expectation notation somewhere? For example, in Theorem 1, L_s(w)=L(w+delta_0) or E[L(w+delta_0)]? (If it is the former one, we cannot say that L_s(w) must have a smaller Lipschitz constant.)\n\n4) Baseline methods include SGD (and modified SGD) without SAM and existing SAM methods. Certainly it is impossible to compare the proposed method with all regularization techniques, but it would be better to consider some advanced non-SAM strategies here.\n\n5) As stated by authors, \"deep learning is usually sensitive to hyper-parameters\". So the small improvement of the accuracy cannot strongly prove that RSAM significantly outperforms the other SAM methods. I suggest to add some numerical results to show that RSAM can find \"flatter areas\", which is more meaningful than 0.1% improvement in accuracy. Authors have adequately addressed the limitations and potential negative societal impact of their work.",
" This work analyzes the approximation quality of the inner maximization term in Sharpness-Aware Minimization (SAM). It is shown that the unstable gradients can hurt the inner maximization. A novel method called Random Sharpness-Aware Minimization (R-SAM) is proposed that smooths the landscape, enabling a better approximation of the inner maximization. This is done by the addition of Gaussian Noise before the computation of the inner maximization term. The results are shown on CIFAR-10, CIFAR-100, and ImageNet datasets. R-SAM improves upon the performance of SAM in most settings. Strengths:\n1. The analysis of the effect of Gaussian noise on the perturbed loss-value (Figure 2) is interesting. The analysis of approximation of gradient in SAM has not been studied and is a good direction.\n2. The proposed method, R-SAM, is simple, has no additional cost compared to SAM, and allows for stable training with large $\\rho$.\n3. The results on combining G-SAM with R-SAM, resulting in R-GSAM, show that the R-SAM can be integrated with other variants of SAM.\n\nWeaknesses:\n1. Relationship between stable gradient and approximation of inner maximization term is not very clear. Also, the term stable gradient is not well-defined. If a stable gradient were the goal, then $\\rho$ = 0 (SGD) or a low $\\rho$ would lead to the most stable gradient according to the cosine definition. However, it is observed that a very low $\\rho$ value does not lead to a significant gain in accuracy over SGD.\n3. LPF-SGD [1] is a recently proposed method that aims to reach flat optima. The algorithm of LPF-SGD also adds Gaussian Noise to the parameters with the variance equal to the norm of filter weights, similar to R-SAM. \n4. The increase in accuracy of R-SAM in comparison to SAM is minimal in most settings (Table 1, Table 2, Figure 3a).\n\n[1] Devansh Bisla, Jing Wang, and Anna Choromanska. Low-pass filtering sgd for recovering flat optima in the deep learning optimization landscape. arXiv preprint arXiv:2201.08025, 2022. 1. In Figures 1 and 2, it would be clearer if the authors also mentioned the model's accuracy in a Table. For instance, $\\rho$ = 1.0 leads to the max loss value in Figure 2. Is that observed in the performance as well? \n(Also, why is the scale of the rho value in both the figures very different?)\n\n2. The experimental details of the CIFAR-100 experiments, including the $\\rho$ value, seem to be missing. A note comparing the $\\rho$ value with SAM and R-SAM can be added. It is to be observed whether training with high $\\rho$ is also stable for ResNets. With ViTs (Table 6), the optimal $\\rho$ value is higher for R-SAM.\n\n3. The details regarding the value of $\\gamma$ have not been mentioned. A sensitivity analysis of the value of $\\gamma$ can also be added.\n\n4. Question 3c of the checklist is answered as, \"It would be too computationally expensive for us. We conduct all the experiments at least three times and report the average value.\" This is a bit confusing. I just wanted to clarify whether the authors have run all the experiments three times. This is significant because the increase with R-SAM is small in many experiments and within the error bound (as mentioned in SAM).\n\n4. In the results section, it is unclear why the authors are discussing the improvement of the original SAM over SGD+M and AdamW (L233-238, L246-250, L261-264). These results have been discussed in the SAM work. \n\n5. The method proposed by Chaudhari et al. 
is called Entropy-SGD and not \"Energy-SGD,\" as mentioned by the authors. (L53) The limitations of the work are not mentioned. This work has no potential negative societal impact.",
" The authors argue that the approximation of SAM's perturbation step, which requires solving an inner maximization objective, is prone to unstable gradients. To improve this approximation step without adding significant additional computational steps, the authors propose adding Gaussian noise to the perturbation initialization. # Strengths \n\n* Significance: Efficient stochastic optimization is fundamental for deep learning, and sharpness-aware methods often (but not always [3]) lead to significantly better generalization performance than previous optimizers. Hence, improving methods like SAM can be very relevant to many optimizer users.\n* Clarity: the paper is well-written and easy to follow. \n* Simplicity: The proposed method is simple, easy to implement, and effective for the shown experiments shown. \n\n# Weaknesses\n\n\nWhile I'm open to being convinced otherwise and raising my score, I'm not fully convinced yet that the main premise of the paper -- randomly smoothening (RS) the loss function **significantly** **improves** over a non-smoothed loss function for approximating the gradient ascent step -- is true. RS seems very heuristical to me; hence, I'd appreciate either more clarification on why it always improves (if so) or more discussion on when it may not hold and what its trade-offs are. \n\nSAM's inner objective's gradient ascent step involves the same gradient computation as standard SGD. Yet, when using standard SGD, we typically do not additionally smooth the loss function. For example, [4] recently proposed doing so, however, with the motivation of finding flatter minima (akin to SAM) and by sampling multiple perturbations instead of just one. If RS leads to more \"stable\" or \"accurate\" gradients, why not apply it to the descent step too? For example, in Algorithm 1, \"Compute SAM gradient\", why do we not additionally perturb $\\mathbf w_{\\text{adv}}$ as well if such smoothening is always beneficial? \n\nFigure 1 shows Vanilla's cosine similarity between the standard and perturbed gradient decays faster than when RS gets applied. However, it is not clear to me why this matters and can't be simply resolved by re-scaling rho.\n\nFigure 2 shows that the RS loss function increases faster in loss when rho is increased. Again, it is unclear why reaching higher losses by perturbing the weights further and further with increasing standard deviations is strictly better or important for ultimately reaching flatter (and therefore, better generalizing) minima.\n\nThe authors claim that Vanilla can not reach loss values as high as RS reaches (although this is not fully proven as rho's range ends at $\\rho=1.0$). However, first perturbing the weights before approximating the sharpest loss in their neighborhood effectively moves the neighborhood ball in a worse-performing region -- it is not centered around $\\mathbf w$ but at $\\mathbf w + \\mathbf \\delta_0$. Since $\\mathbf \\delta_0 \\neq \\nabla L(\\mathbf w)$ , almost surely $L(\\mathbf w) < L(\\mathbf w + \\mathbf \\delta_0)$ . Therefore, it is not surprising that an ascent step starting from $L(\\mathbf w + \\mathbf \\delta_0)$ will reach even higher losses. \n\nFurther, we may note that the authors generated Figure 2 using the checkpoint of the 200th epoch on CIFAR-100. Using a decent learning rate/schedule, 200 epochs are enough for training a Resnet-18/WRN to (almost) convergence on CIFAR100. Hence, it is not clear why we want to reach higher-loss regions at this point during training. 
\n\nLastly, the loss function/y-axis of Figure 2 is not precisely specified. Assuming that it is something like the cross-entropy loss and based on my experience, my guess would be that parameterizations with losses above 3 (which is what vanilla still reaches) resemble the performance of randomly initialized networks or networks within the first few optimization steps/epochs of training. That is when the parameters are far from fruitful valleys and hence, not where we want the gradient ascent step to jump towards. \n\nDespite these issues, I'd be easily convinced of RSAM's utility if the empirical results showed its superiority against other SAM variants and versatility across different tasks. Unfortunately, such results are largely missing.\n\n* While the empirical performance gains over SAM are consistent across the shown experiments, they are less than 1% on average, which resembles the performance gains of other SAM variants, such as GSAM (cited in the paper), ASAM [1], SAM+SWA[3]. However, only for one task (model/dataset combination) do the authors compare against GSAM, where their performance differs by 0.1%, i.e., is essentially identical (as the authors say themselves). \n* I appreciate the additional R-GSAM experiment, but again, I find the number of tasks (which is only one here) too limited. It would be great to see how R-SAM compares against GSAM, ASAM, SWA+SAM and on more tasks, and even if they reach similar performances; it would then be interesting e.g. to combine these and see if they are complementary.\n* This method introduces another hyper-parameter $\\gamma$, which may incur additional tuning overhead for practitioners. While Figure 3c) shows that it is rather stable for three different values on one particular dataset, it would be nice to see a more thorough sensitivity analysis with e.g. more values or a default value used on a few more datasets/architectures (like SAM's rho=0.05).\n* The paper includes thorough image classification results; however, it lacks empirical analyses on other tasks such as NLP or graph learning [2,3]. \n* All empirical results seem to be the result of one training run. It would be great to see averaged results across at least three random seeds $\\pm$ std. \n\nMinor: \n* Figures 1 and 2 show the effect of random smoothing (RS) on the loss surface. Since RS is a stochastic operation, it would be great to see some empirical confidence intervals (e.g., $\\pm 1$ std across 3 seeds). \n\n\n\n[1] ASAM: Adaptive Sharpness-Aware Minimization for Scale-Invariant Learning of Deep Neural Networks, Kwon et al, ICML 2021.\n[2] Sharpness-Aware Minimization Improves Language Model Generalization, Bahri et al, ACL 2022.\n[3] A Fair Comparison of Two Popular Flat-Minima Optimizers: Stochastic Weight Averaging vs. Sharpness-Aware Minimization, Kaddour et al, arXiv:2202.00661.\n[4] Low-Pass Filtering SGD for Recovering Flat Optima in the Deep Learning Optimization Landscape, Bisla et al, 2022. Let me rephrase my previously mentioned weakness as a question here. \n\nIt is not entirely clear why performing the inner maximization gradient ascent step on a smoothed loss function necessarily leads to a more \"accurate\" solution or exactly what that means. Following the same argument, we would expect that we want to apply SGD (solving a minimization objective) on a smoothed loss function too. Yet, this is not standard practice. 
Works like [1] recently proposed to explore such smoothing; however, with the motivation of finding flat minima (akin to SAM), not \"gradient stability\" or more \"accurate\" solutions. It would be great if the authors could clarify why \"gradient stability\" is important for inner maximization. \n\nI'm happy to raise my score if this gets sufficiently addressed. \n\n[1] Low-Pass Filtering SGD for Recovering Flat Optima in the Deep Learning Optimization Landscape, Bisla et al, AISTATS 2022. As far as I can see, the authors do not discuss any limitations of their method. \n"
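A hedged sketch of the random-smoothing (RS) initialization debated in these reviews, i.e., perturbing the weights with Gaussian noise before the ascent step. The per-tensor scaling gamma * ||w|| / sqrt(d) is an assumption inspired by the reviews' description of R-SAM and LPF-SGD, not the paper's exact covariance; all names are ours.

```python
import torch

def randomly_smoothed_start(params, gamma=0.01):
    """Return w + delta_0, with per-tensor noise std gamma * ||w||_2 / sqrt(d) (assumed)."""
    out = []
    for p in params:
        std = gamma * p.norm(p=2) / (p.numel() ** 0.5)  # assumed scaling, not the paper's
        out.append(p + std * torch.randn_like(p))       # delta_0 ~ N(0, std^2 I)
    return out
```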
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
8,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
4,
3
] | [
"nips_2022_htUvh7xPoa",
"GXFt8247X_D",
"Jzga_pj59p6",
"51hzizhZq_sv",
"vDGFc9xkDmCM",
"msihoDDRAD",
"RACcRH0dqV7",
"vl1GRud_nmJ",
"BJxQxxDvN9j",
"SMdKl8qeio",
"2E2zVUc1Yi2",
"K-5dAOXnViD",
"dIGDYo8l07Z",
"nips_2022_htUvh7xPoa",
"nips_2022_htUvh7xPoa",
"nips_2022_htUvh7xPoa",
"nips_2022_htUvh7xPoa"
] |
nips_2022_9ND8fMUzOAr | Expediting Large-Scale Vision Transformer for Dense Prediction without Fine-tuning | Vision transformers have recently achieved competitive results across various vision tasks but still suffer from heavy computation costs when processing a large number of tokens. Many advanced approaches have been developed to reduce the total number of tokens in the large-scale vision transformers, especially for image classification tasks. Typically, they select a small group of essential tokens according to their relevance with the [\texttt{class}] token, then fine-tune the weights of the vision transformer. Such fine-tuning is less practical for dense prediction due to the much heavier computation and GPU memory cost than image classification.
In this paper, we focus on a more challenging problem, \ie, accelerating large-scale vision transformers for dense prediction without any additional re-training or fine-tuning. In response to the fact that high-resolution representations are necessary for dense prediction, we present two non-parametric operators, a \emph{token clustering layer} to decrease the number of tokens and a \emph{token reconstruction layer} to increase the number of tokens. The following steps are performed to achieve this: (i) we use the token clustering layer to cluster the neighboring tokens together, resulting in low-resolution representations that maintain the spatial structures; (ii) we apply the following transformer layers only to these low-resolution representations or clustered tokens; and (iii) we use the token reconstruction layer to re-create the high-resolution representations from the refined low-resolution representations. The results obtained by our method are promising on five dense prediction tasks including object detection, semantic segmentation, panoptic segmentation, instance segmentation, and depth estimation. Accordingly, our method accelerates $40\%\uparrow$ FPS and saves $30\%\downarrow$ GFLOPs of ``Segmenter+ViT-L/$16$'' while maintaining $99.5\%$ of the performance on ADE$20$K without fine-tuning the official weights. | Accept | This paper presents a method to reduce the computational cost of a trained vision transformer for dense prediction. According to the authors' presented experiments, the method can accelerate the transformers effectively without retraining. Although some experiments are not thorough (as discussed below), I see potential in this method and would like to give the research community a chance to see whether the method can be further generalized to other architectures.
The AC does see some strange experimental setups. For instance, it is strange that the authors chose Mask2Former to conduct experiments but use Segmenter to conduct the experiments on ADE20K. Mask2Former already provides quite a strong model on ADE20K. Why use Segmenter for the ADE20K experiments? The AC also observes that the authors compare with ACT on Segmenter as well.
The authors are strongly encouraged to release the code so that the general public can test the method on other architectures. | train | [
"YOtJMWU1KGK",
"_ljYXctMSFO",
"UqD4w1icGP1",
"MA7Wcmg6wP8",
"8CrxXqj5nQ0",
"EYopl8wEyee",
"G-nrv1FAnmo",
"qktSXEYvwN-",
"tJ1XOCsdZn9",
"2NEEQdq38c9",
"QFf9S0S2A8K",
"2-KNyTq5j_",
"s-kzfNBDej7",
"r6AT4zJCIEq"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for the previous careful reviews and valuable suggestions.\n\nWe have learned a lot through the suggested comparisons with TokenPooling[47]/DynamicViT[52]/TokenLearner[55]. We also hope to learn more from your further valuable suggestions.",
" We thank the reviewer for the previous careful reviews and valuable suggestions.\n\nWe have learned a lot from the response from the other reviewers. We also hope to learn more from your further suggestions.",
" We thank the reviewer for the previous careful reviews and constructive suggestions.\n\nWe have learned a lot from the response from the other reviewers. We also would like to hear your further suggestions.",
" We thank the reviewer for your careful response and for increasing the rating. \n\nWe will add the rebuttal contents to the main paper in the revision following your valuable suggestions.",
" Thanks for your responses. The author is encouraged to add the rebuttal contents to the main paper in the future. ",
" We thank the reviewer for the careful reviews and constructive suggestions. We answer the questions as follows.\n\n___\n>\"Since the authors claimed that there is redundancy in the tokens of the vision transformer and that the proposed method is based on clustering, some visualizations could better represent this observation.\"\n\nA: Thanks for your valuable suggestions. We follow your suggestions to visualize the attention maps of neighboring tokens in Figure 5 of the revised supplementary. We would like to revise the visualizations if you could provide any further detailed suggestions.\n\n___\n>\"Clustering attention has been explored by previous approaches such as clustering attention[1] and adaptive clustering attention[2]. Specifically, ACT can slim the high-resolution transformer without retraining. The author is highly encouraged to discuss relationships between previous works. Overall, I think the proposed approach is straightforward and effective. However, the paper lack comparison with previous similar approaches. The author is encouraged to discuss related works, especially on clustering attention topics. [1] Fast transformers with clustered attention [2] End-to-end object detection with adaptive clustering transformer [3] SMYRF: Efficient Attention using Asymmetric Clustering\"\n\nA: Thanks for pointing out these GREAT works on clustering attention[1][2][3] and we will include the references and the following discussions in the revision.\n\n👉 The key differences are:\n1. These clustering attention methods perform clustering within each multi-head self-attention layer (MHSA) independently. **vs.** Our approach only performs clustering once with the token clustering layer and refines the clustered representations with the following transformer layers. Therefore, our approach introduces a much smaller additional overhead caused by the clustering operation.\n2. These clustering attention methods ONLY reduce the computation cost of each MHSA layer equipped with clustering attention as they maintain the high-resolution representations outside the MHSA layers. **vs.** Our approach can reduce the computation cost of both MHSA layers and feed-forward network (FFN) layers after the token clustering layer.\n\nWe further summarize their detailed differences in the following Table:\n\n| Cluster method | Query | Key-Value | FFN | # Clutering times |\n| ----------- | ----------- | ----------- | ----------- | -----------:|\n| ClusteredAttention[1] | ✔️ | ❌ | ❌ | # MHSA layers |\n| ACT[2] | ✔️ | ❌ | ❌ | # MHSA layers |\n| SMYRF[3] | ✔️ | ✔️ | ❌ | # MHSA layers |\n| Ours | ✔️ | ✔️ | ✔️ | $1$ |\n\nIn the above Table, we use ✔️ to mark processing the clustered representations and ❌ to mark processing the original representations.\n\n👉 The similarities are: ACT[2] and SMYRF[3] can also slim & accelerate vision transformer without retraining by applying locality-sensitive hashing (LSH). Specifically, ACT focused on accelerating DETRs while SMYRF focused on BERTs and BigGANs. 
We compare with the more representative ACT[2] as follows.\n\n\n👉 Comparison with ACT[2] without retraining:\nWe follow your suggestions to report the segmentation results of Segmenter+ViT-L based on the official implementations of ACT[2]:\n\n| Cluster method | FPS | GFLOPs | mIoU |\n| -----------| ----------- | ----------- | -----------:|\n| Baseline | $6.2$ | $659.0$ | $51.82$ |\n| Ours ($\mathrm{h}\times\mathrm{w}=24\times24$) | $9.1$ | $388.2$ | $51.32$ |\n| Ours ($\mathrm{h}\times\mathrm{w}=28\times28$) | $8.8$ | $438.9$ | $51.56$ |\n| ACT (#query-hashes=16) | $5.8$ | $578.7$ | $48.12$ |\n| ACT (#query-hashes=24) | $5.3$ | $614.7$ | $51.38$ |\n| ACT (#query-hashes=32) | $5.0$ | $638.2$ | $51.64$ |\n\nAccording to the above results, we can see that (1) ACT also achieves strong performance without retraining, and (2) our approach is a better choice considering the trade-off between performance and FPS & GFLOPs, e.g., our method achieves performance close to ACT's (51.32 vs. 51.38) while running 70% faster (9.1 vs. 5.3) and saving more than 35% GFLOPs (388.2 vs. 614.7).\n\n\n___\n>\"I suggest that authors provide visualizations of the original features and clustering features in the paper or in the appendix, if possible.\"\n\nA: Thanks for your suggestions; we have included the related visualizations accordingly in Figure 4 (b) of the supplementary.",
" We thank the reviewer for the careful reviews and constructive suggestions. We answer the questions as follows.\n\n___\n>\"From Table 2, I see that the proposed operators actually have limited performance when increasing the downsampling stride. I think this limitation is normal for non-parametric modules, but I also think the author should also compare with the other parametric methods such as [47,52,55] here to give a comprehensive discussion.\"\n\nA:\n👉 Comparison with TokenPooling[47]/DynamicViT[52]/TokenLearner[55]:\n1. [52] and [55] are **parametric** that introduce additional parameters while [47] and our approach are **non-parametric**. For example, [52] introduces a trainable prediction module to estimate the importance score of each token, [55] uses a learnable convolution to predict a group of spatial attention maps and generates a set of token vectors accordingly.\n2. [47] and [52] focus on accelerating vision transformers for only image classification tasks. **They can not be applied for dense prediction tasks directly as they only keep a small set of selected/clustering tokens, thus losing the spatial information that is necessary for dense prediction tasks**. Our approach maintains the spatial information and can be used to **accelerate both image classification and various dense prediction tasks**, especially segmentation, object detection, and depth estimation.\n3. [47] and [52] only reduce the number of tokens while [55] and our approach further propose to increase the number of tokens with either TokenFuser or token reconstruction layer, where the difference is that [55] repeatedly applies the combination of TokenLearner & TokenFuser before/after each transformer layer while our approach only applies token clustering layer & reconstruction layer once through the whole transformer. Therefore, our approach introduces a much smaller additional overhead caused by the clustering & reconstruction operations when compared to [55].\n\n\n👉 Comparison experiments: considering both [47] and [52] can not be used for dense prediction tasks directly, we **apply our token reconstruction layer to adapt the most representative [52] for segmentation tasks**. We illustrate more details in Figure 2 (b) of the supplementary and report the comparison results as follows:\n\n| Cluster method | GFLOPs | Parametric | Fine-tuning | mIoU |\n| ----------- | ----------- | ----------- | ----------- | -----------:|\n| DynamicViT($\\rho=0.7$) | $455.6$ | ✔️ | ✔️ | $43.79$ |\n| DynamicViT($\\rho=0.8$) | $513.3$ | ✔️ | ✔️ | $46.78$ |\n| DynamicViT($\\rho=0.9$) | $583.0$ | ✔️ | ✔️ | $46.95$ |\n| Ours ($\\mathrm{h}\\times\\mathrm{w}=8\\times8$) | $274.0$ | ❌ | ❌ | $32.13$ |\n| Ours ($\\mathrm{h}\\times\\mathrm{w}=16\\times16$) | $315.1$ | ❌ | ❌ | $48.21$ |\n| Ours ($\\mathrm{h}\\times\\mathrm{w}=24\\times24$) | $388.2$ | ❌ | ❌ | $51.32$ |\n\nAccording to the above results, we can see that our approach consistently outperforms DynamicViT with more accurate segmentation results while requiring much fewer GFLOPs.",
" We thank the reviewer for the careful reviews and constructive suggestions. We answer the questions as follows.\n\n___\n>\"The Token Clustering Layer in the paper is similar to the algorithm cited in [30], where the center of the superpixel is obtained and then iterated until the result of the latest clustering is obtained. Although better results are obtained, the method is only introduced and the innovation point is not sufficient. And I think the author's description of the two modules is not detailed enough and should be further illustrated with pictures. The experimental part should be more experimental on multiple datasets for each task.\"\n\nA: Thanks for your constructive comments! \n\n👉**Concerns about the innovation**: we agree that the token clustering layer is similar to [30]. We would like to stress the importance of our token reconstruction layer and we show that it can be used to adapt the very recent EViT[1] and DynamicViT[2] for dense prediction tasks in the supplementary. Please refer to the **novelty & contribution** in the general response.\n\n👉**Concerns about the details**: we have followed your suggestions and illustrated the details of the two modules in Figure 1 in the revised supplementary. Besides, we also provide the PyTorch example implementations in the supplementary to clarify more details.\n\n👉**Concerns about the experiments**: we have already reported the object detection results on COCO, segmentation results on ADE20K/PASCAL-Context/Cityscapes, and monocular depth estimation results on KITTI/NYUv2. We would like to improve the experiments if you could provide more detailed suggestions.\n\n[1] Liang, Youwei, et al. \"EViT: Expediting Vision Transformers via Token Reorganizations.\" ICLR 2022.\n\n[2] Rao, Yongming, et al. \"Efficient Vision Transformers and CNNs with Dynamic Spatial Sparsification.\" NeurIPS 2021\n\n___\n>\"The use of the superpixel method, while reducing the size of the token, must lead to a loss of semantic information. And after the reconstruction of the layers, there must be noise in the regained token. My question is how the loss of semantic information in each layer compares to that of the original ViT after using the two modules proposed by the authors.\"\n\nA: We take Segmenter + ViT-L (on ADE$20$K, $\\alpha=10$) as an example and analyze the loss of semantic information by calculating the cosine similarity between the reconstructed high-resolution feature $\\mathbf{Z_{\\alpha+\\beta}}$ and the original high-resolution feature $\\mathbf{Z^{original}_{\\alpha+\\beta}}$:\n\n\n| $\\alpha+\\beta$ | $12$ | $14$ | $16$ | $18$ | $20$ |$22$ |$24$ |\n| ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | -----------:|\n| $cos(\\mathbf{Z_{\\alpha+\\beta}}, \\mathbf{Z^{original}_{\\alpha+\\beta}})$ | $0.94$ | $0.95$ | $0.96$ | $0.96$ | $0.96$ | $0.96$ | $0.96$ | $0.96$ |\n\nAccordingly, we can see that our approach well maintains the semantic information and suffers less from the noise during the reconstruction process.\n\n\n___\n>\"I personally believe that the work in this article has contributed to reducing the computational burden of the Transformer. However, the two modules proposed in the article should be further verified and reasoned in detail to show the reader more details.\"\n\nA: We have included more details in the supplementary according to your suggestions as follows:\n\n1. Figure 1 presents the detailed pipeline of our approach.\n2. 
Listing 1 & 2 present the example implementations based on PyTorch.\n\nWe would like to further improve the details if you have any further advice.",
" We thank the reviewer for the careful reviews and constructive suggestions. We answer the questions as follows.\n\n\n>\"This paper fulfills with texts and numbers. I think that authors should provide a visual comparison for detection, segmentation, and depth estimation\"\n\nA: Good points! We have followed your suggestions and included some visual comparison results, from the ADE$20$K segmentation benchmark, in the revised supplementary. Specifically, Figure 4 (a) presents the visual comparisons of our approach under different settings of cluster size on the ADE$20$K semantic segmentation task. We would like to include more visual comparisons on both detection and depth estimation benchmarks in the final revision.\n\n___\n>\"According to my understanding, the author speeds up Transformer by reducing the token number and restoring feature representation from these clustered tokens. However, I do not understand why this approach could avoid finetuning. Could authors provide detailed discussion for the reason why their acceleration method can avoid finetune?\"\n\nA: The reasons include the following two aspects:\n\n1. Our token clustering/reconstruction layers are **non-parametric**, thus avoiding retraining any additional parameters.\n2. The reconstructed high-resolution representations **maintain high semantic similarity** with the original high-resolution representations.\n\nWe take Segmenter + ViT-L (on ADE$20$K, $\\alpha$=10) as an example and analyze the semantic similarity between the reconstructed high-resolution feature $\\mathbf{Z_{\\alpha+\\beta}}$ (with our approach) and the original high-resolution feature $\\mathbf{Z^{original}_{\\alpha+\\beta}}$ (with original ViT-L):\n\n| $\\alpha+\\beta$ | $12$ | $14$ | $16$ | $18$ | $20$ |$22$ |$24$ |\n| ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | -----------:|\n| $cos(\\mathbf{Z_{\\alpha+\\beta}}, \\mathbf{Z^{original}_{\\alpha+\\beta}})$ | $0.94$ | $0.95$ | $0.96$ | $0.96$ | $0.96$ | $0.96$ | $0.96$ | $0.96$ |\n\nIn the above Table, $\\alpha$ represents the inserted layer index of our token clustering layer, and $\\alpha+\\beta$ represents the inserted layer index of our token reconstruction layer. Accordingly, we can see that the cosine similarities are consistently high across different transformer layers between the reconstructed high-resolution feature $\\mathbf{Z_{\\alpha+\\beta}}$ (with our approach) and the original high-resolution feature $\\mathbf{Z^{original}_{\\alpha+\\beta}}$. In other words, **our approach well maintains the semantic information carried in the original high-resolution feature maps and thus is capable of avoiding finetuning.**",
" We thank all the reviewers for the careful reviews and constructive suggestions. We acknowledge the positive comments such as \"the setting of experiments is clear and the number of experiments is adequate\" (Reviewer Wt6a), \"this article has contributed to reducing the computational burden\" (Reviewer sJzo), \"this paper is technically sound and achieves good performance\" (Reviewer kZ4Z), and \"clearly motivated, well written and with sufficient experimentation (Reviewer Ldum)\".\n\nAbove all, we clarify the concerns from the following aspects:\n\n\n> Motivation:\n\nOur work aims to accelerate various advanced **SOTA large-scale vision transformers for dense prediction tasks without any additional re-training or fine-tuning** as they tend to be very expensive and require a lot of computation cost. Most of our experiments can be finished with **only 1$\\times$ 16G V100 GPU**. However, if we need to re-train or fine-tune these large-scale vision transformers such as ViT-H and SwinV2-L, we will need to access at least **8$\\times$ 32G V100 GPUs** considering their expensive training computation cost and huge GPU requirement. For example, even only fine-tuning SwinV2-L + HTC++ on COCO for $5$-epochs requires more than **240$\\times$ GPU hours** ($30$ hours with $8{\\times}$ 32G V100 GPUs).\n\n> Importance:\n\n**Large-scale vision transformer** models become increasingly important for dense prediction tasks and fine-tuning large-scale models becomes more and more expensive and impractical. Therefore, we hope our work could inspire more research efforts into exploring how to **accelerate large-scale vision transformers for dense prediction tasks without any additional re-training or fine-tuning** while maintaining the performance as much as possible.\n\n> Novelty & Contribution:\n\n👉The novelty of our work lies in two aspects:\n\n(1) We are the **first to study how to accelerate SOTA large-scale vision transformers for dense prediction tasks without fine-tuning** (e.g., \"Mask2Former + Swin-L\" or \"SwinV2-L + HTC++\"). Besides, our approach also achieves much better accuracy and speedup trade-off when compared to the very recent ACT [1] that is based on a clustering attention scheme (Reviewer Ldum);\n\n(2) Our **token clustering & reconstruction layers are capable of maintaining the semantic information encoded in the original high-resolution representations**. This is the very most important factor to avoid fine-tuning.\n\n\n👉The key contribution of our work is in **designing an effective combination of a token clustering function $f(\\cdot)$ and a token reconstruction function $g(\\cdot)$, which aims to maximize the cosine similarity between the reconstructed high-resolution feature maps and the original ones without fine-tuning**:\n\n$$\\max_{f,g}\\space cos(\\mathcal{T}(\\mathbf{Z_\\alpha}), g(\\mathcal{T}(f(\\mathbf{Z_\\alpha})))),$$\n\nwhere $\\alpha$ represents the inserted layer index of our token clustering layer, $\\mathcal{T}(\\mathbf{Z_\\alpha})$ and $g(\\mathcal{T}(f(\\mathbf{Z_\\alpha})))$ represent the original and reconstructed high-resolution feature maps, respectively. We implement $f(\\cdot)$ and $g(\\cdot)$ with the token clustering layer and the token reconstruction layer, respectively. $\\mathcal{T}(\\cdot)$ represents the combination of transformer layers between the token clustering layer and the token reconstruction layer.\n\n👉**The design of our token reconstruction layer is the key and not straightforward essentially. 
We also show that our token reconstruction layer can be used to adapt the very recent EViT[2] and DynamicViT[3] for dense prediction tasks in the supplementary.**\n\n[1] Zheng, Minghang, et al. \"End-to-end object detection with adaptive clustering transformer.\" BMVC 2021.\n\n[2] Liang, Youwei, et al. \"EViT: Expediting Vision Transformers via Token Reorganizations.\" ICLR 2022.\n\n[3] Rao, Yongming, et al. \"Efficient Vision Transformers and CNNs with Dynamic Spatial Sparsification.\" NeurIPS 2021.\n\n> Details & Visualizations:\n\nWe have **revised the supplementary material** to include (i) the details of our proposed two modules in Figure 1 and Listing 1 & 2, (ii) the details of how to adapt DynamicViT for dense prediction tasks in Figure 2 (b), (iii) rich visualizations of both segmentation predictions and feature maps in Figure 4, and (iv) attention maps associated with different sampled local neighboring positions in Figure 5. **We mark the modifications with blue-colored text**.",
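The cosine-similarity objective stated in the general response above can be evaluated empirically as in the following minimal sketch; the tensor shapes and function name are assumptions, not the authors' code.

```python
import torch.nn.functional as F

def mean_token_cosine(z_orig, z_rec):
    """z_orig, z_rec: (num_tokens, dim) features taken at the same transformer layer.
    Returns the mean per-token cosine similarity between the two feature maps."""
    return F.cosine_similarity(z_orig, z_rec, dim=-1).mean().item()
```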
" This paper accelerates vision transformers for dense prediction without finetuning. This is done by (i) using the token clustering layer to cluster the neighboring tokens; (2) using the token reconstruction layer to re-create the high-resolution representations. The result of proposed method achieves state-of-the-art performance on five dense prediction tasks This paper proposes two layers (token clustering layer and token reconstruction layer) and apply them into Swin Transformer. The two ideas are simple and easy to follow. The setting of experiments is clear and the number of experiments is adequate.\n\n\nThis paper fulfills with texts and numbers. I think that authors should provide visual comparison for detection, segmentation and depth estimation\n According to my understanding, author speed up Transformer by reducing token number and restore feature representation from these clustered tokens. However, I do not understand why this approach could avoid finetune. Could authors provide detailed discussion for reason why their acceleration method can avoid finetune. ",
" When using the Transformer for dense image prediction, detection, and segmentation, the large number of tokens can impose a heavy computational burden. Therefore, the core of this paper is to propose clustering tokens to obtain a low-resolution representation to reduce the computational burden, and to perform attention on the low-resolution token cluster, followed by a token reconstruction layer to reconstruct the high-resolution token representation. The authors' experiments verify that the proposed approach significantly improves the FPS and reduces the GFLOPS index for segmentation, detection, and depth estimation tasks when using Transformer with only a small loss of AP. Strengths:\nThis paper introduces the superpixel scheme into the Transformer structure, which really reduces the computational burden on the Transformer. The author's solution ensures that the entire computational process is differentiable and is a practical solution.\n\nWeaknesses\nThe Token Clustering Layer in the paper is similar to the algorithm cited in [30], where the center of the superpixel is obtained and then iterated until the result of the latest clustering is obtained. Although better results are obtained, the method is only introduced in and the innovation point is not sufficient. And I think the author's description of the two modules is not detailed enough and should be further illustrated with pictures. The experimental part should be more experimental on multiple datasets for each task. The use of the superpixel method, while reducing the size of the token, must lead to a loss of semantic information. And after the reconstruction of the layers, there must be noise in the regained token. My question is how the loss of semantic information in each layer compares to that of the original ViT after using the two modules proposed by the authors I personally believe that the work in this article has contributed to reducing the computational burden of the Transformer. However, the two modules proposed in the article should be further verified and reasoned in detail to show the reader more details.",
" This paper introduces two non-parametric operators to efficiently down-sampling some intermedia tokens to accelerate the large-scale vision transformers for dense prediction. One is a token clustering layer to decrease the number of tokens and the other one is a token reconstruction layer to recover the number of tokens. These two operators need no training or finetuning, while the related experimental results prove the effectiveness of them in typical benchmarks. Strengths:\nThe motivation of the paper is clear and solid. While the current existing works for efficient transfomer need training or finetuning, this work introduces two non-parametric operators and brings some meaningful insight for this topic.\nThe experimetal results are sufficient to support the proposed work. \nThe paper is well written and easy to follow.\n\nWeaknesses:\nFrom Table 2, I see that the proposed operators actually have limited performance when increasing the downsampling stride. I think this limitation is normal for non-paramettic modules, but I also think the author should also compare with the other parametric methods such as [47,52,55] here to give a comprehensive discussion. \n\nOverall, I think this paper is technically sound and achieve good performance. Thus I vote for borderline accept. None. See the above discussion about weaknesses. ",
" The authors observed that the presence of local spatial representation redundancy and high definition representation in the vision transformer contribute to dense prediction. Therefore, this paper proposes the token clustering layer and the token reconstruction layer to reduce computation and memory cost. The proposed modules are non-parametric operations and are applicable to different models and multiple dense prediction tasks. Strengths:\n1. The paper is clearly motivated, well written and with sufficient experimentation.\n2. The proposed token clustering and reconstruction are non-parametric operations and therefore can be implemented directly on the well-trained model without additional fine-tuning. Taking existing trained models, the proposed method significantly improves the efficiency of the large vision transformer on the dense prediction task at a low cost in terms of precision.\n3. The authors' observations are insightful, and the methods proposed are clever and easy to implement. In addition to the five dense prediction tasks in the paper, I think the proposed approach also has the potential to be extended to other vision transformer-based tasks.\n4. The authors also provides the extension strategy and experiments for Swin-Transformer in addition to ViT.\n\nWeaknesses:\n1. Since the authors claimed that there is redundancy in the tokens of the vision transformer and that the proposed method is based on clustering, some visualizations could better represent this observation.\n2. Clustering attention has been explored by previous approaches such as clustering attention[1] and adaptive clustering attention[2]. Specifically, ACT can slim the high-resolution transformer without retraining. The author is highly encouraged to discuss relationship between previous works. \n\nOverall, I think the proposed approach is straightforward and effective. However, the paper lack comparison with previous similar approaches. The author is encouraged to discuss related works especially on clustering attention topic. \n[1] Fast transformers with clustered attention\n[2] End-to-end object detection with adaptive clustering transformer\n[3] SMYRF: Efficient Attention using Asymmetric Clustering I suggest that authors provide visualizations of the original features and clustering features in the paper or in the appendix, if possible. I do not see any potential negative societal impact for this work."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
4,
4,
4
] | [
"s-kzfNBDej7",
"2-KNyTq5j_",
"QFf9S0S2A8K",
"8CrxXqj5nQ0",
"EYopl8wEyee",
"r6AT4zJCIEq",
"s-kzfNBDej7",
"2-KNyTq5j_",
"QFf9S0S2A8K",
"nips_2022_9ND8fMUzOAr",
"nips_2022_9ND8fMUzOAr",
"nips_2022_9ND8fMUzOAr",
"nips_2022_9ND8fMUzOAr",
"nips_2022_9ND8fMUzOAr"
] |
nips_2022_TjVU5Lipt8F | When Privacy Meets Partial Information: A Refined Analysis of Differentially Private Bandits | We study the problem of multi-armed bandits with ε-global Differential Privacy (DP). First, we prove the minimax and problem-dependent regret lower bounds for stochastic and linear bandits that quantify the hardness of bandits with ε-global DP. These bounds suggest the existence of two hardness regimes depending on the privacy budget ε. In the high-privacy regime (small ε), the hardness depends on a coupled effect of privacy and partial information about the reward distributions. In the low-privacy regime (large ε), bandits with ε-global DP are not harder than the bandits without privacy. For stochastic bandits, we further propose a generic framework to design a near-optimal ε global DP extension of an index-based optimistic bandit algorithm. The framework consists of three ingredients: the Laplace mechanism, arm-dependent adaptive episodes, and usage of only the rewards collected in the last episode for computing private statistics. Specifically, we instantiate ε-global DP extensions of UCB and KL-UCB algorithms, namely AdaP-UCB and AdaP-KLUCB. AdaP-KLUCB is the first algorithm that both satisfies ε-global DP and yields a regret upper bound that matches the problem-dependent lower bound up to multiplicative constants. | Accept | This paper studies the problem of multi-armed bandits under differential privacy. The reviewers are all positive about the results and presentation of the paper. | test | [
"rFMQT2jwMvh",
"asXUKzGs6e",
"jDPR1mgoGIA",
"eBkhpBMyuj",
"2t_Q2uVrWl3",
"NBnkMPY1Ge",
"Sc5-Mz_xSii",
"iBEILsLK1b",
"VFaZDiLN0V",
"T7kwfq9PnTq",
"pgw0RISwMgJB",
"qBkLCsnGULi",
"CWZ5OY39dN1",
"DdaT3b8IMvk",
"2xNRAmsJdKV",
"yOj4MnL4_mf",
"29fdSce3mVr",
"IYQwvWwpjt",
"fikrpuDE2u6",
"_jD7zvtP1Nq"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We are glad that all of your concerns have been addressed and thank you for raising your score. We will add a paragraph explaining the details discussed here and the comparaison to [1, 20, R1, 3] after Theorem 2.",
" Thanks for the update.\n\nI hope the authors can add one paragraph to carefully discuss the current situation of minimax lower bounds for private bandits so as to better sell the established result in the paper. This is important since it seems to me that there exists some misunderstanding in [20, R1] based on the authors' explanations. \n\nI have updated my score accordingly.\n\n",
" We are glad that some of your concerns are clarified and thank you for further feedback.\n\n**More details on the minimax lower bounds**. The exact result from [1] used by [20, R1] is the Corollary 15, which uses Claim 14, that has a \"consistency assumption\". Indeed, this is a problem dependent lower bound, improperly used by [20, R1] in their minimax lower bounds (as they are applying it without mentioning any consistency assumption). This is one of the reasons that motivates our work to refine and explicit the lower bounds in private bandits literature, in a resonance to the existing results for non-private bandits [ref. Lattimore and Szepsvari, Bandit algorithms, 2020].\n\n[3] uses the same structure of proof as our work. However, their KL-divergence decomposition in Lemma 6 is not tight compared to ours, which makes our lower bound tighter than theirs. Also, [3] needs to assume that the reward distributions are Lipschitz continuous and the corresponding Lipschitz constant appears in their bound. We do not need such assumption.\n\nAdditionally, we believe that our minimax lower bound is achievable as we have an algorithmic sketch that achieves this lower bound and we are working on to improve it.\n\nWe thank you again for your careful review and following interactions. We hope to have answered your concerns and we pledge to include relevant discussions in the revised version. If you feel that your concerns have been answered, we would appreciate it if you can adjust your score accordingly.",
" Thanks for the authors' update.\n\n> Neighboring relation...\n\nThanks for pointing out Lemma 1 of [3], which seems to resolve my concern. Again, the key idea is nothing but chain rule of conditional probability:)\n\n> Minimax lower bound...\n\nBased on my understanding of the authors' response, it seems that the lower bound in the current work [20, R1] is not **minimax**, am I correct? Do the authors in [20, R1] mess up with the problem-dependent bound with the minimax bound? In particular, to my best knowledge, the consistency assumption is often only used to derive problem-dependent lower bounds. (BTW, can the authors give an exact pointer of the result in [1] that has been used in [20, R1]?)\n\nBTW, in your reference [3], there also exists a result for minimax lower bound under global DP, which is very different from the current one. It seems that there is no comparison with the result in [3] and can the authors comment on this?",
" We thank the reviewer for further queries and feedback.\n\n**Neighboring on $r_t$ vs on $f_t$**. The main concern is that the notion of neighboring used in our work relies on $r_t$ (a particular value of the **received** reward), while a stronger notion of neighboring would consider the whole vector $f_t$ (all potential answers of one user). Lemma 1 of [3] shows that, in a bandit setting, these two notions are in fact equivalent. Intuitively, if a bandit algorithm is private with respect to all changes in the received feedback, for all possible sequences of actions, it is private for all the potential answers too. This validates our claim that [R1], [R2] and our work use the same notion of DP. We refer to [3] as it discusses in depth all possible definitions of DP for bandits and unifies them under global (our work) and local DP.\n\n**More details on the comparison of the minimax lower bounds to [20] and [R1]**. We thank the reviewer for the clarification. Indeed, our minimax lower bound is for Gaussian distributions and adapting the proof to Bernoulli distributions seems to not recover the extra $\\log(T)$ factor. A potential reason is the fact that both [20] and [R1] rely on the result of [1], which has a strong assumption that the policy is a consistent one. Our result is a more \"worst-case\" lower bound, since it is a minimax lower bound on the whole class of $\\epsilon$-global DP policies (not only consistent ones). The standard minimax lower bounds and corresponding proofs in non private bandits [Theorem 15.2, Lattimore and Szepsvari, Bandit algorithms, 2020] follow the similar setting like ours rather than the added consistency assumption of [1, 20, R1].",
" Thanks for the timely response. \n\n> $\\epsilon$-global DP is the exact notion used by [R2] in the section Private Online Learning: Bandit Setting by taking $w_t = a_t$ and $f_t(w_t) = -r_t$.\n\nThey are **NOT** the same based on my understanding. As the authors already note that here $r_t = - f_t(w_t)$ (the sign does not matter for this discussion). In [R2], the neighboring relation of DP is on $f_t$ (see their Eq. 2), i.e., the function itself while in your current paper, the neighboring notion is on $r_t$, which is just one particular value. The one in [R2] is a proper one, which can be used to capture real-life applications, i.e., to capture the person rather than a single feedback.\n\n> This proves that the notions of DP used in [R1], [R2] and our work are all the same.\n\nI don't think so. Even let $H=1$ and $S=1$, the one in [R1] is not the same as yours. In [R1], they directly consider a neighboring relation on users, and more importantly, each user is identified by a function -- that is, the reward response the user can give to all the possible actions (see the third line after Algorithm 1 in http://proceedings.mlr.press/v119/vietri20a/vietri20a.pdf).\n\n> minimax lower bound\n\nLet me clarify my question a little bit. The current minimax lower bound in [20, R1] is $\\Omega(\\sqrt{KT} + \\frac{K\\log T}{\\epsilon})$ (for the class of Bernoulli distributions). The current paper establishes $\\Omega(\\sqrt{KT} + \\frac{K}{\\epsilon})$ for Gaussian. **Is the difference of $\\log T$ in the privacy term due to different class of distributions?** In other words, if one considers the Bernoulli distribution, can the current proof technique yield the same minimax lower bound as in [20, R1]? I tend to think it may not be the case. \n",
" Thanks for the response. My major concerns have been nicely addressed.",
" We thank the reviewer for the interest in our response and the corresponding follow-up questions.\n\n**More details on the meaningfulness of $\\epsilon$-global DP.** We thank the reviewer for mentioning the references [R1] and [R2] as our work uses the same setting of DP in both these papers. $\\epsilon$-global DP is the exact notion used by [R2] in the section **Private Online Learning: Bandit Setting**, by taking $\\omega_t = a_t$ and $f_t(\\omega_t) = - r_t$ in their definition (since $f_t$ are loss functions there). \n\nAlso, the JDP notion used by [R1] for MDPs reduces to $\\epsilon$-global DP when $H=1$ and $S=1$ (as pointed out by the reviewer himself in the follow-up question when asking to compare our minimax results to the ones in [R1]). The \"Episodic RL Protocol\" (Algorithm 1 in [R1]) captures perfectly the example of medicines and patients given above, when rather than interacting with the same user for a whole trajectory in RL, you only get one reward in MAB ($H=1$). In addition, the lower bounds proved in [R1] rely on a reduction of JDP in MDPs to $n=S-2$ MAB instances satisfying $\\epsilon$-global DP (Section C.4 in [R1]) and then using the lower bounds of [1], which are lower bounds for $\\epsilon$-global DP bandits. This proves that the notions of DP used in [R1], [R2] and our work are all the same.\n\n**The chain rule in the proof of the privacy guarantee.** We thank the reviewer for this precise remark. We implicitly use the exact intuition as in the proof of Theorem 3 in [R2] in our privacy proof: the tricky part (as pointed out by [R2]) is that the private means at a step $t$ depend adaptively on the **publicly** released actions before step $t$, but since those actions are already private, by \"adaptive\" post-processing, you can deal with them inside the algorithm as if they were fixed public data that do not leak any information.\n\nTo avoid confusion, we make our argument more formal by stating the exact calculations and chain rules used for proving the privacy.\n\n*Claim:* Fix two neighboring reward streams $r^T =\\{r_1, \\dots, r_T\\}$ and $r'^T = \\{r'_1, \\dots, r'_T\\}$. This implies that $\\exists j \\in [1, T]$ such that $r_j \\neq r_j'$ and $\\forall t \\neq j$, $r_t = r_t'$. We also fix a sequence of actions $a^T = \\{a_1, \\dots, a_T\\}$. We want to show that:\n$Pr(\\pi(r^T) = a^T) \\leq e^\\epsilon Pr(\\pi(r'^T) = a^T)$.\n\n*Proof sketch:*\n\n- Since $r^{j - 1} = r'^{j - 1}$, $Pr(\\pi(r^{j - 1}) = a^{j - 1}) = Pr(\\pi(r'^{j - 1}) = a^{j - 1})$.\n- Let $t_\\ell \\leq j < t_{\\ell + 1}$ and $t_{\\ell'} \\leq j < t_{\\ell'+ 1}$ be the episodes corresponding to $r^T$ and $r'^T$ resp. Since $r^{j - 1} = r'^{j - 1}$, we get that $\\ell = \\ell'$. Thus, $Pr(\\pi(r^{t_{\\ell + 1}}) = a^{t_\\ell + 1}) = Pr(\\pi(r'^{t_\\ell + 1}) = a^{t_\\ell + 1})$.\n- Let $\\tilde{\\mu}\\_{a_j,\\epsilon}^{\\ell}$ and $\\tilde{\\mu}\\_{a,\\epsilon}^{' \\ell}$ be the private means of arm $a_j$ computed in the episode $[t_\\ell, t_{\\ell + 1}]$, by the Laplace mechanism, for every interval $I \\in \\mathcal{R}$, $Pr(\\tilde{\\mu}\\_{a_j,\\epsilon}^{\\ell} \\in I) \\leq e^\\epsilon Pr(\\tilde{\\mu}\\_{a_j,\\epsilon}^{'\\ell} \\in I)$.\n- Finally, since $\\{r_{j+1}, \\dots, r_T \\} = \\{r'_{j+1}, \\dots, r'_T \\}$, $Pr(\\pi(r^T) = a^T | \\tilde{\\mu}\\_{a_j,\\epsilon}^{\\ell} \\in I) = Pr( \\pi(r'^T) = a^T | \\tilde{\\mu}\\_{a_j,\\epsilon}^{'\\ell} \\in I)$ \n\nNow, we conclude the argument by using a chain rule. 
QED.\n\nWe refer to the proof of Theorem 4.1 (DP-SE) in [20] that explains the intuition behind the proof presented here.\n\n**Comparison of the minimax lower bound to [20] and [R1].** Both minimax lower bounds in [20] and [R1] are for Bernoulli distributions since they rely on the lower bound of [1], which is specific to Bernoulli distributions. However, our Theorem 2 is a minimax lower bound for Gaussian distributions (line 639 in the appendix explains the class of distributions used; we will add a line defining it in the main paper too), which is a new result, independent of [1].",
" Thanks for the response.\n\nIn my opinion, a more proper definition for DP bandit should consider the change of one user (note that not the standard user-level DP). For example, in the DP tabular MDP paper [R1] (which is a strict generalization of MAB) considers this. A more older but classic paper on DP (adversary) bandit is [R2], where each user (say patient $t$) is represented by a function $f_t$ (note that stochastic bandit can be viewed as a special case of adversary bandit). \n\nRegarding the proof for the privacy guarantee, I think one needs to formally use the adaptive composition (i.e., chain rule of conditional probability). For example, see the proof of Theorem 3 in [R2] (in particular the subtlety pointed out by the authors). I think this chain rule is required even one considers the current DP definition. \n\nCan the authors comment on the above discussions?\n\nA follow-up question. After a closer look at the current minimax lower bound for MAB established in this paper, it seems to me that it is looser by factor $\\log T$ than the one used in [20] (see remark after Theorem 4.4) and the one established for the tabular MDP (which reduces to MAB by setting $H=1$ and $S=1$) in [R1]. \n\n\nCan the authors comment on the above observation?\n\nReferences:\n\n[R1] Vietri, G., Balle, B., Krishnamurthy, A. and Wu, S., 2020, November. Private reinforcement learning with pac and regret guarantees. In International Conference on Machine Learning (pp. 9754-9764). PMLR.\n\n[R2] Guha Thakurta, A. and Smith, A., 2013. (Nearly) optimal algorithms for private online learning in full-information and bandit settings. Advances in Neural Information Processing Systems, 26.",
" We thank the reviewer for the interest in our response and the corresponding follow-up questions.\n\n**Details of the adaptive episodes and privacy implications**\n\nTo clarify our algorithmic framework, let us consider the illustration of Example 2. In particular, let us move to step $t_4 = 7$. To compute the indexes of the two arms, the policy only uses the privatized means: $\\frac{r_4 + r_5}{2} + Lap(\\frac{1}{2 \\epsilon})$ for arm 1 and $r_6 + Lap(\\frac{1}{\\epsilon})$ for arm 2. In particular, the values of $r_1$, $r_2$ and $r_3$ will never be used again by the policy, to make any decision, at any step $t > t_4$. Intuitively, not using an individual's input is the highest privacy an algorithm could guarantee. Our framework achieves that thanks to the forgetting in the episodic scheme (one can find a similar trick in DP-SE [20]). In contrast, the tree mechanism in DP-UCB [18, 25] uses all the sequences of rewards in the history to compute its indexes. The value of $r_1$, for example, is affecting all the actions chosen by the algorithm, hence the sensitivity of the algorithm to a change in $r_1$ is very high.\n\nFor your second concern, all the rewards which are not part of the last episode are totally discarded (i.e. not even stored). The only statistics kept and used by the algorithm are the **privatized** means computed using the reward collected in the last active episode (e.g. the privatized mean for arm 1 at step $t_4=7$ is $\\frac{r_4 + r_5}{2} + Lap(\\frac{1}{2 \\epsilon})$). Since these statistics are already private (due to the addition of the corresponding Laplace noise), all the decisions made using them are private too by the post-processing property of DP. Consequently, there is no information leakage there.\n\n\n**Difference between $\\epsilon$-global DP and Joint Differential Privacy**\n\nJoint Differential Privacy [1] is a formalization of DP for contextual bandits, where the source of the sensitive information is both in the rewards and the contexts. This is not the case for stochastic and linear bandits, where the rewards are the only sensitive data. However, since stochastic bandits can be seen as a special case of contextual bandits with one unique context, the two definitions coincide in this case.",
" -- \nCan you elaborate a bit more on this idea of adaptive doubling and why it works intuitively? In particular, how are you keeping track of remote past information that was not a part of last episode? I would think that there is information leakage possible here. \n\n--\nHow is $\\varpesilon$-DP different from joint differential privacy in the streaming setting? ",
" We thank the reviewer for spending valuable time and for the constructive feedback.\n\n**Technical contributions in lower bounds beyond [1]:** Our technical contributions beyond [1] consist of: (a) extending the Karwa-Vadhan lemma (Lemma 6.1, [13]) to a sequential setting (Lemma 2), (b) presenting a novel sequential information processing lemma under $\\epsilon$-global DP that controls the difference between the outcome streams of a differentially private policy when interacting with two different bandit instances (Theorem 10), and (c) proposing a generic proof structure leading to new lower bounds for different bandit settings. These new technical tools, such as the sequential Karwa-Vadhan lemma or the proofing technique are generalisable to structured bandits, and have wider applications as pointed out by Reviewer 8hyR. We mentioned this in Technical Tools (line 78) and Related Work (line 88). We will clarify and emphasize these Technical Tools in the updated draft.\n\n**Reference to DP-TS:** We thank the reviewer for pointing us to Hu and Hegde (2022). It is indeed a good idea to compare our algorithms to DP-TS as TS methods are effective in practice. \nHowever, given the timeframe (the article appeared on OpenReview in April) and the fact that it's still unpublished work (to appear in UAI 2022), it was not possible for us to be aware of this reference while doing our work. Here, we provide a statement that we would like to add in the Related Work section while referring to Hu and Hegde (2022):\n\n\"DP-TS [Hu and Hegde, 2022] aims to achieve global DP with a Thompson sampling-based approach. AdaP-KLUCB achieves our problem-dependent lower bound with Kullback-Leibler (KL) indistinguishability gap, while DP-TS cannot. Even in non-private bandits, TS is not known to achieve the Kullback-Leibler (KL) indistinguishability gap while KL-UCB is known to achieve it. Thus, From a theoretical point of view, AdaP-KLUCB incurs lower regret than DP-TS.\"\n\n**Discussion on $\\alpha > 3$:** We refer to the general comment section for a detailed discussion on choosing $\\alpha$, its implications, and possible future works.\n\n**Step-shape decreasing pattern in Figure 3:** We would like to thank the reviewer for this wonderful insight. As the step-shape decreasing pattern in Figure 3 is indeed present even in the low privacy regime ($\\epsilon < 0.3$), we agree that stopping our experiments at $\\epsilon = 1$ might lead to confusion regarding the evidence. We re-run the experiment with the x-axis ($\\epsilon$) expended till $\\epsilon = 20$ and our claim is still validated as the regret remains constant with respect to the x-axis (i.e. $\\epsilon$). We will include the new figure in the final version.",
" We would like to thank you for your time, thorough feedback, and the kind words about the comprehensiveness of our study. \n\n**Meaningfulness of $\\epsilon$-global DP to study privacy in bandits:** In order to clarify our setting, we provide an example.\n\nLet us consider that we want to compare the effectiveness of $k$ medicines (corresponding to $k$ arms) by testing them on $T$ patients. At each step $t \\in [T]$, we choose a medicine $a_t \\in [k]$. We test it on a new patient $\\texttt{patient}_t$ and obtain a reward $r_t$, i.e. whether the patient is cured (say $1$) or not ($0$). From a bandit algorithm perspective, the goal is to maximise the cumulative reward, i.e. the number of patients getting cured. From the privacy point of view, the goal is to construct a randomized policy, that in a parallel universe with a different patient at some step $t \\in [T]$, still recommends the same sequence of medicines. \nIn this case, the output of the bandit algorithm is the choices of medicines and its input is the reactions (cured/not cured) of the $T$ patients to the corresponding medicines. In this setting, the definition of neighbouring inputs and global DP used in our paper and also in [18,25,20,3,12] makes sense.\n\n\nFrom a high-level perspective, the confusion as mentioned in the review arises if the rewards are perceived as random variables. In that case, rewards are coupled with actions under a stochastic assumption. In contrast, in the $\\epsilon$-global DP definition, the sequence of rewards is a fixed \"instantiation\" of these random variables. A randomised policy takes this instantiation of reward sequence as an input to produce a sequence of actions. Specifically, Sec 2.1. of [18] addresses your concern by pointing out that \"we cannot use any stochastic assumption on the reward functions for privacy guarantee\". We refer to Section A of the appendix where we rigorously define the bandit policy (the function to make DP), its input (a sequence of instantiated rewards) and its output (a sequence of actions). With this definition, the neighbouring relation among inputs as in our paper and previous works stays valid.\nWe also refer to [3] for details that compares different possible definitions of DP for bandits and unifies them under global (our work) and local DP.\n\n**Proof of the privacy guarantee:** We would like to thank the reviewer for such a detailed remark. We briefly explain the validity of our proof here.\n\nIt is indeed a non-trivial argument to handle the batch schedule and the intuition is related to the first concern as pointed out by the reviewer. To prove the privacy of the algorithm, we fix two neighbouring sequence of rewards $r^T = \\{ r_1, \\dots, r_T \\}$ and $r'^T = \\{r'_1, \\dots, r'_T\\}$. We also fix a sequence of actions $a^T = \\{a_1, \\dots, a_T\\}$. \nThe goal would be to prove that: $Pr(\\pi(r^T) = a^T) \\leq e^\\epsilon Pr(\\pi(r'^T) = a^T)$.\nThe episode schedule, i.e $t_1 < t_2 \\dots <t_\\ell$ could be directly inferred from the fixed sequence of actions $a^T$ which is indeed public information. Then, we use Lemma 1 with this fixed schedule and post-processing property of DP to conclude the privacy proof. We will add this comment in the privacy proof section of the updated draft.",
" We would like to thank the reviewer for the time spent reviewing, careful reading, and kind words about the novelty and the significance of the contributions. It is really encouraging for us.\n\n**The tree-based algorithm and doubling the episode:** As suggested by the reviewer, it is indeed our algorithmic framework that allows us to achieve $\\epsilon$-global DP without the need for either composition or the tree mechanism. Specifically, having episodes with forgetting (i.e each mean only uses samples from the last active episode), allows us to achieve the same privacy guarantee as the tree mechanism ($\\epsilon$-DP) while adding less noise. The main difference is that the tree mechanism keeps track of all the history (since its original use is for counting queries, where all the history is important) and hence has larger sensitivity to a change in the rewards. In fact, DP-UCB [18, 25] leverages a tree-based mechanism but has an extra multiplicative $\\log(T)^{1.5}$ compared to the regret lower bound. Our goal in this work was to get rid of the extra factor. The main intuition behind our algorithmic framework is that for bandits, it's possible to only access a part of the history (the last active episode of each arm) and still achieve low regret. We formalize this intuition in Lemma 1. \n\n**Comments on $\\alpha$:** We refer to the general comment for a detailed discussion on choosing $\\alpha$, its implications, and possible avenues for improvement.\n\n**Limitation: Extension to contextual bandits:** As you have aptly pointed out, extending our algorithmic framework to contextual bandits is an interesting future work that we have mentioned in the paper and are currently investigating. You are right that adapting the doubling episode scheme to contextual and structured bandits might be trickier as we have to be careful of the corresponding structure.",
" We would like to thank the reviewer for the time spent reviewing and for the precise remarks.\n\n**Generalisation of the existing lower bound for Bernoulli reward:** Our problem-dependent regret lower bound for stochastic bandits is indeed a generalization of [1] beyond Bernoulli reward distributions. We have mentioned that in Related Work (line 82), after stating Theorem 3 (line 200) and also as a remark in the appendix after the proof (line 689). We will also mention this in 'Our Contributions'.\n\nBut our lower bound (Theorem 3) also provides a novel observation that the difficulty of a bandit problem with DP in the high-privacy regime depends both on the TV-indistinguishability gap ($t_{inf}$) and $\\epsilon$. This interaction between privacy and partial information is not present in [1], which is a recurring observation of our lower bounds.\nWe refer to the general comment for further details regarding the novelty and impact of our lower bounds and proof techniques.\n\n**Gap between lower and upper bounds:** We thank the reviewer for these two suggestions that would make a great addition to our work. \nWe will add a comment explaining the difference in the \"multiplicative constant\" between the regret upper bounds of Adap-UCB and Adap-KLUCB compared to the lower bounds. We will mention it after stating the results of Theorems 7 and 8.\n\n**Instance-independent regret upper bounds:** Instance-independent or minimax regret upper bounds could be also provided for Adap-UCB and Adap-KLUCB and we will add them too. They are of order $O(\\sqrt{K T \\log(T) )} + \\frac{K \\log(T)}{\\epsilon})$.\n\n**Formatting: $I^\\epsilon_a$ and typo:** Thanks for pointing out these issues. We will introduce $I^\\epsilon_a$ in the comments used inside Algorithm 1 and refer to the corresponding equations. We will also proofread the paper to eliminate the typos.",
" We would like to thank the reviewers for acknowledging the strengths and soundness of the contribution as well as for their thoughtful comments and efforts towards improving the manuscript. In the following, we highlight general concerns of reviewers that were common and our effort to address these concerns. We then address comments specific to each reviewer by responding to them directly.\n\nAs pointed out in the reviews, our main goal is to propose a thorough examination of the well-studied problem of bandits with $\\epsilon$-global DP. We provide a generic proof technique that we use to generate four regret lower bounds (minimax and problem-dependent, for stochastic and linear bandits) that all reflect a phase transition behaviour depending on the privacy budget $\\epsilon$. We also propose an algorithmic recipe to make any index-based bandit algorithm $\\epsilon$-global DP, that we use to instantiate Adap-UCB and Adap-KLUCB and show that they achieve the problem-dependent regret lower bound. As also pointed out in the reviews, some of the theoretical contributions (extension of Karwa and Vadhan Lemma 2, sequential information processing lemma Theorem 10 and generic regret analysis of bandit algorithms with adaptive episodes Theorem 11) could be of general use beyond our work.\n\nNow, we would like to address two recurring comments:\n\n- **Lower bounds beyond generalizing [1]**: Our problem-dependent regret lower-bound for stochastic bandits indeed generalizes the lower bound of [1] beyond Bernoulli distributions. We have mentioned that in Related Work (line 82), after stating Theorem 3 (line 200) and also as a remark in the appendix after the formal proof (line 689). We will also mention this in 'Our Contributions' as suggested by Reviewer 9pTm.\n\n But our generic proof techniques to generate the lower bounds, the corresponding extension of the Karwa-Vadhan lemma, and cementing the fact that global DP bandit algorithms can achieve as low regret as non-private bandit algorithms in the low-privacy regime in different settings are 'non-trivial', 'novel', and can aid in 'wider applications' as reviewers kindly pointed out.\n\n For instance, our generic proof technique lead to three other 'new' lower bounds for stochastic bandits (minimax, Theorem 2) and linear bandits (minimax, Theorem 4 and problem-dependent, Theorem 5). These are novel bounds independent of the result in [1] and important to spearhead near-optimal global-DP bandit algorithms in these settings.\n\n Additionally, our problem-dependent lower bound for stochastic bandits (Theorem 3) provides a novel observation that the difficulty of a bandit problem with global DP depends on the TV-indistinguishability gap ($t_{inf}$). This was not known in [1]. Rather, the regret lower bound due to privacy in [1] ($\\frac{K \\log(T)}{\\epsilon}$) seems to be independent of the hardness of the bandit instance. Our result fills up this missing link between the interaction of privacy and partial information.\n\n- **Comments on $\\alpha$**: $\\alpha$ controls the width of the optimistic confidence bound. Specifically, it dictates that the real mean is smaller than the optimistic index with high probability i.e with probability $ 1 - \\frac{1}{t^\\alpha}$ at step $t$. \n The requirement that $\\alpha > 3$ is purely due to our analysis of the algorithm. This also happened in the classic bandit literature. [Bubeck and Cesa-Bianchi, 2012] required $\\alpha > 2$ for analyzing UCB-type algorithms. 
This condition was dropped for UCB after a more involved and improved technical analysis (\"On Upper-Confidence Bound Policies for Non-Stationary Bandit Problems\", Garivier and Moulines). But our main goal was to provide an algorithmic framework and analysis that can render any index-based bandit algorithm into a global DP one. Thus, we refrained from such a specific and involved analysis.\n \n Since the dominant term in the regret upper bound of both AdaP-UCB and AdaP-KLUCB is multiplicative in $\alpha$, $\alpha=1$ works better in practice, as shown in Figure 8. To be more specific, the requirement that $\alpha > 3$ is needed to use a sum-integral inequality to bound Term 2 in Step 3 (line 850). We leave relaxing this requirement for future work. We will add this discussion in the final draft.\n\n[Bubeck and Cesa-Bianchi, 2012] Sébastien Bubeck and Nicolò Cesa-Bianchi. \"Regret analysis of stochastic and nonstochastic multi-armed bandit problems.\" Foundations and Trends® in Machine Learning 5.1 (2012): 1-122.",
" This work investigates $\\epsilon$-global differential privacy for multi-armed bandits. For both stochastic and linear bandits, it derives refined instance-dependent and global lower bounds on the regret for differentially private learning, in that they quantify the burden of privacy in two distinct regimes. Crucially, in the low-privacy regime, the privacy constraint doesn't make the bandits problem any harder. Next, the work formulates a general $\\epsilon$-global strategy for stochastic bandits and shows that two algorithms based on it have regret upper bounds with privacy regime patterns like those in the lower bounds. This is confirmed empirically, where the two privacy regimes are evident. Strengths:\n- The paper touches contributes to a very important theme in the context of differential privacy for bandits - the cost of obtaining this privacy - and identifies two distinct regimes of privacy preservation. Empirical results confirm the theory.\n- It materializes a simple $\\epsilon$-global DP strategy applicable to existing index-based bandit algorithms. This strategy clearly incorporates features that are known/desirable in the context of DP algorithms (e.g. Laplacian noise, private empirical means using little data).\n\nWeaknesses: Please see questions. 1. It would be useful to clarify either in the abstract or in \"Our Contributions\" that the regret lower bounds effectively generalize the claims of [1] in generalizing it beyond Bernoulli to general distributions. My point is that there was prior work hinting at distinct regimes of privacy and I think it's a good idea to acknowledge that at the outset.\n2. It would be useful to discuss the gap between the lower bounds and the upper bounds for AdaP-UCB and AdaP-KLUCB. If feasible, instance-independent upper bounds would also be useful to have.\n3. In Algorithm 1 (line 253), please clarify where $I_a^\\epsilon$ comes from. You mention index-based algorithm but never actually define $I_a^\\epsilon$.\n4. Please fix some minor typos (e.g. \"Algortihm\") through the manuscript. Not applicable.",
" This paper studies the regret minimization problem for stochastic (Bernoulli and linear) bandits under the notion of global differentially privacy. This is a thorough study of the problem where the authors propose lower bounds quantifying fundamental problem hardness, an algorithmic framework and upper bound on the regret for the proposed algorithms under this framework. These results are supported by sound numerical findings. Some highlights: \n\n-- Two sets of lower bound results are provided: problem-dependent and minimax. The key lemma driving these results is a non-trivial extension of Karwa-Vadhan lemma which bounds the differential privacy when the data is generated from non-identical distributions. \n\n-- The algorithmic developments are novel from a bandit perspective more-so than from a privacy perspective, particularly, the idea of adaptive phases which is the key driver in ensuring the privacy of the algorithm. \n Strengths: \n\n-- Non-trivial novel theoretical contributions for both lower bound and upper regret bound. Some of the key results (for eg., Theorem 10) could be of independent interest and have wider application. The key implication of the lower bound is the phase transition behavior of bandits depending on the privacy budget. This has implications particularly because in the low-privacy regime, the problem is equivalent to the non-private setting.\n\n-- The general purpose framework proposed for algorithmic design unifies existing bandit algorithms. This is a much deeper contribution than proposing and analyzing a single algorithm. I believe this makes the contribution very fundamental.\n\n-- Numerical results are also done for a variety of privacy budget though it would interesting if they could demonstrate the lower bounds as well as study the dependence on $\\alpha$. A\n\nWeakness: \n\n-- More discussion regarding the parameter $alpha$ and how to tune it. A small discussion regarding this possibly in the main body of the paper. \n -- A natural idea would be to use a tree-based aggregation technique for adaptively keeping track of the means. It is not completely clear to me how adaptively using doubling horizon ensures the same privacy level. Further, the privacy proofs do not require composition which is surprising. I would like to be convinced if this is due to the definition of Global DP or a consequence of the algorithmic template. \n A contextual bandit extension would be nice esp. since the authors get rid of tree-based aggregation. But there the adaptive-doubling idea might not work since the minimum eigenvalue of the covariance matrix would need to be updated regularly in order to have consistent parameter estimate. ",
" This paper considers DP bandits under the central model. The authors present both minimax and problem-dependent lower bounds for both MAB and linear bandits. Then, they also establish a general method for designing private bandit algorithms based on non-private ones. Extensive numerical simulations are conducted to validate the theoretical results. **strengths**\n- A comprehensive study of lower bounds under the central model\n\n**Weaknesses**\n- The proof for the privacy guarantees in the upper bound part seems to need more care - My first concern is about the proof of the privacy guarantee in Section 4. The tricky part is that the adaptive batch schedule also depends on private data (rewards). Thus, an adversary can observe the batch schedule to infer the private data. More specifically, changing one reward in the sequence will also change the follow-up batch schedule. The standard proof cannot be directly applied here. I believe this can be handled by a careful but non-trivial argument, but I will first leave it to the authors in the rebuttal period. \n\n- My second concern is more general. This relates to the commonly-used notion of DP in bandits. In particular, the neighboring relation is about two neighboring reward sequences, i.e., differ in the reward value at only one particular time slot. This is inspired by the standard continual model of DP. However, this is not the right notion for DP in MAB. To see it, this notion requires that if one changes a reward at time $t$, the reward obtained at $t+1, t+2,...$ are still the same by the requirement of the neighboring definition. This is obviously not the case in MAB, since the policy would be different at $t+1, t+2,...$ due to the change of reward at time $t$, and hence the agent will pull different arms and hence different rewards will be observed. Note that this is in sharp contrast to the classic continual observation DP model where the typical task is simply to count the current counts in the online stream data, where there is no correlation between the data across time slots, and hence the standard neighboring relation makes sense in terms of privacy protection. I think one can also propose a more meaningful notion of DP in bandits (with only a minor change of the current proof), but again I will leave it to the authors in the rebuttal period.\n\n- In fact, the intuition for the above two questions are quite related. Yes, limitations and potential negative societal impacts have been addressed. ",
" This paper proves the minimax regret lower bound and problem-dependent regret lower bound for stochastic and linear bandits that quantify the hardness of bandits with $\\epsilon$-global DP. The results reveal interesting phase transition phenomenon in that in the high-privacy regime, the hardness depends on a coupled effect of privacy and partial information about the reward distributions; while in the low-privacy regime, the regret of bandits with $\\epsilon$-global DP reduces to that of bandits without privacy. In the algorithm, the authors then propose AdaP-UCB and AdaP-KLUCB methods which are $\\epsilon$-global DP extensions of UCB and KL-UCB algorithms, respectively. AdaP-KLUCB is shown to be the first algorithm that both satisfies $\\epsilon$-global DP and yields a regret upper bound that matches the problem-dependent lower bound. \n Strengths \n\n1. In spite of a very theoretical paper, this paper is very well written and is easy to follow. \n\n2. DP in bandit algorithms has received recent attentions. The hardness of local DP in bandits in terms of regret lower bound has been well studied in the literature. However, fundamental hardness of differentially private bandits with global DP is less studied, except for Shariff and Sheffet (2018). \n\n3. The proposed algorithm is able to achieve a regret upper bound that matches the problem-dependent lower bound. \n\n\nWeaknesses\n\n1. Technical novelty beyond Shariff and Sheffet (2018). Shariff and Sheffet (2018) studied the problem-dependent lower bound on regret for stochastic bandits of Bernoulli reward with $\\epsilon$-globally DP. This paper extends the results to general reward case. It is important to explicitly highlight the technical contributions beyond Shariff and Sheffet (2018) . \n\n2. In the experiments, the proposed algorithms were compared to the benchmark methods DP-SE and DP-UCB. However, there are some recent developments, e.g., DP-TS by Hu and Hegde (2022), on differentially private stochastic bandits. DP-TS and its lazy version are shown to be superior over DP-SE. It is important to compare with the state-of-the-art methods. \n\nHu and Hegde (2022), Near-Optimal Thompson Sampling-based Algorithms for Differentially Private Stochastic Bandits, UAI.\n\n3. Can authors provide some discussion on the requirement $\\alpha > 3$ in algorithms and theorems? I noticed that $\\alpha =1$ actually works the best as shown in the additional simulation results in the supplementary materials.\n\n4. The authors claim that Figure 3 show that in the low privacy regime ($\\epsilon > 0.3$), the regret of AdaP-KLUCB does not depend on $\\epsilon$. However, what I observe is that when $\\epsilon > 0.3$, the regret first decreases when $\\epsilon$ increases and then stabilizes. This step-shape decreasing pattern is consistent even when $\\epsilon < 0.3$. Therefore, more convincing numerical evidence would be helpful to justify this claim. \n\n See above weakness part. N/A"
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
5,
3
] | [
"asXUKzGs6e",
"jDPR1mgoGIA",
"eBkhpBMyuj",
"2t_Q2uVrWl3",
"NBnkMPY1Ge",
"iBEILsLK1b",
"qBkLCsnGULi",
"VFaZDiLN0V",
"CWZ5OY39dN1",
"pgw0RISwMgJB",
"DdaT3b8IMvk",
"_jD7zvtP1Nq",
"fikrpuDE2u6",
"IYQwvWwpjt",
"29fdSce3mVr",
"nips_2022_TjVU5Lipt8F",
"nips_2022_TjVU5Lipt8F",
"nips_2022_TjVU5Lipt8F",
"nips_2022_TjVU5Lipt8F",
"nips_2022_TjVU5Lipt8F"
] |
nips_2022_uOQNvEfjpaC | What is Where by Looking: Weakly-Supervised Open-World Phrase-Grounding without Text Inputs | Given an input image, and nothing else, our method returns the bounding boxes of objects in the image and phrases that describe the objects. This is achieved within an open world paradigm, in which the objects in the input image may not have been encountered during the training of the localization mechanism. Moreover, training takes place in a weakly supervised setting, where no bounding boxes are provided. To achieve this, our method combines two pre-trained networks: the CLIP image-to-text matching score and the BLIP image captioning tool. Training takes place on COCO images and their captions and is based on CLIP. Then, during inference, BLIP is used to generate a hypothesis regarding various regions of the current image. Our work generalizes weakly supervised segmentation and phrase grounding and is shown empirically to outperform the state of the art in both domains. It also shows very convincing results in the novel task of weakly-supervised open-world purely visual phrase-grounding presented in our work.
For example, on the datasets used for benchmarking phrase-grounding, our method results in a very modest degradation in comparison to methods that employ human captions as an additional input. | Accept | The paper presents a new approach, using two pre-trained models (CLIP and BLIP) as supervision to enable three tasks, including the newly proposed task WWbL, which is a joint open vocabulary description and grounding/localization task trained only with weak supervision.
I recommend acceptance based on the revised paper, the reviewers' comments, and the author response. I think the paper makes sufficient contributions:
- Overall idea and architecture
- The WWbL task, even if similar to previous tasks
- Extensive experimental evaluation and comparison to prior work
- Solid ablation study
The paper received mixed review scores with 2 Borderline rejects, 1 Borderline accept, and 1 strong accept.
The authors have, in my opinion, largely addressed the concerns and revised the paper; one of the remaining concerns of the weak-reject reviewers is novelty, which I think is sufficient.
My recommendation for acceptance is under the expectation that the authors revise the paper to address any outstanding points made by reviewers, e.g.
- additional alternative models (reviewer MVej) if possible
Additionally, I think it would be great if the authors discuss the relationship of WWbL to the task of dense captioning more clearly in the paper. | test | [
"_FvDhxCL0J5",
"LDCgPpP_SJW",
"j199srD8iX",
"SKa4GTws8Q9",
"MYUeVg9na_r",
"R0mDqVPUlKh",
"HJaGo6pkvHf",
"nnMUQq_2mhJ",
"Ss7tGKqWVTU",
"8AxZZuTxAtE",
"it6u6y0VauV",
"5zQ7FJl4cl",
"xZ6v_UdSN_n",
"_FTsFCbyMI",
"tutiBrOkaO",
"SdyZyw8-Xj"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer aWBt,\n\nplease look at the author response to your review, and comment on the corresponding author response, and if this changes your ratings / understanding / resolves your concerns / creates new concerns/questions.\n\nThank you, your AC\n\nPS: Don't respond to this message but directly to the author response, so author can also see your response!",
" We thank the reviewer for engaging in this discussion.\n\n__Re: Novelty__\n\nThe novelty argument made by the reviewer is based on a very high-level summary of our work and disregards our actual novelty claims. These claims are:\n1. We are the first to train a wealy supervised phrase grounding model in a way that utilizes pre-trained visual-language models.\n2. The training of this model is based on a novel architecture and novel loss terms, namely $L_{fore}$ and $L_{back}$. \n\nSince the ability in item 1 is useful (as Tab. 6 indicates) and since it is the basis of being able to solve WWbL, hopefully, we can agree that it is important enough. A counterargument to our novelty claims should be, therefore, in the form of identifying a previous work doing either 1 or 2.\n\nRegarding WWbL: to be precise, our claims are:\n1. It (weakly supervised trained, no text input at inference, output are a set of descriptions and segmentation maps) is a fundamental computer vision task, which was identified as such in the early days of vision research.\n2. It was never solved before.\n \nWe can probably agree on both points.\n\n__Re: Alternative models__\n\n\nRegarding alternative image-language models: following the review, we provided 8 different alternatives to the original CLIP we used. Unfortunately, we did not provide the alternative the reviewer wished for. This will be promptly corrected. However, is it fair to ignore the effort that was already done and the very encouraging conclusions? One surprising conclusion is that the method is much less sensitive to the underlying model than the simple zero-shot classification method.\n\n",
" Thank you for the authors' reply. \n\nThe major concern of mine is still the novelty. As also pointed out by other reviewers, is a direct combination of image-caption (region-based) + localization models. This level of novelty does not meet the bar of a NeurIPS paper. The authors also claimed that another contribution of the paper is proposing a new purely-visual task WWbL. However, in my opinion, the style of the paper is not a task-oriented paper, which should focus on benchmarks construction, and description of the motivation of the new task, etc. \n\nIn addition, my question on how the CLIP and BLIP models can affect the empirical results has not been well addressed. I would like to see replacements of CLIP and BLIP with other models, not different checkpoints of CLIP. \n\nI will keep my original rating. ",
" We have added an experiment that tests the accuracy of $g$ with different pre-trained CLIP models for the Phrase grounding task. The results, which can be found in Appendix H of the revised manuscript, indicate that the ResNet50 model slightly outperforms the ViT32 one when both are trained on the Open-AI dataset. Other datasets, both bigger and smaller, do not lead to better performance.\n\nAs mentioned before, we note that the gap observed for both WSOL and Phrase Grounding is much narrower than the gap that we see in top-1 accuracy for zero-shot classification. This indicates that our method is relatively stable concerning the CLIP model used during training.\n",
" According to the concerns:\n\n(1) In comparison to other WSOL/WSG methods, we present a new architecture and loss terms. For example, the loss terms $L_{fore}$ and $L_{back}$ are novel.\n\n(2) The WWbL is defined in our work as a weakly supervised method. There are two main reasons for doing so: (i) since it is inspired by Marr’s work, we aim to train in an ecological way (the term ecological in this context means in a natural way, as children do), and (ii) we want to have a detection setting that is as general as possible, and specifically, that requires strictly less supervision than WSG.\n\n(3) We completely agree that the evaluation is not complete. It is based on an established protocol that is retooled from WSG to WWbL. However, this metric neglects some aspects of the problem, which are also neglected in our definition of WWbL. Such neglects are discussed in the context of multiple instances of similar objects (Appendix G of the revised manuscript).\n",
" Dear Authors,\n\nThank you for your detailed response. My two remaining concern would be:\n\n1) WSOL / WSG is a well studied task with established methods. Thus, it would be better if the author could make it clear how g compares to prior weakly supervised localization method (at a high level; in one or two sentences).\n\n2) My view on the novelty of the task WWbL stays the same: once one has a caption model + a good grounding model, one could coin them together into a WWbL model. The definition of WWbL does not seem to contain any restriction on how we train the model (i.e., whether we can use localization annotations). Thus, we could directly combine many previous models for this purpose. E.g., COCO / VG trained image caption models + MDETR/GLIP. Should such models be compared to the proposed method? Or does WWbL entail no explicit localization annotations?\n\nIn addition, the evaluation of WWbL is debatble. What the kind of caption we want to model to generate? The current evaluation (in Table 7) evaluates model's ability to generate captions with the same focus as the pointing game datasets. But if the model chooses to generate diverse and creative captions (e.g., if it focuses on novel objects defined in LVIS), should it be penalized? It is also hard to define the granularity of the caption. E.g., when there is a person in the picture, the model can be concise or very detailed (describing each body part separately). A complete evaluation setup should also evaluate the caption quality / diversity / coverage.\n\nBut I do applaud the fact that the proposed method runs without any localization annotations and rely only on bootstrapping from a classification model's heat map.\n\n\n",
" We thank the reviewers for the detailed feedback and useful ideas. As detailed in the posted summary of changes, we have made an effort to factually address the raised concerns. \n\nWe would appreciate the opportunity to discuss our work further if the response to each reviewer has not already addressed all concerns.",
" We would like to thank all the reviewers for their valuable feedback. We are also grateful for pointing out the typos and the additional experiments to consider, which have been incorporated into the revised manuscript. \n\n[1] Following Reviewer aWBt, we add an experiment that tests the influence of the coefficient controlling $L_{rmap}$ (Appendix C).\n\n[2] Following Reviewer MVej, we add an experiment that tests the influence of the exact CLIP model used to train $g$, on the localization performance of g (appendix H).\n\n[3] Following Reviewer aWBt, we add quantitative results for the scenario where multiple instances of the same object appear in the image (appendix G). \n\n[4] Following Reviewer 4wQX, an additional background for BLIP on related work has been added - line 107 in the revised version.\n\n[5] Following Reviewer 4wQX, MDETR was also added to the related work - line 87\n\n[6] Following Reviewer J9Sc, typos have been corrected.\n\n[7] Following Reviewer aWBt, we add a discussion about biases and ethical issues in the limitation.\n\n[8] Following Reviewers 4wQX and J9Sc, we add our contributions to the introduction.\n\n[9] Following Reviewer MVej, we add the three mentioned papers to the related work.\n\n[10] Following Reviewer 4wQX, we add additional papers that employ GradCam for obtaining pseudo-labels.\n\n[11] Following Reviewer MVej, we have extended the experiment of item [2] above for the phrase grounding task and updated the results in appendix H.\n",
" Our paper focuses on two main contributions: first, a new state-of-the-art architecture for weakly-supervised localization with text and image as input. Second, building upon the first contribution, we tackle a purely-visual task called WWbL, in which given an input image and nothing else, an algorithm should predict what are objects in the image and where. This is the most fundamental task in computer vision, yet it was not dealt with in previous work. This is an open-world detection task that does not rely on any localization information during training. \n\nWe respectfully disagree that there is a limited novelty. The weakly-supervised trained localization network and the ability to solve the task are completely novel. The first employs new loss terms and techniques, and the second has no precedents, despite being specifically identified decades ago as the most fundamental problem in computational vision (Marr, 1980). The fact that there are no precedents points to the level of the challenge. The list of requirements in the WWbL problem is beyond that of any localization method: the weak supervision, the open-world requirements, and the need to provide a full textual description. \n\nIt is true that our method builds upon existing models, including CLIP, BLIP, and selective search. However, the ability to combine these relies on a crucial new component, which is the new weakly-supervised localization method. The utilization of large image-text models, such as CLIP and BLIP, for other tasks than the ones used for training these, is a very active research domain [L27-34]. This points to the validity of our research direction: building upon such models, we are able to solve, for the first time, the most fundamental computational vision problem.\n\nWe acknowledge that our method’s success is conditioned on the quality of the image-text models it utilizes. This is, of course, true for most work that builds upon pre-trained models. Following the review, we examine the ability of network $g$ to generalize when switching between various CLIP models. As can be seen in Appendix H, the variance in the performance of $g$ is much lower than the variance in the zero-shot classification performance of the CLIP models used to train $g$.\n\nA similar experiment was conducted for the phrase grounding task. Due to resource constraints up to this point in time, we are only able to do it with the RN50 backbone trained on the open-ai dataset. The results are extremely encouraging and show only a modest drop in the performance compared to the CLIP model released by open-ai (75.1 vs 75.6 on flicker30K). This gap is much narrower than the gap that we see in zero-shot classification and in the WSOL task on the CUB benchmark.\n\nThank you for pointing us to three recent related works. We have added these to the related work section. Contributions (a,c) solve vision-language tasks such as image captioning and VQA. These can be used in our work instead of BLIP, but this effort is out of the scope of our current effort. Work (b) is a phrase grounding work, formulated as masked language modeling. It is supervised, and our work is weakly supervised. \n\nWe are not sure we understand the request “Have the authors compare the model with some pre-trained methods like CPT-Colorful Prompt Tuning, Frozen, or Flamingo?” CPT-colorful is a phrase grounding model that uses a fully supervised training scheme, while our work does not use any location annotation. 
Flamingo is a captioning model that is especially suited for few-shot learning. We could, in principle, adapt our method to use it instead of BLIP. However, this is out of the scope of the current effort and, in any case, the code and weights of Flamingo are not public. \n",
" We now note in the limitations section that “our weakly supervised learning scheme does not distinguish between multiple instances of the same object. While Algorithm 1 can be improved to somewhat mitigate this, by separating multiple objects that have the same caption, building such a solution in a robust way may be challenging without additional supervision.” \n\nIn the new appendix G, we present our method's results for images with multiply instances of the same object, such as apples and dogs. We present both the results of the WWbL method and the results for grounding specific sentences that are used to study the output of network $g$.\n\nThe results indicate that WWbL does not typically select sentences that distinguish between the various objects. However, network $g$ has the capacity to separate between different objects of the same class given specific captions. When there are multiple objects of exactly the same type, e.g., multiple green apples, $g$ marks all of them. We note that $g$'s heatmap does peak at specific parts of the objects, which may facilitate instance separation. Both the ability to extract specific captions automatically and the ability to perform instance segmentation are left for future work. \n\nFollowing the review, we have also applied our method with different weighting parameters. Due to the time limit, we have performed this sensitivity analysis only for $\\lambda_3$, which is the only coefficient that is not set to one in our experiments (it is fixed at a value of 4). The results are presented in Table10 in appendix C. Evidently, for a wide range of this parameter, the performance of the $g$ is similar. We also note that the same $\\lambda_3$ seems to be optimal for different tasks.\n\nFollowing another review, we have also replaced the pre-trained CLIP model with other models, in order to validate the robustness of our method to a switch of the underlying visual-language model. As can be seen in Appendix H, the variance in the performance of $g$ is much lower than the variance in the zero-shot classification performance of the CLIP models used to train $g$. \n",
" Our paper focuses on two main contributions: first, a new state-of-the-art architecture for weakly-supervised localization with text and image as input. Second, building upon the first contribution, we tackle a purely-visual task called WWbL, in which given an input image and nothing else, an algorithm should predict what objects are in the image and where. This is the most fundamental task in computer vision, yet it was not dealt with in previous work. This is an open-world detection task that does not rely on any localization information during training. \n\nWe respectfully disagree that there is a limited novelty. The weakly-supervised trained localization network and the ability to solve the task are completely novel. The first employs new loss terms and techniques, and the second has no precedents, despite being specifically identified decades ago as the most fundamental problem in computational vision (Marr, 1980). The fact that there are no precedents points to the level of the challenge. The list of requirements in the WWbL problem is beyond that of any localization method: the weak supervision, the open-world requirements, and the need to provide a full textual description. \n\nIt is true that our method builds upon existing models, including CLIP, BLIP, and selective search. However, the ability to combine these relies on a crucial new component, which is the new weakly-supervised localization method.\n\n\"Purely visual\" referees to inference time, and not to train time. This would be clarified further. Obviously, one cannot learn such NLP-heavy tasks without any text being involved during training. \n\nAs far as we can ascertain, CLIP and BLIP were not trained on the test sets of Flicker or ReferIt used for the evaluation of our work against previous works. To alleviate such concerns, we have also provided qualitative results on datasets of very recent news images (Appendix D Figure 6 and 7).\n\nWe compare our method to GAE [12] on both the WSG and WWbL tasks. GAE leverages per-trained information from both CLIP and BLIP. Our method’s results on various datasets are higher with a large margin. It is not feasible for us to retrain GbS[3] and MG[2] using larger datasets.\n\nWe note that the utilization of these large image-text models for other tasks is a very active research domain (lines 29-32). This points to the validity of our research direction: building upon such models, we are able to solve, for the first time, the most fundamental computational vision problem.\n\nAll requests for elucidation have been fully addressed in the revised version. We would further submit the paper for another round of proofreading.",
" Our paper focuses on two main contributions: first, a new state-of-the-art architecture for weakly-supervised localization with text and image as input. Second, building upon the first contribution, we tackle a purely-visual task called WWbL, in which given an input image and nothing else, an algorithm should predict what objects are in the image and where. This is the most fundamental task in computer vision, yet it was not dealt with in previous work. This is an open-world detection task that does not rely on any localization information during training. \n\nWe respectfully disagree that there is a limited novelty and no challenge. The weakly-supervised trained localization network and the ability to solve the task are completely novel. The first employs new loss terms and techniques, and the second has no precedents, despite being specifically identified decades ago as the most fundamental problem in computational vision (Marr, 1980). The fact that there are no precedents points to the level of the challenge. The list of requirements in the WWbL problem is beyond that of any localization method: the weak supervision, the open-world requirements, and the need to provide a full textual description.\n\nIt is true that our method builds upon existing models, including CLIP, BLIP, and selective search. However, the ability to combine these relies on a crucial new component, which is the new weakly-supervised localization method. These points are already made in the paper, but we did not include the list of contributions the reviewer asks for. This is now added to the revised version.\n\nThe words “simple combination” used by the reviewer, could be a valid description of algorithm 1 but not of our entire work. To enable algorithm 1, one has to first obtain network g (the localization network), which, as shown in our ablation study, requires multiple novel loss terms.\n\nThe question “how it compares to previous weakly-supervised methods for localization” is answered in Tab. 1 and most of the related work section. We would appreciate more specific feedback and would be happy to comply. \n\nThe remark “the idea of using a `classification’ model's heatmap for localization is not particularly new so I would appreciate a detailed discussion” is correct. We have added such a discussion in the revised version. \n\nThe revised version describes BLIP in more detail, and we have added a reference to MDETR.",
" The paper proposes the task of Weakly-Supervised Open-World Phrase-Grounding, which seeks to first generate captions for local regions and then ground the caption. Meanwhile, the paper proposes a way to train a mask generator given image and text with only weak supervision (based on CLIP). Pros:\n+ The paper studies a new task and proposes a sensible baseline to this task.\n+ The paper contributes a way to train a mask generator with only weak supervision with decent performance on benchmark (e.g., Flickr30K), which is \"somewhat\" novel to my knowledge.\n\nCons:\n- The newly proposed task is not particularly novel or challenging. It seems like a direct combination of image-caption (region-based) + localization. I would have liked to see some discussion on the unique challenge this task poses and how solving it might benefit real-world needs or other tasks.\n- The method to the newly proposed task is also a simple combination of previously methods: selective search for proposing regions; BLIP for generating the captions given a region; a localization network. \n- The paper also proposes a way to train a mask generator given image and text with only weak supervision (based on CLIP). However, the paper does not come with a discussion on the novelty of the proposed method and how it compares to previous weakly-supervised methods for localization. The idea of using a \"classification\" model's heatmap for localization is not particularly new so I would appreciate a detailed discussion.\n- The paper is a bit confusing on its core contribution: it lacks a full discussion on the uniqueness of the newly proposed task if the task if the core contribution; the method of training the mask generator lacks a discussion to prior work so it is also hard to judge the contribution of the method part.\n\nMinor points:\n\nProviding more backgrounds on BLIP would be helpful.\n\nMDETR should be referenced.\n\nGLIP [42] does not use CLIP initialization.\n\n See cons. N/A.",
" This paper proposes and studies a new task, which localizes image regions with masks and describes them with natural language. The paper studies the task in a weakly-supervised open-world setting. In order to tackle the task, two recently proposed pre-trained vision-language models, namely, CLIP and BLIP are leveraged to produce multi-modal matching score and generate candidate captions, respectively. Meanwhile, an encoder-decoder network is trained to generate foreground mask for grounding. Superior performances are shown in multiple benchmark datasets, for the three evaluation tasks. Strengths:\n\n1. A new perspective for vision-language community, and open-world applications. Comprehensive techniques are proposed with extensive supportive experiments.\n2. State-of-the-art quantitative results on multiple benchmark datasets for the three tasks (WSOL, WSG, WWbL).\n\nConcerns:\n\n1. Regarding novelty. The novelty might not reach the standard of NeurIPS conference. The proposed framework largely depends on CLIP-like pre-trained models and lacks original innovation. Moreover, the proposal claims to be \"purely visual\" (without textual input), but actually the textual information comes from pre-trained BLIP model at inference. I expect more discussion regarding the novelty.\n2. Regarding empirical results. I am not quite sure whether it is fair to compare with previous methods, considering the proposed method actually leverages the pre-trained information from CLIP and BLIP. And I am also not sure if the datasets (e.g., Flicker, ReferIt) used for comparison have overlapping (or similar) images with the large-scale datasets involved in the pre-trained models (i.e., CLIP and BLIP). The comparison fairness should be discussed.\n3. The writing quality and presentation should be improved. The writing and expression highly affect deeper understanding of this work. There are quite many expression issues and typos throughout the whole paper. Some of the typos are listed as follows.\n(1) L50, \"A two-stage inference time procedure\" shoud be \"A two-stage inference procedure\". (2) L69, \"during the inference test\" should be \"during the inference\". (3) L197, \"a encoder-decoder architecture\" should be \"an encoder-decoder architecture\". (4) L241, \"An SGD optimizer\" should be \"A SGD optimizer\". 1. Discuss the novelty and elaborate the contributions (Concern #1).\n2. Discuss the empirical fairness (Concern #2). The authors have adequately addressed the limitations of their work.",
" In this paper, the authors explore using pre-trained CLIP for three tasks: (1) object localization, (2) phrase grounding and (3) WWbL: generating object masks and corresponding captions for an image. Their model encodes both image and text with VGG and CLIP respectively, and generates a heat map. They use two weakly supervised losses: (1) foreground loss which encourages the masked image with the heat map to match the text input, (2) background loss which encourages the remaining image to not match the text input, (3) relevancy heatmap which measures the difference between the generated heat map and the relevancy heatmap of CLIP, and (4) regularization loss for the generated heat map. Performing inference on (1) and (2) are trivial, whereas for (3) they designed an algorithm which first unsupervisedly propose region, and then get captions for them to convert the problem to a phrase grounding task. \n\nObject localization results on CUB, and phrase ground results on VG, Flickr and ReferIt show that their model is better than previous methods, and each of the four losses plays a role in improving the results. On their proposed new task, WWbL, the results are also in favor of their proposed model. Combining all of the results, this paper found that CLIP can be useful in various language related object segmentation tasks. \n Strengths:\n\nIt is natural to use a pre-trained vision-language model for object detection/localization. Straightforward and simple solutions like [1], have been proposed and could perform reasonably well in a zero-shot manner. This paper is also a neat method, based on the assumption that objects to localize have higher similarity to the text in CLIP embeddings, compared to background. With the four simple and intuitive losses, the authors show their model performs better than the previous methods.\nThe authors also contribute a new task, WWbL.\n\n[1] https://github.com/shonenkov/CLIP-ODS\n\nWeaknesses:\n\n1. The model may fail for ambiguous captions. For example, given an image of four apples, if the text is “an apple”, the desired output mask should be one of the apples for a deterministic model. Since the foreground loss and background loss maximizes the difference between the image representation along the text embedding dimension, one of the most likely results is that all of the apples are in the foreground (if CLIP “thinks” four apples to no apples is more different than one apple and three in terms of the caption “an apple”). This means that even for a perfect pre-trained model, such loss may still result in undesired output given ambiguous captions. \n\n2. Related to 1, this paper lacks a qualitative study of the effects of coefficient. It is shown in this paper that all of the losses are important for the metrics, but would different configurations have different tendencies? Can configurations be optimized for different applications? This might be out of scope, but if the authors had provided insights on this, this paper would have been much stronger. \n Please refer to the weakness section above. It is nice of the authors to point out the limitation that their work only applies to single images. It would also be nice if the authors can also talk about the potential ethical impact of their paper, e.g. would the bias in the pre-trained models be reinforced or reduced in this method? ",
" This paper proposes a new solution for weakly-supervised open-world phrase grounding. The method is built upon the off-the-shelf CLIP model and captioning model, only taking images as the input. The experiments are conducted on several datasets, showing the superior performance of the proposed solution. Strengths:\n1. The method is simple yet effective. By combining strong pretrained model and off-the-shelf model, it achieves good performance without text inputs. \n\n2. The empirical results are several benchmarks are promising. \n\nWeaknesses:\n1. The novelty of the paper is quite limited. Most of the components are not developed by the paper. \n\n2. It is not clear how this method can perform well in such a weakly-supervised setting. Its generalization ability may be contributed by the powerful pretrained model (CLIP). The author should performant some ablation study to change these models to other candidates to validate this. \n\n3. The paper ignored some work related to pretrained model for zero/few shot settings (with prompt, etc) [a,b]\na. A Good Prompt Is Worth Millions of Parameters: Low-resource Prompt-based Learning for Vision-Language Models. ACL 2022.\nb. CPT: Colorful Prompt Tuning for Pre-trained Vision-Language Models\nc. SimVLM: Simple Visual Language Model Pretraining with Weak Supervision. ICLR 2022. 1. Have the authors tried to change the CLIP model and BLIP to other models to see how the new alternative can perform? \n\n2. Have the authors compare the model with some pre-trained methods like CPT-Colorful Prompt Tuning, Frozen or Flamingo? How can the proposed solution perform if there are no strong pre-trained models available? "
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
8,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
3
] | [
"tutiBrOkaO",
"j199srD8iX",
"Ss7tGKqWVTU",
"Ss7tGKqWVTU",
"R0mDqVPUlKh",
"5zQ7FJl4cl",
"nips_2022_uOQNvEfjpaC",
"nips_2022_uOQNvEfjpaC",
"SdyZyw8-Xj",
"tutiBrOkaO",
"_FTsFCbyMI",
"xZ6v_UdSN_n",
"nips_2022_uOQNvEfjpaC",
"nips_2022_uOQNvEfjpaC",
"nips_2022_uOQNvEfjpaC",
"nips_2022_uOQNvEfjpaC"
] |
nips_2022_hFni381edL | SAPA: Similarity-Aware Point Affiliation for Feature Upsampling | We introduce point affiliation into feature upsampling, a notion that describes the affiliation of each upsampled point to a semantic cluster formed by local decoder feature points with semantic similarity. By rethinking point affiliation, we present a generic formulation for generating upsampling kernels. The kernels encourage not only semantic smoothness but also boundary sharpness in the upsampled feature maps. Such properties are particularly useful for some dense prediction tasks such as semantic segmentation. The key idea of our formulation is to generate similarity-aware kernels by comparing the similarity between each encoder feature point and the spatially associated local region of decoder features. In this way, the encoder feature point can function as a cue to inform the semantic cluster of upsampled feature points. To embody the formulation, we further instantiate a lightweight upsampling operator, termed Similarity-Aware Point Affiliation (SAPA), and investigate its variants. SAPA invites consistent performance improvements on a number of dense prediction tasks, including semantic segmentation, object detection, depth estimation, and image matting. Code is available at: https://github.com/poppinace/sapa | Accept | The paper focuses on the task of feature upsampling, specifically in decoder layers for dense prediction problems. The proposed point affiliation module can be used in upsampling kernels to produce semantically smooth and boundary-preserving upsampled sets. The paper received four detailed reviews from experts. There was a healthy discussion between authors and reviewers during the discussion period, and the extra analyses, explanations, and experiments from the authors helped resolve most of the concerns raised by the reviewers. With these extra items presented in the discussion period, the paper has reached the level of impact and contribution expected of NeurIPS papers. The authors are recommended to include them in the final version of the paper. | train | [
"BFhhkJlE6hj",
"vJjKQThIYya",
"JQ5Nt4rZav8",
"fzydfh-2XdV",
"9DdWooU8D17",
"XOsTNBcLwe",
"3gPGSInzJGX",
"NKdgQ2bON_1"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for positive comments and consider our approach useful. We answer the questions as follows.\n\n**Actual runtime comparison.**\n\nWe test the runtime on a single NVIDIA GeForce RTX 1080Ti GPU with Intel Xeon CPU E5-1620 v4 @ 3.50GHz CPU. \nBy upsampling a random feature map of size 256\\*120\\*120 to 256\\*240\\*240, CARAFE, IndexNet, A2U, and SAPA-B \ntakes 2.75 ms, 7.39 ms, 8.53 ms and 6.24 ms, respectively (averaged over 1000 independent runs). \nConsidering that SAPA processes five times more data than CARAFE (due to the high-res encoder features), \nwe think the time consumption of SAPA is acceptable.\n\n**More qualitative results.**\n\nDue to the limited capacity of main text, we have moved the qualitative results to the supplementary material. \nPlease refer to Figure S1-S3 in the supplementary material for the visualizations on the reported task.\n\n**Ablation study on kernel size.**\n\nIndeed it seems that the ablation study on kernel size is too simple. \nBesides the quantitative experiment results, we can explain more why we choose $K=5$ here. \nConsidering the situation where the boundary in the low-res feature is not that sharp, \nthere are likely gradually changing values on the boundary, where one cannot see distinct semantic clusters in a small window such as $3\\times 3$. \nOn the other hand, because our used normalization function $h(x)=e^x>0$, i.e., every kernel weight is larger than zero, \nif too large kernel size is chosen, the smoothing effect of the kernel increases, which has also a negative influence on sharp boundary. \nTherefore, by considering all these factors and the ablation study results, we choose the kernel size as $5$. \nWe will also investigate more on our operator, perhaps in a journal extension due to the limited capacity of the conference paper.",
" The authors thank the reviewer for constructive comments, particularly on presentation. We address these concerns as follows.\n\n**The notion of 'cluster' seems misleading. There is no explicit clustering and affiliation estimation process in the proposed framework. \nThe manuscript needs to be revised in a more compact form.**\n\nOur approach is not related to clustering approaches. We use the term 'semantic cluster' to indicate a region where points have similar semantic meaning. \nSince this term has been clearly defined in the footnote of page 1, we think this may not mislead readers. \nIn addition, we do have an affiliation assignment process, but in an implicit way with the similarity scores in the kernel.\nBy encoding the mutual similarity between encoder and decoder feature points in the kernel, \nthe upsampled point could be assigned to a semantic cluster that is most similar to. \n\nIt is true that our idea can be explained with Eq. (2), Eq. (3), and Figure 3, but we think other parts can help one to understand our idea more easily. \nFollowing the suggestion of the reviewer, we have simplified the symbol system and have rewritten some parts of text to improve the clarity and conciseness, \ne.g., we have squeezed Eq. (4) into one single line. Please take a look at our submitted revision.\n\n**The equation (4) is meaningless.**\n\nIn contrast to the reviewer, we think that Eq. (4) expresses a key characteristic of SAPA for noise suppression on the encoder feature.\nOur work originates from an observation that upsamplers using encoder features like IndexNet and A2U work poorly in semantic segmentation. \nFrom Fig. 2, their upsampling kernels introduce unexpected texture or color from encoder features into upsampled features, which affects semantic coherence.\nYet, involving encoder features does enhance the boundary quality, \nso wezh explore how to use encoder features to compensate details while not introducing noise, especially on interior regions. \nEq.(4) and the comparison in Figure 2 exactly explain and emphasize how we effectively block the noise from the encoder features. \nHence, we expect to leave Eq. (4) as it is.\n\n**The kernel is similar to Joint Bilateral Filter (JBF).**\n\nIn JBF, $J_p=\\frac{1}{k_p}\\sum\\limits_{q\\in\\Omega}I_qf(||p-q||)g(||\\widetilde{I}_p-\\widetilde{I}_q||)$, where $f$ is the spatial filter kernel, \nsuch as a Gaussian centered over $p$, and $g$ is the range filter kernel conditioned on a second guidance image $\\widetilde{I}$, \ncentered at its image value at $p$. $\\Omega$ is the spatial support of the kernel $f$, and $k_p$ is a normalizing factor.\n\nFrom the formulation above and Eq.(3) in our paper we see that SAPA and JBF actually are not similar but differ in: \n\n- **The way to generate kernels.** JBF uses the product of two kernels—a spatial filter conditioned on the distance prior of $I$ \nand a range filter conditioned on $\\widetilde{I}$—to generate the kernel, while SAPA only generate one kernel with mutual similarity.\n\n- **The source of similarity comparison.** In the range filter, JBF calculates the similarity only between the points in the higher-res feature \n$\\widetilde{I}$; however, SAPA computes the similarity of each high-res point in the encoder and its corresponding low-res decoder points within a window.\n\nIn JBF, when the higher-res feature point is in smooth regions, i.e., $\\widetilde{I}_p=\\widetilde{I}_q$, then the kernel becomes Gaussian. 
\nBut in our paper, we discuss: when the low-res feature points are in a smooth region, i.e., $I_p=I_q$, how to retain that smoothness. \nConsidering texture noise, which results in $\widetilde{I}_p\neq\widetilde{I}_q$, if JBF is used, the kernel will not be a Gaussian, \nand will even be sensitive to the texture noise. However, in this case, SAPA (Eq. (4)) enables the kernel to remain an unchanged constant, \nregardless of the value of the encoder point.\n\n**The explanation of Fig. 3.**\n\nWe are sorry for the confusion here. SAPA has three variants, named SAPA-I, SAPA-B, and SAPA-G. \nWe attempted to merge the three into a single figure, but it seems rather confusing. \nIn Figure 3, the addition symbol and the switch indicate our gated bilinear similarity version named SAPA-G. \nWe have found a way to improve the clarity of Fig. 3 and have updated it in the revision.\n\n**The sensitivity to the noise in the encoder feature.**\n\nAs mentioned above, noise suppression does not rely only on a single encoder point, \nbut on the similarity between each encoder point and its corresponding local decoder points. \nPer Eq. (2) and Eq. (4), SAPA will upsample a smooth region to a smooth region, regardless of the value of the encoder point.\nNote that we refer to 'noise' as the unexpected details in encoder features compared with the label mask. \nIf segmenting a person from the background, then the clothes on the body would be noise. \nWe do not refer to the signal noise that may destroy the image content. We assume we still process natural images.",
" We thank the reviewer for considering our work novel and interesting. We address the questions and concerns as follows.\n\n**Lower performance than CARAFE on object detection.**\n\nThe AP metric of object detection can be influenced by both classification and localization; \nCARAFE mainly improves misclassification due to the ability of semantic mending, for instance, by reducing false positives; \nhowever, when the classification is not solved well, the advantage of better localization introduced by SAPA can be marginal. \nSuch a difference comes from the different emphases of the two upsampling operators.\n\n**The proposed approach has a little bit more steps compared with baselines.**\n\nIn Table 1, we ignore the steps of other upsamplers in generating kernels, and depict ours in detail, \nso it seems that ours is more complicated. Actually, our implementation steps are similar to the compared upsamplers; \nif the gating mechanism is not considered, SAPA is even more concise. \nWe add the gating mechanism because this additional step brings considerable increase of performance. \nMoreover, SAPA can achieve good performance even without the gating mechanism.\n\n**Segmentation performance on the CNN-based approaches.**\n\nWe select the three transformer-based baselines because they are the recent mainstream in semantic segmentation. \nSAPA is applicable to CNN models as well. To prove this, here we supplement an experiment with UperNet on ADE20K dataset with ResNet-50 (R50) as the backbone. \nWe train UperNet-R50, with upsamplers in FPN replaced by SAPA-B, for 80 K iterations, and reach the mIoU of 41.47, \nwhich outperforms the original bilinear baseline by 0.77.\n\n**Dealing with high-frequency data within a local neighborhood.**\n\nThe high-frequency neighborhood follows the same principle as the low-frequency neighborhood; the former may result in additional semantic clusters, \nbut the assignment of point affiliation still obeys the same rules given in the paper. Due to the complexity in expressing this graphically, \nwe only use the case of two clusters as an example in the paper. One empirical evidence we can offer here is the evaluation on the matting task, \nwhere ground-truth alpha mattes contain many high-frequency local regions; SAPA still invites consistent performance improvements in all metrics.",
" We appreciate the reviewer for highlighting the significance of point affiliation. We address the concerns as follows.\n\n**Learnable upsampling parameters may lead to overfitting.**\n\nWe address this concern from three aspects:\n\n1) Our framework only introduces a few amount of additional parameters, which occupies $0.03\\%\\sim1.4\\%$ of the overall number of parameters in the baseline models. \nEven if a model overfits data due to excessive number of parameters, perhaps the baseline model should be checked first. \nIt is unlikely that such a few additional upsampling parameters would dominate overfitting.\n\n2) SAPA-I has no additional parameter. The results in Table 2 show that SAPA-I still achieves good performance.\n\n3) Overfitting may loosely related to the number of upsampling parameters but a specific upsampler used. \nWe use a toy-level experiment to exposit this claim. \nSince overfitting happens more likely on small data sets, we select a small two-class segmentation dataset, the Weizmann Horse, \nand use SegNet as the baseline. We replace the default max unpooling with NN interpolation, bilinear interpolation, and SAPA-B, respectively. \nWe train the model for 50 epochs and use a constant learning rate of 0.01. \nBy plotting the learning curves, the training losses of the three upsamplers all decrease smoothly to $0.26\\sim0.36$, \nbut their val losses vary significantly. \nAt the 10-th epoch the val loss of SegNet-NN begins to increase, from the minimum of 0.126 to 0.166 at the 50-th epoch, \nand that of SegNet-bilinear increasees at the 14-th epoch, from 0.119 to 0.138 at the 50th epoch. \nInstead, the training loss of SegNet-SAPA decreases faster and the val loss reaches the minimum of 0.057 within 10 epochs, \nand fluctuates from 0.057 to 0.060 during the rest of epochs. The mIoU metrics are 89.5 for NN, 90.1 for bilinear, and 94.9 for SAPA. \nThe experiment shows that the additional learnable parameters in SAPA encourage not only fast training but also better generalization.\nHence overfitting may have weak correlation with learnable upsampling parameters. \nPerhaps as Bengio's ICLR 2017 paper says, \"understanding deep learning requires rethinking generalization\".\n\n**Normalization should be used before similarity computation due to different data distribution between encoder and decoder features.**\n\nWe do have a LayerNorm used before similarity computation (see our code implementation in the supplementary). \nWe thank the reviewer for reminding us of this detail that has been overlooked in the submission. \nIndeed, without normalization, the network even cannot be trained. \nWe have supplemented this detail in the revision (L211-L214, \"In practice, encoder and decoder features may have different data distributions, \nwhich is not suitable to compare similarity directly. Therefore we apply $\\tt LayerNorm$ for both encoder and decoder features before similarity computation. \nIndeed we observe that the loss does not decrease without normalization\").\n\n**The current ablation study in Table 4 did not cover all the variants.**\n\nWe think the ablation study in Table 4 has included most circumstances mentioned by the reviewer.\nSAPA has two modules: kernel generation and feature assembly. Feature assembly is a standard procedure and does not require ablation study. 
\nThe kernel generation has 3 steps: XY embedding, similarity computation, and kernel normalization.\n\n- **XY embedding.** The effect of embedding or not can be observed by comparing SAPA-I and SAPA-B, and the results are shown in Tables 2, 3, and 4. \nAdditionally, we also explored the influence of the embedding dimension in Table 4.\n\n- **Similarity computation.** We have validated inner product, bilinear, and gated bilinear similarity in the main experiments (Tables 2 and 3). \nThen, because gated bilinear similarity follows a gated addition manner, in Table 4 we further included a plain addition baseline (P). \nComparing P and G in Table 4 also justifies the effectiveness of gated addition.\n\n- **Kernel normalization.** We have explored four normalization functions for computing the similarity score in Table 4. \nAdditionally, we also explored the influence of the upsampling kernel size.\n\nPerhaps because all the results are summarized in a single line in Table 4, it is difficult to interpret them.\nWe have reorganized Table 4 and double-checked what is missing in the ablation studies. \nWe further supplement the result of the upsampling kernel without normalization (41.45 mIoU). \nSee Table 4 of the revision (L270-L272).\n\n**The use of the \"low-rank\" version.**\n\nThe motivation behind the low-rank version is to reduce the number of parameters and the computational complexity. \nWe have clarified this in the revision (L200-L202).\n\n**Using subscript of subscript should be avoided.**\n\nWe have simplified the symbol system to improve the readability. Please have a look at the revision.",
" This paper introduces a point affiliation for feature upsampling which is one of the most essential parts, especially dense prediction networks. The proposed method generates similarity-aware kernels by comparing the similarity between each encoder feature point and the spatially associated local region of decoder features. It also introduce a lightweight upsampling operator, termed Similarity-Aware Point Affiliation (SAPA) and its variant. Experiments show the superiority of the proposed upsampling module on various depth prediction tasks. + Introducing the notion of point affiliation into feature upsampling is interesting and makes sense. Many other followers will benefit from such notions. \n+ In depth comparison to other previous upsampling methods such as CARAFE or IndexNet, e.g., in Fig. 2, is interesting.\n+ In most cases, the state-of-the-art performance is attained when it is incorporated with various dense prediction networks.\n - Learning feature upsampling is interesting and makes sense. But this framework requires additional learnable parameters, which may make the networks more suffer from the overfitting problem, less generalizable. It would be great if there are experiments about this overfitting issue and generalization issues.\n- In Kernel generation part, the similarity between encoder feature and decoder feature is computed. But, such encoder feature and decoder feature may have different data distributions, so directly computing the similarity between them may be sub-optimal. To overcome this, very simple normalization prior to computing similarity may be used. It would be great if the relevant comments or experiments are additional conducted.\n- The proposed module, SAPA, consists of many sub-modules, e.g., Y embedding, X embedding, Gated addition, Kernel generation, etc. So through experiments for ablation study is required. The current ablation study in Table 4 did not cover all the variants of ablation study. \n\nMinor comments:\n- Using subscript of subscript, e.g., in Line 184, may make the reader follow the paper. It would be better if simpler notations are used.\n- In Similarity Function in Line 203-205, why \"low-rank\" versions are used? Please clarify this. The paper discussed the limitation of the proposed method.",
" The paper presents an interesting feature unsampling module, which can be flexible applied to the tasks with upsampling like segmentaiton, detection and depth estimation. The main idea is to generate the kernels based on the feature clustering similarity. It provides extensive experiments on different dense prediction tasks and consistent performance gain has been obtained. The paper is well motivated and the presentation is clear. strength:\n1. The idea to design a feature upsampling framework based on the clustering similarity is interesting and novel.\n2. The proposed module obtains state-of-art performance on dense prediction tasks like segmentation, detection, and depth estimation, without large computational overhead.\n3. The proposed module is well motivated and the presentation of the paper is clear. \n\nweakness:\n1. As shown in Table 3, the performance of on object detection is a little bit lower than the baseline of CARAFE [1]. Although the authors provide an explaination in Section 6, usually the better segmentation results should lead to better localization on the bounding-box level. \n2. Compared with the baselines, the implementation of the proposed approach is a little bit complicated with more steps. 1. The segmentation experiments are based on three transformer-based models. How about the segmentation performance for the CNN-based approaches?\n2. The proposed approach relies on the similarity score. It is similar to a new interpolation between the features. Thus, how about is result of the high-frequency data if the ground-truth mask within a local neighborhood changes abruptly? The paper claims the potential limitation on the object detections which requires on the semantic mending. ",
" This work proposes a new approach for upsampling decoder features with the guidance of encoder feature, leading to semantic preserving and detail delineation. The key idea is to apply similarity-aware upsampling based on encoder and decoder features. To be more specific, the decoder feature is upsampled via the weighted sum in the sub-region, where the weight is calculated using the similarity between encoder and decoder features, as shown in Figure 3. +) The dense prediction tasks (semantic segmenation, depth etimation, and image matting) often require an accurate and detail-preserving upsampling of decoder features, and the proposed simple upsampling scheme can be very effective with no significant computational overhead.\n\n+) Experiments validated the performance gain on the above-mentioned tasks.\n\n-) The overall idea is very simple, and it can be explained using (2) and (3) together with Figure 3. Nevertheless, the manuscript needs to be revised in a more compact form. In the abstract and introduction, authors stated that the point affiliation (semantic cluster) should be incorporated in the upsampling process of decoder features. 'Cluster' seems to be rather misleading. The proposed idea is just a similarity based summation, where the similarity is measured between encoder and decoder features. There is no explicit clustering and affiliation estimation process in the proposed framework. \n\n-) The equation (4) is rather meaningless. The proposed kernel is rather similar to joint bilateral filter, and it is well-known that in the smooth region, the joint bilateral kernel becomes the Gaussian function. Though the proposed kernel becomes a uniform function, such a derivation seems unnecessary in the main paper (maybe it can be moved to supplementary material).\n\n**********************************************************************************\n* After rebuttal\n\nAuthors addressed some questions in the rebuttal. I appreciate it.\n\nThis work introduces a simple yet effective approach for dense prediction tasks. This kind of upsampling (using two inputs) has been adopted in various vision tasks, but using it in the upsamling process for decoder features sounds interesting.\nNevertheless, considering the simplicity of the overall method, I would like to keep the initial rating (Borderline accept).\n**********************************************************************************\n - In Figure 3, the decoder feature is added into the encoder feature. This was not explained in the paper.\n\n- The similarity kernel w_{i1, j1, m, n}, which is used for obtaining the upsampled decoder feature at (i1, j1), is calculated with the single encoder feature at (i1, j1) and the set of decoder features (m,n) for m=-r,...,r and n=-r,...,r. This means that upsampling relies on the single encoder feature at (i1, j1), and thus it seems that the proposed method is still too sensitive to the noise of the encoder feature.\n N.A.",
" This paper introduces a new kernel upsampling module which encourages not only semantic smoothness but also boundary sharpness in order to enhance the performance of semantic segmentation/matting tasks. The key idea is to design a similarity-aware kernel which compare the similarity between encoded features with local spatial awareness about the decoded features. The paper has also proposed a lightweight upsampling operator, Similarity-Aware Point Affiliation (SAPA), which demonstrated high performance on various dense prediction tasks. The proposed method has compared with CARAFE, IndexNet and A2U in term of both quantitative results and computational complexity. Overall, this is a solid paper and I think the proposed method is very useful in dense prediction tasks that require very accurate boundary. The experimental comparisons are also sufficient to validate the performance of the proposed method, and I think its simplicity would make this method useful in real world applications. Although table 1 has summarize the theoretical complexity of various, I would like to see the actual runtime of the proposed method compared with other methods. \n\nBesides the quantitative results in table 2 and table 3, I would like to see some qualitative results which show the boundaries of the segmented regions, especially for the matting tasks. \n\nThe ablation study about the kernel size seems to be too simple, I hope there is a deeper study about its effectiveness.\n The paper has discussed it properly."
] | [
-1,
-1,
-1,
-1,
5,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
3,
3,
4,
4
] | [
"NKdgQ2bON_1",
"3gPGSInzJGX",
"XOsTNBcLwe",
"9DdWooU8D17",
"nips_2022_hFni381edL",
"nips_2022_hFni381edL",
"nips_2022_hFni381edL",
"nips_2022_hFni381edL"
] |
nips_2022_iQpaHC7cPfR | SAMURAI: Shape And Material from Unconstrained Real-world Arbitrary Image collections | Inverse rendering of an object under entirely unknown capture conditions is a fundamental challenge in computer vision and graphics. Neural approaches such as NeRF have achieved photorealistic results on novel view synthesis, but they require known camera poses. Solving this problem with unknown camera poses is highly challenging as it requires joint optimization over shape, radiance, and pose. This problem is exacerbated when the input images are captured in the wild with varying backgrounds and illuminations. Standard pose estimation techniques fail in such image collections in the wild due to very few estimated correspondences across images. Furthermore, NeRF cannot relight a scene under any illumination, as it operates on radiance (the product of reflectance and illumination). We propose a joint optimization framework to estimate the shape, BRDF, and per-image camera pose and illumination. Our method works on in-the-wild online image collections of an object and produces relightable 3D assets for several use-cases such as AR/VR. To our knowledge, our method is the first to tackle this severely unconstrained task with minimal user interaction. | Accept | This paper had notably consistent reviews. All reviews were thoughtful, and there was a consensus that this paper tackles an important problem in a way that has not been explored. While there were some weaknesses highlighted in the review process, discussion and the author rebuttal ameliorated all major concerns. Therefore I am accepting this paper. | train | [
"qGJwhSgFoLt",
"kDFqs5jNAkhb",
"gDp5CoxUFaa",
"vxG70-LEQSV",
"F4odPg3o94B",
"z2k7n3cn2ra",
"tQ8LCZ-VhFN",
"5h2e2G_girj"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewers, Thanks for your constructive feedback. We hope to have clarified most of the reviewer questions in our response. As we are nearing the end of the author-reviewer discussion period, we would like to give a gentle reminder in case you have any more questions or concerns.",
" **Comparison with noisy camera poses**: Please see the answer in the main response above. \n\n**Reconstruction quality wrt. existing works**: Please see the answer in the main response above. \n\n**Performance when poses available**: Please see the answer in the main response above. \n\n**Sampling range during optimization**: Our sampling bounds stay fixed during the optimization process at the origin of the coordinate system. We enforce that our cameras look roughly toward that origin. The cameras can move freely, and we simply calculate the intersection points of an imaginary ray towards the origin (Shown in Fig. 3 in a dashed line). The distances toward the intersections can then be transferred to the actual rays, which generates a view frustum. Therefore, we only sample in a predefined area during the optimization.\n\n**Reliability of camera pose initialization**: It can indeed occur that all candidate poses are equally bad. However, it is unlikely that all images will have such poor initialization. Therefore, we introduced an image level posterior scaling (L215-L221), where we compare the weighted losses against all other views. If all poses are bad, we reduce the backpropagation towards the network for the views. Similar to the Camera Multiplex, the camera poses can still improve unhindered. Therefore, specific images are ignored if the poses do not improve during training.\n",
" **Performance when poses available**: We do not use the poses in our method and always use the rough pose initialization. We will change the wording to clarify that our method still leverages the direction-based poses, not the GT poses. We split the datasets to enable comparison with NeRD and Neural-PIL for the datasets where poses are available.\n\n**Reconstruction quality on internet collections**: The Internet image collections provide additional challenges, as the Statue-of-Liberty scene consists of a collection of images under highly varying capture scenarios. The shots consist of drone images, images from ships, directly under the statue, from helicopters, the mainland, etc. Here, the focal lengths and distances can vary extremely starkly. For the chair scenes, the automated U2-net masking did not always include the legs, which are distinct features for pose alignment. Our SAMURAI dataset mostly captures some of these challenges (varying camera and distances) but more constrainedly.\n\n**Reconstruction quality wrt. existing works**: Please see the answer in the main response above. ",
" **No proof of COLMAP not working**: In our novel dataset and the internet image collections, we found that COLMAP fails in the correspondence matching step. No reconstruction took place. We have followed best practices and tried extensive parameter tuning for these scenes.\n\n**Need for pose annotation**: We note that objects are often symmetric in a specific plane. For example, a car is mostly symmetric left to right. When the camera is initialized on the back of the car but should be in the front, it is unlikely that a gradient-based optimization will be capable of moving the camera there as the loss would be higher for the side views. We found that happening often. Therefore, we followed recent research such as NeRS [63], which requires a rough camera direction.\n\n**On large-scale objects**: Our method is mostly designed for smaller objects. This is mainly due to the non-global illumination model. Furthermore, occlusions in large-scale objects occur, which we do not model. We leave these challenging points for future work, and the main goal for SAMURAI is tackling small to medium-sized objects. \n\n**No comparison with prior art**: We want to highlight that we compare with recent papers (GNeRF, BARF) and even provided a modified version (BARF-A), which handles varying illumination. These are state-of-the-art methods in camera and neural field reconstruction. Snavely et al. 2007 leverage correspondences, which will fail in our datasets (See **On no proof of COLMAP working**). Our method is not designed for large-scale objects (See the previous answer)\n\n**Removal of coarse network**: We indeed do not use hierarchical sampling as in NeRF. We found a general problem with two separate networks: As the camera poses are moving and the networks are fully disjointed, we noticed that instabilities in the camera optimization occur. We, therefore, only use a single network with stratified sampling.\n\n**Coarse-to-fine in ablation**: The coarse-to-fine optimization is the BARF-style Fourier annealing coupled with the resolution increase during the optimization. We will clarify that in the revision.\n\n**Achieving GT pose estimation**: Few methods arrive at the correct GT pose during optimization, and the current state-of-the-art methods in the neural field and camera pose estimation also do not achieve this. In general, the joint reconstruction of intrinsic, extrinsic, and shape estimation is highly challenging, especially when no correspondences are available (See section on COLMAP). \n\n**Leveraging initialization**: Most pose estimation methods require either a video, fixed backgrounds, correspondences, or are category specific. This is not the case for online image collections of objects.\n",
" First, we want to thank the reviewers for recognizing that our method is capable of solving “a highly under constrained problem” (eZCF, hfWC) with several technical novel contributions (eZCF, hfWC, m5j4). Each contribution shows an effect in the ablations (hfWC), and the overall method provides an obvious boost compared to the prior art (hfWC). In this general response, we address common questions by reviewers. .\n\n**Reconstruction quality w.r.t. existing works** (hfWC, m5j4): We agree that the quality is lower than prior works leveraging near GT poses (NeRD or Neural-PIL). But, we tackle a significantly more challenging problem of jointly estimating camera poses, shape reconstruction, and material decomposition. Especially in datasets where objects are located in different locations and illuminations, traditional pose reconstruction methods fail. We show the influence of noisy poses in the **Comparison with noisy camera poses** below.\n\n**Comparison with noisy camera poses** (hfWC, m5j4): Under the assumption that poses are recoverable, but the poses will be slightly noisy, we can show the performance degradation of Neural-PIL in Tab. T1. If our method leverages GT poses without optimization, Samurai (ours) obtains similar results as Neural-PIL as our method is mostly a generalization of Neural-PIL to also jointly optimize camera poses. In Tab. T1, it is clear that Neural-PIL degrades severely even under slightly noisy poses. Samurai achieves a PSNR of 23.84 dB. So even with minor noisy poses, our method outperforms Neural-PIL significantly. It is worth noting that SAMURAI starts from the rough quadrant-based poses which are not close to the GT pose. Even if our method does not achieve the same reconstruction performance as Neural-PIL with known poses - the difference is not too far off. \n\n| Translation % Error | Rotation ° Error | PSNR |\n|---------------------|------------------|-------|\n| 0 | 0 | 29.48 |\n| 0.1 | 0.1 | 20.21 |\n| 0.5 | 0.5 | 16.89 |\n| 1 | 1 | 12.58 |\n| 3 | 3 | 9.34 |\n| 5 | 5 | 0.98 |\n\n**Tab T1.** - Performance of Neural-PIL with varying inaccurate poses. SAMURAI achieves a PSNR of 23.94 dB when starting from the quadrant-based poses.\n",
" This paper uses a neural field-based approach to estimate the shape, BDRF parameters, and per-image camera pose and illumination from the in-the-wild image collections. It is the first method that is able to estimate all parameters simultaneously. Conventional neural field-based approaches require an almost-correct camera pose to estimate the shape and BDRF parameters. The proposed method replaces the need for accurate camera pose with a rough user-initialized camera pose quadrant. This paper introduces a novel in-the-wild image collection dataset in which COLMAP is hard to estimate the camera pose. Experimental results show that the proposed method achieves much better accuracy in the challenging dataset (pose not known). # Strengths\n\n- The proposed method is able to solve a highly under constrained problem with rough camera pose initialization. It estimates the BRDF parameters, unit-length surface normal, volume density, latent-per-image illumination vectors, per image camera pose and intrinsics.\n- Flexible camera parameterization for varying distances. As the proposed method targets for in-the-wild image collection, the near and far bound of conventional methods cannot be applied. Thus, the proposed method places the camera in a distance where the is visible given the field of view.\n- Camera multiplex optimization. It is challenging to optimize the camera pose due to local minima. The proposed methods optimize multiple camera poses with their corresponding weights based on the camera loss to find the best possible camera pose. \n- Posterior scaling of input images. Similar to camera multiple optimization, the proposed method also optimizes the image collection used for training. It gives different weights based on the noise level for each image.\n- The proposed method can be applied for various applications, such as AR & VR, which makes it easy for those applications to insert the real-world objects without huge efforts. \n\n\n# Weaknesses\n\n- The authors claim that standard pose estimation techniques fail on the challenging images, but there is no justification whether the proposed dataset is a challenging dataset. Proof of pose of estimation failure should be included.\n- While the conventional methods automatically estimate the camera pose with COLMAP, the proposed method requires user interaction for each dataset to roughly annotate the position. However, there is no justification why the proposed rough pose estimation is preferred. In addition, there is no ablation study of the camera pose initialization using 3 simple binary questions.\n- The intention of the proposed method is to do proof-of-concept of large-scale set objects. However, the qualitative evaluation of large-scale set objects is unavailable in the main manuscript. It is questionable whether the proposed method can be scaled to large-scale set objects. In the supplementary material, there are two online image collections, but the results are not satisfying. In addition, comparison with state-of-the-art methods is unavailable.\n - As in Modeling the World from Internet Photo Collections, the camera pose of large objects images might be estimated using correspondences.\n - In L157, the authors claim that they do not use the coarse-to-fine network. What if the proposed method use coarse network? Wouldn’t the accuracy be better? In addition, what is the coarse-to-fine optimization in the ablation study?\n- Why couldn’t the proposed method achieve GT camera pose? 
What if the proposed method utilizes deep camera pose estimation (such as Wide-Baseline Relative Camera Pose Estimation with Directional Learning) to improve the accuracy? Would it be possible?\n Yes, the authors have adequately addressed the limitations.",
" This paper proposed a method that works on in-the-wild image collections of an object to estimate the shape, BRDF, per-image camera pose, camera intrinsics and illumination in a jointly optimized framework.\nThis problem is very challenging and under constraint as the camara parameters, illuminations and even the background of the images may vary a lot across the online image collections. To my knowledge, this is the first method that aims at solving the shape, BRDF and camera poses with this challenging setup. \n\nThe paper proposed several components that based on the recent Neural-PIL-Rendering technique to construct the entire pipeline, including: an object-centric camera parametrizetion to learn the clipping planes per image; camera multiplex optimization to avoid local minima during the optimization; posterior scaling of input images to suppress the influence of corrupts images; a two-stage mesh extraction for refined mesh reconstruction. Strengths \n- The paper is well written and easy to follow. The figures, tables and videos help the understanding of the paper and identify its contributions.\n\n- The proposed camera multiplexes, although been used in mesh optimization works [22] before, is the first time being used in the context of neural volume rendering. This technique will dynamic re-weighing the each camera loss to reduce the influence of bad camera poses during the optimization of the shape and materials.\n\n- The proposed posterior scaling of input images also makes sense in the context of suppressing the influence of corrupted images. This technique can also be used in a broader application: when dealing with online image collections and images with poor quality can be dynamically re-weigh.\n\n- The evaluation of the method over real and synthetic datasets demonstrates the effectiveness of this method in recovering shape and material under unknown camera poses, changing illuminations and varying backgrounds. (From Table 1 and Table 3, Table A3)\n\n- The Ablation study in table 2 also validates the effectiveness of each component of the methods. \n\nWeaknesses\n- In table 1, the SAMURAI's performance is lower when the camera poses are available. Why?\n\n- I have a concern about the quality of the reconstructed meshes. In Figure A4, the reconstructed mesh of \"Statue of Liberty\" and \"Chair\" is inaccurate, and the texture is also blurry. \n\n- The novel view synthesis quality is very low compared to the prior \"fix-illumination\" or \"known camera pose\" methods. As can be seen in Table 1, Table 3 where the reconstructed images have much lower PSNR compared to prior works. See weaknesses part. The most of the limitations are discussed in the paper. ",
" This paper proposes a NeRF-based single-object inverse rendering pipeline where the camera intrinsics is unknown and extrinsics are coarsely initialized. The neural implicit based reflectance (BRDF) is also optimized separately for relighting/material editing. Compared with previous methods that uses camera intrinsics and extrinsics estimated from a third-party pipeline (eg. COLMAP), the proposed method is more robust in cases where COLMAP fails to estimate accurate poses due to the camera optimization module. A small dataset consisting of images captured with different background/illumination for 8 objects is collected and used in evaluation of the proposed method. The experiment results show that the proposed method outperforms the compared state-of-the-art methods that also use unposed input images. \n Strength\n\n+The joint optimization of the BRDF, occupancy represented as neural implicit functions and the camera parameters (poses and intrinsics) given unposed images captured under different illuminations/backgrounds is a challenging task. The recipes used in this submission can be of good reference for the following works that pushes the NeRF-based inverse rendering towards in-the-wild scenarios.\n\n+Compared with the similar method (BARF), the performance boost of the proposed pipeline is obvious.\n\n+A small dataset is proposed for challenging cases where the COLMAP fails (L276-L277). This dataset can be useful for testing algorithms using unposed images with varying illumination as inputs.\n\n\nWeakness\n\n-My main concern is about the limitation in the cases where the proposed method performs better than the compared method: \nOther methods, such as NeRF-W, also uses internet photos with varying background/illumination/camera intrinsics as inputs; the key difference between the work and the previous works is that this work is robust to cases where COLMAP fails to estimate good camera poses or does not work at all, versus the assumption that COLMAP works for in-the-wild dataset in other methods. Although this work has proposed a small dataset on which COLMAP is not working at all and shows that the proposed method works on this dataset, the following question is not answered: if COLMAP does estimate some noisy camera poses, can this method with optimized pose method still output perform the counterparts (either the other parts of this method + COLMAP pose, or NeRD + COLMAP pose) ? From Tab.3 of this paper, it shows that if COLMAP generates good camera pose, then the performance of the proposed method is not as good as the compared methods. This seems to reduce the application scope of this proposed method: it only works better if COLMAP does not work, which is not always the case given that COLMAP has decent robustness.\n\n-Self occlusion/shadowing is not considered. This poses a challenge for objects with concave shapes (grooves) that generate cast shadows, which may be reconstructed as albedo variations.\n\n-Based on L181-L199, the 'valid' sampling ranges for the rays are determined by the intersections between the predefined object-centric sphere and the camera rays, which are intern determined by the camera pose and intrinsics. As a result, the sampling location along the rays depends on the camera intrin/extrin. If that is the case, how the dependence is dealt with during the optimization process is not well explained. \n\n-The weighting scheme in camera multiplexes depends on the reliability of each candidate pose. 
How about the cases where none of the poses from one multiplex is reliable? Are the corresponding images ignored during reconstruction?\n Please see the questions in the weakness section. The method does not specifically handle shadows, as the authors pointed out in the paper.\n"
] | [
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"F4odPg3o94B",
"5h2e2G_girj",
"tQ8LCZ-VhFN",
"z2k7n3cn2ra",
"nips_2022_iQpaHC7cPfR",
"nips_2022_iQpaHC7cPfR",
"nips_2022_iQpaHC7cPfR",
"nips_2022_iQpaHC7cPfR"
] |
nips_2022_HjicdpP-Nth | Generalized Laplacian Eigenmaps | Graph contrastive learning attracts/disperses node representations for similar/dissimilar node pairs under some notion of similarity. It may be combined with a low-dimensional embedding of nodes to preserve intrinsic and structural properties of a graph. COLES, a recent graph contrastive method, combines traditional graph embedding and negative sampling into one framework. COLES in fact minimizes the trace difference between the within-class scatter matrix encapsulating the graph connectivity and the total scatter matrix encapsulating negative sampling. In this paper, we propose a more essential framework for graph embedding, called Generalized Laplacian EigeNmaps (GLEN), which learns a graph representation by maximizing the rank difference between the total scatter matrix and the within-class scatter matrix, resulting in the minimum class separation guarantee. However, the rank difference minimization is an NP-hard problem. Thus, we replace the trace difference that corresponds to the difference of nuclear norms by the difference of LogDet expressions, which we argue is a more accurate surrogate for the NP-hard rank difference than the trace difference. While enjoying a lesser computational cost, the difference of LogDet terms is lower-bounded by the Affine-invariant Riemannian metric (AIRM) and the Jensen-Bregman LogDet Divergence (JBLD), and upper-bounded by AIRM scaled by the factor of $\sqrt{m}$. We show that GLEN offers favourable accuracy/scalability compared to state-of-the-art baselines. | Accept | The authors provided a nice rebuttal and addressed major issues in the last round. Therefore, I recommend accepting this paper. | train | [
"m6Rf6iLflKV",
"FwuxOkxYaGi",
"W2nLudS5zlT",
"P94g2PP1S6x",
"-e2A0L0wzJP",
"7Vx19nn6VW4",
"u0ISM-i_nZx",
"BbPgi_Y85k9",
"Iimyw9XR2O1",
"0LKFYg6ifJc",
"427K1wlrPCx"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewers and the AC for their work.\n\nAs the reviewer-author discussion period is finishing in the next few hours, we just wanted to say that we are here to help should you have any additional questions.\n\nBest regards,\nAuthors.\n\n",
" # Response to Rev. 3 (3VJW)\n\n***We thank the reviewer** for the constructive review and interesting questions.*\n\n## 1. The notations are confusing and may contain errors.\nThank you and we apologize. There are indeed some errors which resulted from rushed editing. We have now carefully revised these notations and will upload revised paper shortly.\n\nIndeed, the ${\\bf S}$ matrices are of size $d\\times d$ which is the reason why our algorithm can scale up to large graphs with $n$ nodes.\n\n## 2. The motivation of Condition 1 is unclear. Why does it yield good embedding?\n\n* Thank you. Condition 1 yields a nice property described in Theorem 1: **the feature space under Condition 1 has the property of minimum separation between any two embeddings from two different classes (best minimum margin)**, which is equal to the distance between the corresponding class centers. In contrast, Linear Discriminant Analysis (LDA) strives to separate class centres and shrink within-class variance but cannot guarantee anything for pairs of embeddings. \n\n\n\n\n## 3. Why optimizing the Rank difference or the LogDet difference leads to good results?\n\nDespite Condition 1 provides us nice guarantees on the minimum separation between embeddings from two classes (best minimum margin), it is an NP-hard problem and so its solution needs relaxation. \n\nIn Resp. 2 to Rev. 1 (bffF) and Resp. 3 to Rev. 2 (Vcae), **we show that the LogDet difference formulation, in its limit case, can yield the exact Rank difference**. Under another set of parameters, **it can reverse to the Trace difference problem akin to COLES and LDA**. For this reason, **the LogDet difference can be tuned towards the guarantee in Theorem 1**.\n\n\n\n## 4. Why formulate the main problem as rank difference? Why not directly analyze the LogDet difference?\n\n* **We propose the Rank difference because we need to reformulate the NP-hard problem of Condition 1 as a measurable function that can be optimized**. \nThe LogDet difference is a great surrogate from which we can recover either the Trace difference or even the exact Rank difference (in the limit case, of course). In practice, the limit case looses smoothness and cannot be optimized, but the trade-off compromise can.\n\n* We also note that the Rank difference problem is universal. Its relaxations can yield the Trace difference (COLES, LDA) or LogDet difference (objective used in our GLEN). This facilitates the creation of a unified objective function. \n\n**We truly hope that with help of responses addressing theoretical aspects, we are able to convince reviewer about the value of our work**. We apologize for referencing responses to other reviewers but we felt it makes more sense than repeating them.",
" # Response to Rev. 2 (Vcae) (part I of II)\n\n***We thank the reviewer** for the constructive review and interesting questions.*\n\n## 1. Why do you constrain the model on GNNs? Why not conduct experiments on general datasets?\n\n* As shown in Eq.1 and 3, our method depends on an adjacency matrix ${\\bf S}_w=f\\_{\\Theta}({\\bf X})^\\top{\\bf L}\\_wf\\_{\\Theta}({\\bf X})$ (not just for a GCN encoder) or some notion of label information (one-hot label vectors can be easily used to form an adjacent matrix) for the within-class scatter matrix. GLEN also requires negative sampling realized by the randomly formed negative graph. Kindly see Resp. 1 to Rev. 1 (bffF) to see how randomized $k$-regular graphs form our negative ${\\bf L}_t$.\n\n* In unsupervised representation learning for GNNs, an adjacency matrix is readily available, and it can be utilized according to the SampledNCE framework to form positive node pairs in contrastive learning ${\\bf L}\\_w$. Thus, GLEN is not a generic contrastive learning framework. \n\n* As $f_\\Theta$ can be any neural network (or some projection technique), in our experiment we demonstrate GLEN with S$^2$GC and GCN backbones. \n\n* **Below we include an interesting setting of transductive one-shot learning** (images+CNN backbone) where negative graph is also based on fully-connected graph. **EASE (CVPR'22)** minimizes $\\text{Tr}(\\mathbf{U}\\mathbf{X}^\\top\\mathbf{L}\\_w \\mathbf{X}\\mathbf{U}^\\top)-\\text{Tr}(\\mathbf{U}\\mathbf{X}^\\top \\mathbf{L}\\_t \\mathbf{X}\\mathbf{U}^\\top), \\\\;\\text{s.t.}\\\\; \\mathbf{U}\\mathbf{U}^\\top=\\mathbf{I}$ for learning some linear projection $\\mathbf{U}$. \n\n **We extend GLEN to the EASE pipeline** to learn the linear projection $\\mathbf{U}$ by minimizing $\\text{LogDet}(\\mathbf{U}\\mathbf{X}^\\top\\mathbf{L}\\_w \\mathbf{X}\\mathbf{U}^\\top)-\\text{LogDet}(\\mathbf{U}\\mathbf{X}^\\top \\mathbf{L}\\_t \\mathbf{X}\\mathbf{U}^\\top), \\\\; \\text{s.t.}\\\\; \\mathbf{U}\\mathbf{U}^\\top=\\mathbf{I}$ (or we use $S_p$ norm instead of LogDet) and we achieve the following results:\n\n | |miniImagenet|tieredImagenet|CIFAR-FS|CUB|\n |-|-|-|-|-|\n |EASE (CVPR2022)| 58.2±0.19|70.9±0.21|65.2±0.21|77.7±0.19|\n |EASE GLEN ($S\\_p$ norm)| 60.5±0.23|74.8±0.25|67.8±0.25|81.5±0.25|\n |**EASE GLEN (LogDet)**|**61.4±0.23**|**76.4±0.25**|**69.2±0.25**|**83.4±0.25**|\n\n Kindly note for the simplicity of ablation, we have used the soft k-means rather than Sinkhorn k-means in the EASE pipeline. The backbone used is ResNet-12 but we are more than happy to supply more backbones if the reviewer would like that (kindly let us know).\n\n\n\n## 2. Is there a difference between `Calibrated Multi-Task Learning, SIGKDD, 2018' and GLEN? \n\nThank you for sharing the above paper (we will cite it accordingly). We sincerely think GLEN is novel. The paper pointed by the reviewer uses an MSE loss combined with LogDet based regularizer for the application of multi-task learning (nothing to do `per se' with the Rank difference problem explored by us or contrastive learning), as detailed below.\n\n* Kindly note that our GLEN is generalizing the SampledNCE framework to the matrix form under general pooling operator $\\phi(\\cdot)$ as detailed in Resp. 1 to Rev. 1 (bffF). From Condition 1 (main paper), we arrive at **Theorem 1 which gives us a guarantee on the minimum separation of any two embeddings from two different classes (best minimum margin which is our target)**. 
Our general pooling operator is $\phi({\bf M})=\text{Rank}({\bf M})$ because it can be arbitrarily well approximated by a number of various norms, resulting in different models, e.g., the nuclear norm (COLES), LogDet (GLEN), the $\gamma$-nuclear norm, the $S\_p$ norm, the Geman norm, etc. \n\n* In Resp. 2 to Rev. 1 (bffF) we also show that the **LogDet formulation can recover the Trace formulation (COLES) or even converge to our proposed Rank difference model** in Eq. 3 (main paper). This is one of the reasons why we choose LogDet as a versatile operator. In Resp. 6 to Rev. 1 (bffF) we also explain how, **in the limit, the difference of LogDet operators yields approximation error $\Delta\epsilon=0$**. We also show how **the LogDet difference is lower- and upper-bounded by the Affine Invariant Riemannian Metric (AIRM)**, which relates the LogDet difference to non-Euclidean distances, e.g., AIRM, which are, however, notoriously numerically unstable/almost intractable to backpropagate through in an end-to-end model such as ours.\n\n\n* Kindly note that **the Rank Minimization Problem (RMP) is a well-known classic NP-hard problem**. RMP arises in diverse areas such as control, system identification, statistics, signal processing, and computational geometry. In our paper, we cited [8], which proposes the LogDet heuristic for RMP. Notice we show in Resp. 2 to Rev. 1 (bffF) how to recover from the LogDet formulation the nuclear norm (trace) and the exact rank.\n\n > [8] Maryam Fazel, Haitham Hindi, and Stephen P Boyd. Log-det heuristic for matrix rank minimization with applications to Hankel and Euclidean distance matrices.",
" # Response to Rev. 2 (Vcae) (part II of II)\n\n## 3. Major concern is the surrogate may be not novel. \n\nThank you. Kindly note that we do not claim that LogDet surrogate of Rank minimization is our contribution.\n\nApart from reasons described in above responses, we choose LogDet due to its stably during backpropagation, smoothness and ability to recover several other models (Trace for COLES, or even the exact Rank in the limit). Important is also that **thanks to Condition 1 (main paper) we can strive for the exact separation guarantee between any two embeddings of two different classes (best minimum margin), and we can drop the orthogonality constraint on Laplacian eigenmaps embedding**, as detailed in Resp. 1 to Rev. 1 (bffF).\n\nIn fact, even compared with just the LogDet surrogate of Rank from `Calibrated Multi-Task Learning, SIGKDD, 2018', if applied to our Rank difference problem, the Rank difference error between just two matching eigenvalues $\\sigma\\_i$ and $\\sigma'\\_i$ of matrices ${\\bf A}$ and ${\\bf B}$ would be:\n$$\n\\Delta\\epsilon=\\log(\\sigma\\_i +1)-\\log(\\sigma'\\_i+1),\n$$\nwhich indicates that for large eigenvaleus and large gap $\\|\\sigma\\_i-\\sigma'\\_i\\|$ , the error is large. \n\n**In our case, in the limit case, our formulation enjoys the exact difference of Ranks** (use $\\gamma=1$):\n$$\n\\Delta\\epsilon=\\lim_{\\alpha\\rightarrow\\infty}\\frac{1}{\\log(\\alpha+\\gamma)}(\\log(\\alpha\\sigma\\_i +\\gamma)-\\log(\\alpha\\sigma'\\_i+\\gamma))=0,\n$$\n\n\n\n\n## 4. Provide experiments under the common settings of Cora, Citeseer, PubMed, instead of the random splits.\n\n| | Cora | Citeseer | Pubmed |\n|---------------------------|-----------|-----------|--------------|\n|DeepWalk+F | 77.36 | 64.30 | 69.65 | \n|Node2vec+F | 75.44 | 63.22 | 70.6 | \n|GAE | 73.68 | 58.21 | 76.16 | \n|VGAE | 77.44 | 59.53 | 78.00 | \n|DGI | 81.26 | 69.50 | 77.70 | \n|GRACE | 80.46 | 68.72 | 80.67 | \n|GraphCL | 81.89 | 68.40 | OOM |\n| GMI | 80.28| 65.99| OOM |\n| GLEN-S$^2$GC | **85.1** |**71.9** | 80.72 | \n\nOOM means out-of-memory error on Nvidia RTX 11GB.\n\nKindly note that our method does not include any graph augmentations or multi-view learning which are orthogonal and complementary directions to ours. GLEN simply uses the adjacency to capture similar node pars, and the random fully-connected dense graph for negative sampling.\n\n## 5. No source code is provided so that it may limit the reproducibility.\n\nThank you, **we will of course release the full code** in due course. In the supplementary material, **we have updated now a simple demo code** for Cora and provided logs on other larger datasets.\n\n## 6. Some typos.\nThank you very much for pointing out these typos. We have now revised them accordingly. We plan to revise the paper and upload to the system within one week.\n\n**We truly hope that the above clarifications are able to convince reviewer about the value of our work.**",
" # Response to Rev. 1 (bffF) (part I of IV)\n\n*Firstly, **we thank the reviewer** for the constructive review and valuable questions.*\n\n## 1. What is the relationship between GLEN and contrastive learning?\n\n* COLES [45] extends Laplacian Eigenmaps, $\\min_{{\\bf X}, s.t. \\Omega({\\bf X})} {W}^+\\_{ij}\\||{\\bf x}\\_i-{\\bf x}\\_j\\||\\_2\\^2$ where ${\\bf X}=[{\\bf x}\\_1,\\cdots,{\\bf x}\\_n]$ and $\\Omega({\\bf X})$ are constraints (i.e., orthogonality) by expanding the SampledNCE formulation $\\mathbb{E}\\_{i \\sim p\\_{d}}\\left[ \\mathbb{E}\\_{j \\sim p\\_{d}(j \\mid i)} [s\\_{\\Theta}(x\\_i, x\\_j)] + \\eta\\\\, \\mathbb{E}\\_{j^{\\prime} \\sim p\\_{n}\\left(j^{\\prime} \\mid i\\right)} [\\tilde{s}\\_{\\Theta}(x\\_i, x\\_{j'})]\\right]$. Symbols $p\\_n$ and $p\\_d$ are negative/positive sampling distributions, $s\\_{\\Theta}(v, u) = \\log\\exp({\\bf u}^{\\top} {\\bf v})={\\bf u}^{\\top} {\\bf v}$ and $\\tilde{s}_{\\Theta}(v, u')=\\log\\exp(-{\\bf u}'^\\top{\\bf v})=-{\\bf u}'^\\top{\\bf v}$ are similarity measures, whereas $\\eta\\geq 0$ controls the impact of negative sampling.\n\n* **GLEN generalizes SampledNCE**, a framework for contrastive learning with positive and negative sampling, which relies on two terms: $\\mathbb{E}\\_{v \\sim p\\_{d}(v)}\\left[\\mathbb{E}\\_{u \\sim p\\_{d}(u \\mid v)} s({\\bf u}, {\\bf v})\\right]$ and $\\eta\\\\,\\mathbb{E}\\_{v \\sim p\\_{d}(v)}\\left[\\mathbb{E}\\_{u'\\sim p\\_{n}\\left(u'\\mid v\\right)} \\tilde{s}({\\bf u}',{\\bf v})\\right]$. \n\n The above two terms are evaluated over two different distributions $u \\sim p_{d}(u \\mid v)$ (nodes $u$ from the adjacency matrix) and $u'\\sim p_{n}\\left(u'\\mid v\\right)$ (nodes $u'$ from random negative adjacency matrix). \n\n Take positive sampling term (negative sampling term can be expanded in the similar way). Let $p\\_{d}(v) = \\frac{1}{\\sqrt{D\\_{vv}}}$ and $p_{d}(u \\mid v) = \\frac{\\hat{W}\\_{uv}}{\\sqrt{D\\_{uu}}}$ where $\\hat{{\\bf W}}$ is an unnormalized adjacency matrix and ${\\bf D}$ is its degree matrix. Let ${\\bf W}$ be the degree normalized matrix. Notice $u$ and $v$ are indexes of embeddings ${\\bf u}$ and ${\\bf v}$. Let $s$ be as in COLES. Then:\n $$\n \\mathbb{E}\\_{v \\sim p\\_{d}(v)}[\\mathbb{E}\\_{u \\sim p\\_{d}(u \\mid v)}\\\\,s({\\bf u},{\\bf v})] =\\sum\\_{u, v} {W}_{uv}\\\\,s({\\bf u},{\\bf v}) =\\sum\\_{i=1}^{d} \\sum\\_{u, v} {W}\\_{uv}u_i v_i= \\phi({\\bf X}^\\top{\\bf W}{\\bf X}), \n $$\n where $\\phi(\\cdot)$ is a pooling function, i.e., $\\phi({\\bf M})=\\text{Tr}({\\bf M})$ yields COLES:\n $$\n \\sum\\_{i=1}^{d}\\sum\\_{j=1}^{d}\\delta(i-j)\\phi({\\bf x}\\_i^\\top{\\bf W}{\\bf x}\\_j)=\\sum\\_{i=1}^{d}\\phi({\\bf x}\\_i^\\top{\\bf W}{\\bf x}\\_i) \\quad \\text{if}\\quad {\\bf x}\\_i\\perp{\\bf x}\\_j\\\\;\\text{for}\\\\; i\\neq j,\n $$\n where ${\\bf x}\\_i\\perp{\\bf x}\\_j\\$ imposes orthogonality constraints of Laplacian eigenmaps and $\\delta(z)=1$ if $z=0$ and $\\delta(z)=0$ if $z\\neq0$. Finally ,think that rows of ${\\bf X}$ contain all ${\\bf u}$ (and ${\\bf v}$).\n\n **We let the pooling operator $\\phi(\\cdot)$ operate on the entire spectrum under general aggregation scheme. 
A very general operator is $\phi({\bf M})=\text{Rank}({\bf M})$, from which we can recover the original Trace (nuclear norm) of COLES or the LogDet of GLEN, as well as the $\gamma$-nuclear, $S\_p$, and Geman norms.**\n\n* COLES uses the following expression based on SampledNCE: $\min_{{\bf X}} \sum_{ij} {W}^+\_{ij}\||{\bf x}\_i-{\bf x}\_j\||\_2\^2- (\frac{\eta}{\kappa}\sum\_{l=1}\^\kappa{W}^{l,-}\_{ij})\||{\bf x}\_i-{\bf x}\_j\||\_2\^2 = \max_{{\bf X}}\text{Tr}({\bf X}^\top{\bf L}\_t{\bf X})-\text{Tr}({\bf X}^\top {\bf L}\_w{\bf X})$, where ${\bf W}^+$ is a normalized adjacency matrix and ${\bf W}^{l,-}$ are $\kappa$ normalized randomized $k$-regular graphs (adjacency matrices), while ${\bf L}\_w$ and ${\bf L}\_t$ are the corresponding Laplacian matrices. \n\n* **Negative random sampling is represented by** ${\bf W}^{-}$, e.g., a randomized $k$-regular graph or several such graphs.\n\n* If we sample $\kappa\rightarrow\infty$ randomized $k$-regular graphs (adjacency matrices of size $n\times n$, where each row receives a 1 with probability $k/n$), **the expectation of the randomized graph (adjacency matrix) is** $\mathbb{E}[{\bf W}^-] =\lim_{\kappa\rightarrow\infty}\frac{1}{\kappa}\sum\_{l=1}\^\kappa{{\bf W}}^{l,-}= \frac{k}{n}{\bf 1}{\bf 1}^\top$, which by itself is a fully-connected graph with the graph Laplacian ${\bf L}\_t={\bf I}-\frac{k}{n}{\bf 1}{\bf 1}^\top$.\n\n* **We simply set $k=1$ to use $1$-regular graphs for negative sampling**, so ${\bf L}\_t={\bf I}-\frac{1}{n}{\bf 1}{\bf 1}^\top$. Thus, our contrastive term is equivalent to the total scatter matrix ${\bf S}_t$ known from Linear Discriminant Analysis, i.e., ${\bf S}_t={\bf X}^\top({\bf I}-\frac{1}{n}{\bf 1}{\bf 1}^\top){\bf X}={\bf X}^\top {\bf L}\_t{\bf X}$. The positive sampling is encoded by the graph Laplacian ${\bf L}\_w$ of the adjacency matrix.\n\n* Thus, our GLEN is given as:\n  $$\n  \max_{{\bf X}} \text{Rank}({\bf X}^\top {\bf L}\_t{\bf X})-\text{Rank}({\bf X}^\top {\bf L}\_w{\bf X})\n  $$",
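The claimed expectation $\mathbb{E}[{\bf W}^-]=\frac{k}{n}{\bf 1}{\bf 1}^\top$ and the resulting negative Laplacian ${\bf L}_t$ are easy to spot-check. A rough Monte-Carlo sketch, with i.i.d. Bernoulli($k/n$) entries as our simplification of the randomized $k$-regular construction described above:

```python
import numpy as np

n, k, kappa = 100, 1, 20000
rng = np.random.default_rng(0)
W_mean = np.zeros((n, n))
for _ in range(kappa):
    # One negative-graph sample: each entry is 1 with probability k/n.
    W_mean += (rng.random((n, n)) < k / n) / kappa
print(np.abs(W_mean - k / n).max())      # ~0: expectation is (k/n) 11^T

L_t = np.eye(n) - np.ones((n, n)) / n    # limit Laplacian for k = 1
# X^T L_t X then equals the total scatter matrix S_t used by GLEN.
```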
" # Response to Rev. 1 (bffF) (part II of IV)\n\n## 2. Compared with the trace model in COLES, the generalization of the proposed Rank difference framework is not illustrated clearly. The author should give proof how the Trace model can be generalized into the Rank model as a special case. \n\n\n* Thank you. In Sec. 5.1 (Eq. 6), we discuss that the nuclear norm $||\\cdot||\\_*$ used by COLES can be regarded as the $\\mathcal{l}\\_1$ norm over singular values. \n\n* Below we demonstrate the relationship among the LogDet, Trace and Rank operators, respectively, under the Schatten norm framework. Essential is the following family of objective functions,\n$$\nf_{\\alpha,\\gamma}(\\mathbf{S})=\\frac{1}{c}\\sum_{i=1}^{d}\\log \\left(\\alpha \\sigma_{i}(\\mathbf{S})+\\gamma\\right)=\\log \\text{det} \\left(\\alpha\\mathbf{S}+\\gamma I\\right), \\quad \\alpha, \\gamma \\geq 0,\n$$\nwhere $\\sigma_{i}(\\mathbf{S}), i=1, \\ldots, d$, are the eigenvalues of either $\\mathbf{S}\\_t \\in \\mathbb{S}\\_+^{d}$ or $\\mathbf{S}\\_w \\in \\mathbb{S}\\_+^{d}$, which are the total scatter matrix and the within scatter matrix from our experiments, respectively. Moreover, $\\mathbb{S}\\_+^{d}$ is a set of symmetric (semi)definite positive matrices of size $d\\times d$ and we define a normalization constant $c$, that is, $c=1$ or $c=\\log(\\alpha+\\gamma)$ as detailed below.\n\n* The relationship between our LogDet function and the Schatten norm is:\n$$\n\\lim_{p \\rightarrow 0} \\frac{S^p_{\\gamma, p}(\\mathbf{S})-d}{p}= f_{1,\\gamma}(\\mathbf{S}), \\quad \\text{where} \\quad S_{\\gamma, p}(X)=\\left(\\sum_{i=1}^{d}\\left(\\sigma_{i}(\\mathbf{S})+\\gamma\\right)^{p})\\right)^{1/p}, %, \\quad 0<p \\leq 1.\n$$\nwhere $c=1$. \n\n* From the asymptotic analysis, **we can conclude that the LogDet is arbitrarily accurate rational approximation** of $\\mathcal{l}_0$ (the so-called pseudo-norm counting non-zero elements) over the eigenvalues of $\\mathbf{S}$. \n\n* The **case $p=1$ yields the nuclear norm (Trace) which makes the `smoothed' rank difference of GLEN become equivalent of COLES. The opposing limit case, denoted as $p=0$ recovers LogDet formula**.\n\n* **One can also recover the exact Rank** from the LogDet formulation by:\n$$\n\\lim_{\\alpha \\rightarrow \\infty} f_{\\alpha,1}(\\mathbf{S})=\\text{Rank}(\\mathbf{S}) \\quad \\text{if} \\quad c=\\log(1+\\alpha).\n$$\nThis is apparent because:\n$$\n\\lim_{\\alpha \\rightarrow \\infty} \\frac{\\log(1+\\alpha\\sigma_i)}{\\log(1+\\alpha)} =1 \\quad \\text{if} \\quad \\sigma_i>0 \\quad \\text{and} \\quad \\lim_{\\alpha \\rightarrow \\infty} \\frac{\\log(1+\\alpha\\sigma_i)}{\\log(1+\\alpha)} =0 \\quad \\text{if} \\quad \\sigma_i=0. \n$$\n\n\n\n## 3. Does the LogDet model still maintain the generalization property?\n\nYes, if we understood correctly the reviewer's question. Kindly **see Resp. 2, where we show how to recover the Trace based model, and the Rank based model from the LogDet formulation**. Kindly also note we propose in fact the Rank formulation (see Condition 1) as the most general case because in **Theorem 1 (main paper), we offer the expression for the minimum separation between any two embeddings from two different classes (best minimum margin)**. As LogDet model can approach Rank model, in theory, this is the limit on the best separation it can achieve.\n\n",
" # Response to Rev. 1 (bffF) (part III of IV)\n\n## 4. Are there other cases except COLES that can be generalized to GLEN?\n\nFirstly, allow us highlight our contributions from three different perspectives.\n\n1. We define a measurable condition $\\text{Rank}(S_t)=\\text{Rank}(S_w)+\\text{Rank}(S_b)$ for a class of embedding spaces. Under this condition, our Theorem 1 (main paper) provides a target for the minimum separation between any two embeddings from two different classes (best minimum margin).\n\n2. For this condition, we design an unconstrained objective function: we maximize the rank difference between $\\text{Rank}(S_t)$ and $\\text{Rank}(S_w)$. We choose the rank difference as a variety of solutions can be recovered from it, e.g., the difference of the nuclear norms (COLES), the difference between LogDet expressions, the difference of $\\gamma$-nuclear norms, the difference of $S\\_p$ norms, and the difference of Geman norms.\n \n Our optimization problem applies to Laplacian Eigenmaps, Contrastive Laplacian Eigenmaps, Linear Discriminant Analysis and is a matrix-form generalization of the SampledNCE framework, as explained in Resp. 1 above. Thus, our approach is versatile. \n\n3. Notice that the rank difference is a difficult NP-hard problem. Inspired by the approximation of rank minimization, we choose LogDet difference as a versatile surrogate of the rank difference as it lets recover the trace difference (COLES) and rank difference (GLEN), depending on parameters, as explained in Resp. 2 above. LogDet is differentiable and can approximate the rank with an arbitrary accuracy.\n\n\n4. Below we show how we can redefine Local Preserving Projection (LPP) and Deep Spectral Clustering (DSC) within the COLES and GLEN frameworks. All models below are based on the S$^2$GC backbone.\n\n * DSC is extension of `Deep Spectral Clustering Learning', ICML'17, by minimizing $\\text{Tr}(f_\\Theta(\\mathbf{X})^\\top\\mathbf{L}\\_w f_\\Theta(\\mathbf{X}))$ where $f_\\Theta$ is a two-layer neural network (MLP). Kindly note this is non-contrastive learning that only uses $\\mathbf{L}\\_w$.\n\n * LPP is extension of `Locality Preserving Projections', NeurIPS'03, that learns an orthogonal linear projection by minimizing $\\text{Tr}(\\mathbf{U}\\mathbf{X}^\\top\\mathbf{L}\\_w \\mathbf{X}\\mathbf{U}^\\top))$. 
Kindly note this is non-contrastive learning that only uses $\mathbf{L}\_w$.\n\n   * We define COLES-LPP as minimizing $\text{Tr}(\mathbf{U}\mathbf{X}^\top\mathbf{L}_t \mathbf{X}\mathbf{U}^\top)-\text{Tr}(\mathbf{U}\mathbf{X}^\top\mathbf{L}_w \mathbf{X}\mathbf{U}^\top)$.\n\n   * COLES* minimizes $\text{Tr}(f_\Theta(\mathbf{X})^\top\mathbf{L}\_t f_\Theta(\mathbf{X}))-\text{Tr}(f_\Theta(\mathbf{X})^\top\mathbf{L}\_w f_\Theta(\mathbf{X}))$, where * means that two MLP layers (as in the DSC model above) are added to the parameters $\Theta$ between the S$^2$GC backbone and the loss.\n\n   * GLEN-LPP is defined as $\text{rank}(\mathbf{U}\mathbf{X}^\top\mathbf{L}_t \mathbf{X}\mathbf{U}^\top)-\text{rank}(\mathbf{U}\mathbf{X}^\top\mathbf{L}_w \mathbf{X}\mathbf{U}^\top)$.\n\n   * GLEN, i.e., $\text{rank}(f_\Theta(\mathbf{X})^\top\mathbf{L}\_t f_\Theta(\mathbf{X}))-\text{rank}(f_\Theta(\mathbf{X})^\top\mathbf{L}\_w f_\Theta(\mathbf{X}))$, also uses two MLP layers, as in the DSC model.\n\n   * Below are the results:\n   | | Cora (5) | Cora (20) | Citeseer (5) | Citeseer (20) | Pubmed (5) | Pubmed (20) | Cora-full (5) | Cora-full (20) | \n   |---------------------------|-----------|-----------|--------------|---------------|-----------|-----------|--------------|---------------|\n   |S$^2$GC | 71.4±4.4 | 81.3±1.2 | 60.3±4.0 | 69.5±1.2 | 67.6±4.2 | 73.3±2.0 | 41.8±1.7 | 60.0±0.5 |\n   | LPP | 34.5±1.6 | 54.4±1.5 | 30.5±1.4 | 42.3±1.5 | 39.4±5.3 | 43.9±4.7 | 50.8±1.4 | 61.8±0.5 |\n   | DSC | 32.5±3.9 | 53.4±4.6 | 37.2±4.0 | 48.24±3.0 | 40.0±5.6 | 39.2±5.6 | 50.04±0.0 | 60.0±1.0 |\n   | COLES-LPP | 75.0±3.4 | 81.0±1.3 | 67.9±2.3 | 71.7±0.9 | 62.6±5.0 | 73.2±2.6 | 47.6±1.2 | 59.2±0.5 |\n   | COLES* | 73.7±3.0 | 80.4±1.0 | 67.4±2.0 | 71.9±0.9 | 60.3±6.0 | 65.9±1.7 | 23.0±1.4 | 38.3±1.1|\n   | GLEN-LPP | 75.3±3.6 | 82.6±1.2 | 65.9±2.7 | 71.5±1.0 | 68.9±3.9 | 78.4±2.1 | 51.4±1.4 | 62.0±0.6 |\n   | GLEN | **78.2±2.4** | **83.0±1.0** | **69.1±2.1** | **72.3±0.9** | **70.6±3.9** | **80.1±1.9** | **53.0±1.5** | **62.6±0.5**|\n\n   * Although LPP is a dimensionality reduction method, it significantly weakens the performance of S$^2$GC. In DSC, performance is further degraded due to the MLP. \n\n   * The contrastive term helps COLES achieve better results than the baseline S$^2$GC. However, COLES with MLP (COLES*) loses performance compared with COLES-LPP, e.g., on Cora-full.\n   * In contrast, GLEN-LPP and especially GLEN (which includes the MLP) work better than the corresponding competitors, e.g., COLES and COLES*.\n\n",
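For concreteness, a minimal PyTorch sketch of the LogDet surrogate used by the GLEN variants above. The $\gamma\mathbf{I}$ term is our assumption to keep both scatter matrices positive definite; the sign follows the GLEN objective in Resp. 1 (maximize the $\mathbf{L}_t$ term minus the $\mathbf{L}_w$ term), returned as a loss to be minimized.

```python
import torch

def glen_logdet_loss(Z, L_w, L_t, gamma=1.0):
    # Z: (n, d) embeddings f_Theta(X); L_w / L_t: (n, n) Laplacians.
    # Minimizing this maximizes logdet(S_t) - logdet(S_w), the LogDet
    # surrogate of the rank-difference objective.
    I = gamma * torch.eye(Z.shape[1], device=Z.device)
    return (torch.logdet(Z.T @ L_w @ Z + I)
            - torch.logdet(Z.T @ L_t @ Z + I))
```

`torch.logdet` is differentiable, so the loss backpropagates into any encoder producing `Z` (e.g., an S$^2$GC or GCN backbone).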
" \n# Response to Rev. 1 (bffF) (part IV of IV)\n\n## 5. Compare the LogDet model with other surrogates of the Rank problem.\n\nThis is indeed a very interesting evaluation to perform. To this end, we choose four different surrogates of $\\text{Rank}(\\mathbf{S})$:\n* Nuclear norm $R_{N}(\\mathbf{S})=\\sum_i \\sigma_i(\\mathbf{S})$\n* $\\gamma$-nuclear norm $R_{\\gamma\\\\,NN}=\\sum_i\\frac{(1+\\gamma)\\sigma_i(\\mathbf{S})}{γ+\\sigma_i(\\mathbf{S})}$\n* $S\\_p$ norm $R_{S_p\\\\,norm} = \\sum_i\\sigma_i(\\mathbf{S})^{p}$\n* Geman norm $R_{Geman}=\\sum_i\\frac{\\sigma_i(\\mathbf{S})}{γ+\\sigma_i(\\mathbf{S})}$\n\nBelow are results on different specific surrogates:\n\n| | Cora (5) | Cora (20) | Citeseer (5) | Citeseer (20) | Pubmed (5) | Pubmed (20) | Cora-full (5) | Cora-full (20) | \n|---------------------------|-----------|-----------|--------------|---------------|-----------|-----------|--------------|---------------|\n| GLEN (nuclear norm) | 76.5±2.6 | 81.5±1.2 | 67.5±2.2 | 71.3±1.0 | 66.0±5.2 | 77.4±1.9 | 50.8±1.4 | 61.8±0.5 |\n| GLEN ($\\gamma$ nuclear) | 68.2±3.2 | 80.9±1.3 | 65.8±2.4 | 70.9±1.0 | 67.6±8.1 | 74.4±3.9 | 49.9±4.1 | 57.0±1.0 |\n|GLEN ($S\\_p$ norm) | 78.0±2.3 | 82.9±1.1 | 67.4±1.9 | 71.7.±1.0 | 62.0±5.7 | 74.9±2.9 | 49.9±1.5 | 60.0±1.6 |\n| GLEN (Geman norm) | 65.8±3.4 | 80.1±1.3 | 64.0±2.8 | 70.6.±1.0 | 57.9±5.0 | 67.5±5.6 | 45.1±3.0 | 57.9±1.4 |\n| GLEN (LogDet) | **78.2±2.4** | **83.0±1.0** | **69.1±2.1** | **72.3±0.9** | **70.6±3.9** | **80.1±1.9** | **53.0±1.5** | **62.6±0.5** | \n\nFrom the table we can conclude that $S\\_p$ norm is an interesting approximation of the rank problem. However, on balance, LogDet has been consistently the best performer.\n\n\n\n\n## 6. Is the LogDet used to solve the Rank problem the original work of the paper? If not, list the references. \n\n* Thank you. The Rank difference and the LogDet difference emerging from our SampledNCE derivations are original, together with Theorem 1. In Resp. 2 we also show that LogDet can indeed approximate Rank with a desired accuracy. The better the approximation, the closer the guarantees hold. As LogDet is typically upper bounded by the Trace [8], this suggests LogDet is closer to fulfilling Theorem 1 than the Trace problem (and COLES). \n\n* Kindly note that we do not claim that LogDet and its association to the rank approximation are our contributions. To that end, we have cited the paper which approximates the rank by LogDet. See [8]. We will make it clearer. \n\n >[8]. Maryam Fazel, Haitham Hindi, and Stephen P Boyd. Log-det heuristic for matrix rank mini336 mization with applications to hankel and euclidean distance matrices. In Proceedings of the 337 2003 American Control Conference, 2003., volume 3, pages 2156–2162. IEEE, 2003.\n\n* However, approximating the Rank difference and the LogDet difference is a new problem. To that end we have provided:\n * Proposition 5 (main paper): it shows that the difference of two LogDet terms is lower-bounded by the identity regularized Affine Invariant Riemannian Metric (AIRM) and upper-bounded by $\\sqrt{d}$ times AIRM ($d$ is the side size of square matrix). This indicates the relation of the LogDet difference and well-established AIRM for symmetric positive definite matrices.\n * Below we also provide a theoretical analysis of the approximation error for the LogDet difference. 
This analysis is important as LogDet is generally unbounded (no finite $\tau\ll\infty$ if eigenvalues are unbounded), i.e., $\lim_{\sigma\_i\rightarrow\infty}\frac{1}{c}\log(\alpha\sigma\_i+\gamma)\rightarrow\infty$ for $c=\log(\alpha+\gamma)$ and $1<\alpha+\gamma\ll\infty$. Without loss of generality, let $\gamma=1$; the smallest error as $\alpha\rightarrow\infty$ is $\Delta\epsilon=\lim_{\alpha\rightarrow\infty}\frac{1}{c}(\log(\alpha\sigma\_i +1)-\log(\alpha\sigma'\_i+1))=0$.\n\n\n\n## 7. Proof of Proposition 1\nThank you. We had indeed omitted it from the main paper and have added it in the revision.\n\nThe proof follows from the equality $\det(\mathbf{I} + \alpha\mathbf{X}) = \prod_i\sigma_i(\mathbf{I} + \alpha\mathbf{X}) = \prod_i(1 + \alpha\sigma_i(\mathbf{X})) = \det(\mathbf{I} + \alpha\text{Eig}(\mathbf{X}))$, where $\text{Eig}(\cdot)$ is the diagonal matrix with $\sigma_1,\ldots,\sigma_d$ on its diagonal. Thus: \n\n$\delta_{rf}(\mathbf{X},\mathbf{X}';\alpha) = \log\det(\mathbf{I}+\alpha\mathbf{X})-\lambda\log\det(\mathbf{I} + \alpha\mathbf{X}') = $ \n\n$\log\det(\mathbf{I}+\alpha\text{Eig}(\mathbf{X}))-\lambda\log\det(\mathbf{I} +\alpha\text{Eig}(\mathbf{X}'))=\delta_{rf}(\text{Eig}(\mathbf{X}),\text{Eig}(\mathbf{X}');\alpha)$.",
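The determinant identity underlying the proof can be spot-checked numerically; a quick sketch on a random PSD matrix of our choosing:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
X = A @ A.T                              # random symmetric PSD matrix
alpha = 2.5
lhs = np.linalg.slogdet(np.eye(6) + alpha * X)[1]
rhs = np.sum(np.log(1 + alpha * np.linalg.eigvalsh(X)))
print(lhs, rhs)   # equal up to float error: det(I+aX) = prod(1 + a*sigma_i)
```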
" This paper proposes a novel graph-embedding framework, which is a rank difference model. This rank model is NP-hard to solve, so the authors optimize the loss formulation by means of the logdet. The transformed model is then solvable and proven to be theoretically effective. The paper also offers connection between the given model and other graph embedding methods, and calculate the upper and lower bound of the proposed loss function. In total, the paper is well written and theoretically innovative. Strength\n1. The COLES can be the special case for the proposed GLEN framework, and the GLEN outperforms the COLES.\n2. The theoretical analysis gives the upper and lower bound of the proposed GLEN, which makes the model have good interpretation.\n3. The experiments are effective to demonstrate the proposed framework does have good performance.\n\nWeakness\n1. It is not clear about the relationship between the proposed model and the contrastive learning, although the contrastive learning is introduced in related work.\n2. Compared with the trace model in COLES, the generalization of the proposed rank difference framework is not illustrated clearly. The author should give proof how the trace model can be generalized into the rank model as a special case. \n3. Although GLEN is called the generalized Laplacian Eigenmaps framework, the paper shows no other cases that can also be generalized to GLEN except COLES.\n4. The paper does not compare the logdet model with other methods that can serve as a surrogate of the rank problem. There are many methods can be used to solve the problem at present, so the authors should list and compare these methods and explain why the logdet is chosen. \n5. The logdet terms are chosen to solve the rank model, which actually transfers the rank model into a logdet model. Does the logdet model still maintain the generalization property? If so, please give the proof.\n 1. Can authors give the proof for the Proposition 1? This proposition lacks the necessary proof.\n2. Are there any other cases except COLES that can be generalized to GLEN? If so, please supply examples of these cases. Otherwise, what’s the meaning of the ‘generalization’ of GLEN? \n3. Is the logdet to solve the rank problem the original work of the paper? If not, listing the references is necessary. Does the logdet framework still hold the generalization property?If so, please give the proof.\n4. Is there any other effective methods to solve the rank difference problem? The authors should compare these methods and explain the reason why logdet is chosen. \n The generalization of the proposed framework is not clearly described, and many other cases need to supply except COLES. Besides, there are other methods to solve the rank difference problem except using logdet. The paper does not list and compare these methods. ",
" This paper proposes a new unsupervised representation learning method, mainly based on GNN. \n\nThe idea is motivated by the scatter matrices that are usually used in LDA. Based on the fact that the features would be discriminative provided that Condition 1 holds, the model aims to maximize rank($S_w$) and minimize rank($S_b$) simultaneously, which is different from the losses of the popular contrastive learning. \n\nIn Section 3, the authors show that the equivalence between the specific two-layer (featureless) GAE and linear (featureless) GAE. \n\nIn Section 4, the authors try to investigate the real impact of ReLU on the hidden layer. \n\nThen, as the original goal is NP-hard, a surrogate that approximates the rank better than the classical nuclear norm is introduced. \n\nFinally, sufficient experiments are conducted to verify the idea. \n\n ### Pros: \n\n1. The idea to use the scatter matrices to learn discriminative features seems novel. It is different from the popular contrastive models. \n2. The motivation is convincing and interesting to me. \n3. The experimental results, especially on semi-supervised node classification when labels are pretty rare, seem to show effectiveness. \n\n### Cons: \n\n1. An important question that confuses me is why not to testify the idea on the setting of general contrastive learning. If I don't misunderstand the model, the graph (*i.e.*, adjacency) seems to be only used in the implementation of $f_\\Theta$, which indicates that $f_\\Theta$ could be any neural networks (or other projection techniques). So why do you constrain the model on the GNNs? If some similar ideas have been proposed in the general contrastive learning (which I'm not familiar with the newest publications), it will severely affect the novelty. \n\n2. A major concern is the surrogate may be not novel. The idea to use $\\log(\\cdot)$ to replace the $\\ell_p$-norm (which is equivalent to the Schatten-$p$ norm for the rank) has been well studied. Is there a difference between the following literature and this paper? It limits the novelty of the paper. \n\n [1] Calibrated Multi-Task Learning, SIGKDD, 2018. \n\n3. Could the authors also provide some experiments under the common settings of Cora/Citeseer/PubMed, instead of the random split? It is also an important comparison with the existing GNN models.\n\n4. No source code is provided so that it may limit the reproducibility. \n\n5. There are some typos including but not limited to: \n - The meaning of letters in boldface is confusing. For example, in Figure 1, the matrix is denoted by $S_b$ while in Section 3.1, all matrices are highlighted by boldface (*e.g.*, $\\textbf{S}_w$). In Line 135,$C$ is also bold.\n - In Line-145, Theorem 2 -> Theorem 1?\n\nOverall, I would like to update my score after reading other reviews and the response. (More details can be found in the previous part)\n\n1. Why do you constrain the model on the GNNs? In other words, why not conduct experiments on the general datasets?> \n\n2. Is there a difference between the following literature [1] and this paper? It limits the novelty of the paper. \n\n [1] Calibrated Multi-Task Learning, SIGKDD, 2018. \n\n3. Could the authors also provide some experiments under the common settings of Cora/Citeseer/PubMed, instead of the random split? N/A",
" The paper proposes a novel objective for graph embedding, called Generalized Laplacian EigeNmaps (GLEN), to learn graph representation by maximizing the difference of logdet between the total scatter matrix and the within-class scatter matrix. The authors interpret this as a surrogate of rank difference maximization and give some theoretical results. Experiments show that GLEN offers good accuracy and scalability against state-of-the-art baselines on various benchmarks.\n\n** Post Rebuttal Update **\n\nI've read the rebuttal and the other reviewers' comments. I appreciate the update the authors have made, for example, the (supposedly) new experimental updates regarding Reviewer bffF's comment. I appreciate the experimental results against the prior art. My general concern is whether the theorem indeed shows a difference from the prior art. It seems to me that the rank difference formulation or the minimum class separation has been identified in the literature. What's more interesting is to explain why the logdet can be a better objective, which seems quite possible given the new experimental results.\n\n\n ** Strengths **\n\nS1. The proposed algorithm is based on the scatter matrices, which are of the size of $d\\times d$, not $n\\times n$. Note that $d$ and $n$ are the embedding dimension and the node number, respectively. Thus, the method is quite scalable to large graphs.\n\nS2. Experiments show strong results in various settings and datasets.\n\n** Weaknesses **\n\nW1. The authors approximate the rank difference with logdet difference. However, it is unclear why optimizing the rank difference or the logdet difference leads to good results.\n\nW2. The notations are confusing and may contain errors. For example, in section 3.1, it seems that $Z$ is $d\\times n$ instead of $n\\times m$. Also, the $S$ matrices should be $d\\times d$ instead of $n\\times n$. If $Z$ is $n\\times m$ and $S$ is $n\\times n$, then the algorithm should not be scalable as $n$ is the number of nodes.\n\n Q1. The motivation of Condition 1 is unclear. In particular, why $\\text{Rank}(S_t)=\\text{Rank}(S_w)+\\text{Rank}(S_b)$ yields good embedding?\n\nQ2. Why formulate the main problem as rank difference? Why not directly analyze the logdet difference?\n\n The authors didn't discuss the limitation and potential social impact."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"nips_2022_HjicdpP-Nth",
"427K1wlrPCx",
"0LKFYg6ifJc",
"0LKFYg6ifJc",
"Iimyw9XR2O1",
"Iimyw9XR2O1",
"Iimyw9XR2O1",
"Iimyw9XR2O1",
"nips_2022_HjicdpP-Nth",
"nips_2022_HjicdpP-Nth",
"nips_2022_HjicdpP-Nth"
] |
nips_2022_g_bqn4ewVG | PatchComplete: Learning Multi-Resolution Patch Priors for 3D Shape Completion on Unseen Categories | While 3D shape representations enable powerful reasoning in many visual and perception applications, learning 3D shape priors tends to be constrained to the specific categories trained on, leading to an inefficient learning process, particularly for general applications with unseen categories. Thus, we propose PatchComplete, which learns effective shape priors based on multi-resolution local patches, which are often more general than full shapes (e.g., chairs and tables often both share legs) and thus enable geometric reasoning about unseen class categories. To learn these shared substructures, we learn multi-resolution patch priors across all train categories, which are then associated to input partial shape observations by attention across the patch priors, and finally decoded into a complete shape reconstruction. Such patch-based priors avoid overfitting to specific train categories and enable reconstruction on entirely unseen categories at test time. We demonstrate the effectiveness of our approach on synthetic ShapeNet data as well as challenging real-scanned objects from ScanNet, which include noise and clutter, improving over state of the art in novel-category shape completion by 19.3% in chamfer distance on ShapeNet, and 9.0% for ScanNet. | Accept | This is an interesting paper on class-independent 3d shape completion. Reviewers agree that the paper has good quality and is moderately original. There were initially some questions about the level of generalization to new classes, but after a strong rebuttal all reviewers find the results compelling and all of them suggest acceptance. I agree with their assessment. | train | [
"iP0I2My5C_r",
"982PpeRSyyO",
"ZZMg34Zp-mQ",
"rXs_MuyRofA",
"JIYEIKdMl-5",
"GjpeeSdtSWg",
"siWbwWdOdHg",
"sEC7eyVnfSj"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your valuable review; we are glad that our method was found to be 'novel' and to enable 'good generalization to unseen categories', with 'experimental evaluation [that] is done well'.\n\n**Applications.** Our method focuses on the problem of shape completion on objects from unseen categories.\nWe believe our approach of disentangling the shape reconstruction task to learning local substructures has the potential to be applied to various 3D reconstruction tasks for unseen objects or environments in the future, for instance, single-view shape reconstruction, or 3D scene completion and reconstruction.\n \n**Code release.** We will publicly release the code and data.\n\n**Multi-resolution ablation.** We evaluate alternative multi-resolution combinations in Table 1, which shows that all resolutions benefit the more detailed chamfer evaluation (whereas IoU only penalizes non-intersections, rather than how far the predictions are from the GT object).\n\n Table 1: Ablation study of patch resolutions on synthetic ShapeNet data (CD × ${10}^{2}$).\n| | Inst-CD$\\downarrow$ | Cat-CD$\\downarrow$ | Inst-IoU$\\uparrow$ | Cat-IoU$\\uparrow$|\n|---|:---|:---|:---|:---|\n|Ours (${4}^{3}$ with ${32}^{3}$) | 4.30 | 4.35 | 0.642 | 0.651|\n|Ours (${4}^{3}$ with ${8}^{3}$) | 4.35 | 4.42 | **0.644** | **0.654**|\n|Ours (all resolutions) | **4.23** | **4.27** | **0.644** | **0.654**|\n\nWe additionally evaluate the standard deviations of the multi-resolution ablations in Table 2 (note that the single-resolution evaluations were only run once for the main paper, and so the averages have changed the values slightly). The multi-resolution improvements are significant and consistent in both settings. \n\n Table 2: Ablation study on patch resolution (inc. standard deviation, CD × ${10}^{2}$).\n\n||ShapeNet||||ScanNet||||\n|---|:---|:---|:---|:---|:---|:---|:---|:---|\n| | Inst-CD$\\downarrow$ | Cat-CD$\\downarrow$ | Inst-IoU$\\uparrow$ | Cat-IoU$\\uparrow$|Inst-CD$\\downarrow$ | Cat-CD$\\downarrow$ | Inst-IoU$\\uparrow$ | Cat-IoU$\\uparrow$|\n|Ours (${32}^{3}$ priors only) | 11.97 $\\pm3{e}^{-2}$ |11.62 $\\pm1{e}^{-2}$ | 0.35 $\\pm1{e}^{-3}$|0.37 $\\pm1{e}^{-3}$|10.40 $\\pm3{e}^{-2}$|11.33 $\\pm1{e}^{-1}$|0.41 $\\pm2{e}^{-3}$|0.39 $\\pm5{e}^{-3}$|\n|Ours (${8}^{3}$ priors only) | 4.89 $\\pm3{e}^{-2}$ | 4.92 $\\pm3{e}^{-2}$ | 0.61 $\\pm4{e}^{-4}$ | 0.62 $\\pm1{e}^{-3}$|7.67 $\\pm3{e}^{-2}$|7.84 $\\pm3{e}^{-2}$|0.49 $\\pm6{e}^{-3}$|0.49 $\\pm2{e}^{-3}$|\n|Ours (${4}^{3}$ priors only) | 4.45 $\\pm1{e}^{-2}$ | 4.50 $\\pm1{e}^{-2}$ | **0.64** $\\pm4{e}^{-3}$ | 0.64 $\\pm2{e}^{-2}$ | **7.37** $\\pm3{e}^{-2}$|7.63 $\\pm5{e}^{-2}$ | 0.48 $\\pm4{e}^{-3}$ | 0.48 $\\pm7{e}^{-3}$|\n|Ours|**4.23** $\\pm4{e}^{-2}$|**4.27**$\\pm5{e}^{-2}$|**0.64**$\\pm1{e}^{-3}$|**0.65** $\\pm1{e}^{-3}$|**7.37** $\\pm7{e}^{-2}$|**7.49** $\\pm4{e}^{-2}$|**0.50**$\\pm9{e}^{-3}$|**0.50**$\\pm5{e}^{-3}$|\n\n**Eq. 3 Ablation on concatenation.** We evaluate the effectiveness of concatenation in *Eq.3* in Table 3, considering the attention-based term only (the core of our approach). We note that when excluding the attention-based term, this does not consider local patches anymore and becomes similar to the encoder-decoder training of 3D-EPN. 
\nAs the attention-based learning of correspondence to local priors is the core of our approach, this produces the most relative benefit, with a slight improvement when combining the terms together.\n\n   Table 3: Concatenation ablation study for each term in Eq. 3 on the ShapeNet dataset (CD × ${10}^{2}$).\n| | Inst-CD$\downarrow$ | Cat-CD$\downarrow$ | Inst-IoU$\uparrow$ | Cat-IoU$\uparrow$|\n|---|:---|:---|:---|:---|\n|3D-EPN | 5.48 | 5.58 | 0.582 | 0.594|\n|Ours (attention term only) | 4.25 | 4.29 | 0.640 | 0.650|\n|Ours | **4.23** | **4.27** | **0.644** | **0.654**|\n\n**'No attention' in Table 4.** In Table 4, the 'no attention' experiment replaces the attention score calculation in Eq. 1 with MLPs (on concatenated input/prior features) that predict weights for each input-prior pair.\n\n**Ablation for fixed priors, no pre-training, and no attention on ${4}^{3}$ priors only.** We evaluate this scenario in Table 4, which produces significantly worse results due to the lack of learnable priors in combination with attention.\n\n   Table 4: Evaluation for fixed priors, no pre-training, and no attention on ${4}^{3}$ priors only (CD × ${10}^{2}$).\n| | Inst-CD$\downarrow$ | Cat-CD$\downarrow$ | Inst-IoU$\uparrow$ | Cat-IoU$\uparrow$|\n|---|:---|:---|:---|:---|\n|Ours (fixed priors, no pre-training, and no attention on ${4}^{3}$ priors only) | 9.53 | 9.73 | 0.35 | 0.37 |\n|Ours | **7.37** | **7.49** | **0.50** | **0.50** |\n\n**Misc.** Thanks for the suggestions on visualization, references, and formatting; we have modified our paper accordingly for the final version. We use 112 shape priors in our method, which are clustered from the 3202 train shapes, and we will include more clarification in the final paper.",
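For reference, a rough sketch of how the two metrics reported throughout these tables (CD × $10^2$ and IoU) could be computed. The exact evaluation protocol (point sampling density, squared vs. unsquared distances) is not specified in the responses, so this is only an illustrative assumption:

```python
import numpy as np

def chamfer_distance(P, Q):
    # Symmetric Chamfer distance between point sets P: (N, 3), Q: (M, 3);
    # lower is better, and it penalizes how far predictions lie from GT.
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def voxel_iou(a, b):
    # IoU of boolean occupancy grids; as noted above, it only penalizes
    # non-intersection, not the distance of wrong predictions from GT.
    a, b = a.astype(bool), b.astype(bool)
    return (a & b).sum() / max((a | b).sum(), 1)
```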
" Thank you for your helpful feedback, and we are glad that our patch-level priors and multi-resolution fusion was found to be 'novel' and 'technically sound'.\n\n**Cross Validation.** Our novel category split was designed based on the number of objects in each category, to mimic real-world scenarios where object categories with larger numbers of observations are used for training.\nTo show that our approach is independent of the category splitting strategy, we add two new settings for evaluation on ShapeNet. \nWe use the same overall categories (26) as originally, then randomly shuffle them for train/novel categories. \nTo consider chair/bookshelf performance, categories are shuffled until these classes appear in the test split.\nWe then consider two more splits where chair and bookshelf appear in the novel category split, respectively.\nTable 1 shows that our approach is robust across these splits.\n\nFor Split 1, the 8 novel testing categories are *trash bin, bed, piano bench, **chair**, monitor, lamp, laptop, washing machine*.\nFor Split 2, the 8 novel testing categories are *basket, **bookshelf**, bowl, cabinet, laptop, pot, sofa, stove*. \n\n Table 1: Category split ablation on ShapeNet (CD × ${10}^{2}$).\n| | Inst-CD$\\downarrow$ | Cat-CD$\\downarrow$ | Inst-IoU$\\uparrow$ | Cat-IoU$\\uparrow$|\n|:---|:---|:---|:---|:---|\n|Split 1 | 4.30 | 4.29 | 0.65 | 0.66|\n|Split 2 | 4.19 | 4.25 | 0.68 | 0.67|\n|Ours | 4.23 | 4.27 | 0.64 | 0.65|\n\n**Performance on Complex Data** From the cross-validation experiments, we see that our method can effectively handle the more complex geometry of chairs and bookshelves in Table 2. We will include additional qualitative results in the final version.\n\n Table 2: Quantitative results for chair from Split 1 and bookshelf from Split 2 (CD × ${10}^{2}$).\n| | CD$\\downarrow$ | IoU$\\uparrow$|\n|---|:---|:---|\n|Chair | 4.65 | 0.64|\n|Bookshelf | 4.15 | 0.61|\n\n**Impact of the Number of Priors.** We evaluate the effect of different numbers of priors on ShapeNet data in Table 3 (with 50\\% priors and 150\\% priors).\nWe see that performance degrades with 50\\% priors, while further increasing the prior number reaches a performance plateau (and requires additional storage). In our approach, our prior storage takes 14.68 MB in memory.\n\n Table 3: Ablation on the number of shape priors (CD × ${10}^{2}$).\n| | Inst-CD$\\downarrow$ | Cat-CD$\\downarrow$ | Inst-IoU$\\uparrow$ | Cat-IoU$\\uparrow$|\n|---|:---|:---|:---|:---|\n|Ours (50\\% priors) | 4.41 | 4.45 | 0.632 | 0.640|\n|Ours | 4.23 | **4.27** | **0.644** | **0.654**|\n|Ours (150\\% priors) | **4.22** | 4.30 | 0.638 | 0.647|\n\n**Domain Independence.** Training on ShapeNet and testing directly on ScanNet is a challenging task, as ShapeNet objects are in isolation, whereas ScanNet objects can contain background clutter around them. 
\nWhen doing so without any fine-tuning, our method can still provide reasonable results, and achieves performance on par with state-of-the-art methods that have been fine-tuned on ScanNet data, as shown in Table 4.\n\n Table 4: Ablation study on domain independence (CD × ${10}^{2}$).\n| | Inst-CD$\\downarrow$ | Cat-CD$\\downarrow$ | Inst-IoU$\\uparrow$ | Cat-IoU$\\uparrow$|\n|---|:---|:---|:---|:---|\n|Best-performing SOTA baseline | 8.12 | 8.26 | 0.44 |0.44|\n|Ours (w/o finetuning) | 8.17 | 8.44 | 0.44 | 0.46|\n|Ours (w/ finetuning) | 7.37 | 7.49 | 0.50 | 0.50|\n\n**TSDF Representation.** We use a TSDF representation, which is common in 3D scanning and capture methods (e.g., Volumetric Fusion, KinectFusion, etc.), and TSDFs provide information in empty regions about the distance to object surfaces, along with the sign indicating known/unknown regions with respect to the camera.\nAdditionally, the volumetric representation provides a natural spatial correlation between priors and inputs.\nPoint cloud inputs could potentially be processed by converting to volumetric grids for testing, or extracting point-based local input feature regions for the input-prior association during training.\n",
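A minimal sketch of the TSDF input representation discussed above, together with one possible implementation of the prior clustering mentioned in the next response (the 2.5-voxel truncation is quoted there; the use of sklearn's `MeanShift` on flattened grids is our assumption, not the authors' implementation):

```python
import numpy as np
from sklearn.cluster import MeanShift

def to_tsdf(sdf, trunc=2.5):
    # Truncate a signed-distance grid (in voxel units) to +/- trunc;
    # the sign encodes known-empty vs. unknown space w.r.t. the camera.
    return np.clip(sdf, -trunc, trunc)

def cluster_shape_priors(train_tsdfs):
    # train_tsdfs: (num_shapes, D, D, D) truncated SDF grids; the cluster
    # centers would serve as the initial shape priors (112 in the response).
    ms = MeanShift().fit(train_tsdfs.reshape(len(train_tsdfs), -1))
    return ms.cluster_centers_
```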
" Thank you for your constructive review, and we are glad that our multi-resolution approach was found to be 'sensible' with 'experimental results [that] validate its effect'. \n\n**Detailed Geometry.** In order to further measure the potential for detail, we have conducted two cross-validation experiments. Rather than considering the largest-represented categories for training, as in our paper setup, we arbitrarily shuffle categories to obtain a setup where chairs and bookshelves lie in the novel category test set (following z3XK's suggestion). Here, we see that our approach can maintain effective geometric representation for these complex categories, with 0.64 IoU and 4.65 x ${10}^{-2}$ CD for chairs, and 0.61 IoU and 4.15 x ${10}^{-2}$ CD for bookshelves.\n\n**Softmax Denominator.** Empirically, we found $d/2$ rather than $\\sqrt{d}$ to provide very slightly better performance.\n\n**Input Partial Shapes.** We use an SDF representation following that of popular volumetric 3D reconstruction and scanning methods (e.g., Volumetric Fusion [1], KinectFusion [2], BundleFusion [3]), and leveraged by alternative shape and scene completion methods (3D ShapeNets [4], 3D-EPN [5], SSCNet [6]). Here, input depth frames are fused into an SDF grid where the sign denotes in-front-of a surface (known empty) vs. behind a surface (unknown), rather than inside-outside.\n\n**Fig 5. GT.** We generate ground truth SDFs for ShapeNet by applying virtual rendering and fusion on synthetic shape meshes, following Occupancy Networks [7]. The volumetric resolution can lead to small discretization artifacts and may not be able to capture very fine-scale details.\n\n**Influence of Hyperparameters.** We considered patch resolutions of ${4}^{3}$, ${8}^{3}$, ${16}^{3}$, and ${32}^{3}$. We found ${16}^{3}$ and ${8}^{3}$ to perform very similarly (variance of $8{e}^{-6}$ IoU and $6{e}^{-5}$ CD), and used ${8}^{3}$ to potentially resolve more detailed patches.\n\nFor the multi-resolution pyramid, we consider different combinations with ${4}^{3}$ (which provided the best single-resolution results) in Table 1. Here, our multi-resolution approach performs the best with a combination of global and local reasoning. \n\nThe performance variation between loss coefficients tested produced a variance of $2{e}^{-4}$ in IoU and $2{e}^{-5}$ in CD; we used the coefficients that produced the best validation results.\n\n Table 1: Ablation study of patch resolutions on synthetic ShapeNet data (CD × ${10}^{2}$).\n| | Inst-CD$\\downarrow$ | Cat-CD$\\downarrow$ | Inst-IoU$\\uparrow$ | Cat-IoU$\\uparrow$|\n|---|:---|:---|:---|:---|\n|Ours (${4}^{3}$ with ${32}^{3}$) | 4.30 | 4.35 | 0.642 | 0.651|\n|Ours (${4}^{3}$ with ${8}^{3}$) | 4.35 | 4.42 | **0.644** | **0.654**|\n|Ours (all resolutions) | **4.23** | **4.27** | **0.644** | **0.654**|\n\n**Prior Clustering.** We cluster the TSDF representations of the train shapes (a truncation of 2.5 voxels) by mean shift clustering to generate the shape priors.\n\n**Notation.** Thank you for your suggestions, we have made a pass through the paper to clarify the notation.\n\n**Reference**\n\n[1] Volumetric method for building complex models from range images. [Curless and Levoy 96]\n\n[2] Kinectfusion: Real-time dense surface mapping and tracking. [Newcombe et al. 11]\n\n[3] Bundlefusion: Real-time globally consistent 3d reconstruction using on-the-fly surface reintegration. [Dai et al. 17]\n\n[4] 3d shapenets: A deep representation for volumetric shapes. [Wu et al. 
15]\n\n[5] Shape completion using 3d-encoder-predictor cnns and shape synthesis. [Dai et al. 17]\n\n[6] Semantic scene completion from a single depth image. [Song et al. 17]\n\n[7] Convolutional occupancy networks. [Peng et al. 20]",
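The 'Softmax Denominator' answer above states that attention scores are scaled by $d/2$ instead of the usual $\sqrt{d}$. A minimal sketch of such an attention step between input-patch queries and learned prior keys/values (the tensor shapes are our illustrative assumptions):

```python
import torch
import torch.nn.functional as F

def prior_attention(q, k, v):
    # q: (B, N, d) queries from input patches; k, v: (B, M, d) keys/values
    # derived from the learned multi-resolution patch priors.
    d = q.shape[-1]
    scores = q @ k.transpose(-2, -1) / (d / 2)   # d/2 scaling per Eq. 1
    return F.softmax(scores, dim=-1) @ v
```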
" Thank you for your valuable review; we are glad that our method and presentation were found to be 'effective' and 'well put together'.\n\n**Time Efficiency.** We evaluate runtime efficiency in Table 1. Times are measured for each method for a single shape prediction (running with batch size of 1), averaged over 20 samples. Here, *Ours (${M}^{3}$ priors only)* denotes our approach with only single-resolution $M^3$ priors.\n\n Table 1: Quantitative comparison for testing time efficiency (s).\n|3D-EPN|Few-Shot|IF-Nets|AutoSDF|${4}^{3}$ priors only|${8}^{3}$ priors only|${32}^{3}$ priors only|Ours|\n|:---|:---|:---|:---|:---|:---|:---|:---|\n|0.015 | 0.004 | 0.421 | 0.958 | 0.025 | 0.017 | 0.016 | 0.063|| \n\n\n**Additional Real-world Scenarios.** We are happy to show additional real-world results, in addition to the challenging real-world scenario or ScanNet scanned objects.\n",
" This paper proposed a 3D Shape reconstruction method using local patch priors. The proposed method uses the multi-resolution patch priors based on the observation that within a 3D structure, some details are repetitive and often easier to be constructed first. The paper is well written and balanced + the presentation of the paper is well put together\n\n+ the illustration of the paper is detailed and easy to understand\n\n+ the idea of using local prior and multi-resolution is not necessarily brand new, but it is effective for the method the paper is proposing.\n\n+ based on the paper's experiment results, it indeed improved some real-world 3D reconstruction\n\n- it would be great if the author could showcase some reconstruction from the real world without a ground-truth scan. Just a static RGB photo and the 3D reconstructed object would help better demonstrate the strength of the proposed method. * the method is indeed effective over the test case, how about the efficiency? what is the average runtime compared to other signal-pass and one-resolution methods? Natural limitations as the author also pointed out, the bounding boxes are required for the scan, which means this method is constrained to a predefined space and is not directly translatable to use in the wild. ",
" This paper proposes a deep network architecture for performing 3D shape completion on unseen shape categories by learning local geometric priors at multiple scales. By learning sets of features for distinct categories on patches sampled at different resolutions and fusing these features with those computed from an input partial scan from an unseen category, the network is able to leverage the local information to achieve a better reconstruction. The method is validated qualitatively and quantitatively against state-of-the-art baselines, and a brief ablation study is provided to justify the multi-resolution aspect. This paper makes a nice step towards training shape reconstruction models that can handle out-of-distribution examples from unseen categories. The multi-scale approach is sensible, and the experimental results validate its effect.\n\nThe results that are shown are all quite low frequency, with the shapes having fairy limited local variation. I wonder why this method is unable to handle more detailed geometries. Is this due to the fact that such textures are not seen in the training categories, or is there a different bottleneck?\n\nThe method description is pretty heavy on notation and somewhat difficult to follow. It would be helpful to clearly define the different objects that are considered and clearly distinguish between geometry representations and latent high-dimensional encodings. To this end it's also important to fix subtle typos in the notation, e.g. $S$ should be $\\mathcal S$ on L107. Why is the softmax denominator in (1) $d/2$? This seems to depart from the usual $\\sqrt d$.\n\nHow are the partial shapes inputted to the pipelines as SDFs? Aren't the partial shapes generically not closed surfaces?\n\nWhy do the ground truth synthetic surfaces in Figure 5 have artifacts and imperfections?\n\nTo what extent is the method robust to some of the hyperparameters used? For example, in particular how important is the choice of resolutions and the number of layers in the pyramid in the multi-scale approach? Similarly, how are the coefficients for the loss terms chosen?\n\nHow is the clustering to initialize the shape priors performed? Is it done directly on the SDF representations? Limitations are sufficiently discussed.",
" The paper tackles a challenging problem - the performance degradation on unseen categories in shape completion tasks. Inspired by the observation that different categories may share similar local geometric structures, the authors propose a method to learn multi-resolution patch priors and then reconstruct the complete shape using the recursively fused features. The results show a significant performance improvement in novel-category shape completion compared to baseline methods, and ablations have verified the importance of each component. Learning patch-level priors and the fusion pipeline of different resolution patch priors is moderately novel to me. The pipeline in this submission is technically sound. The submission is clearly written and well organized. The authors also provide the source code in supplementary material for better clarity of the techincal details. BTW, I’m a bit impressed that the completed shapes are mostly watertight and have continuous geometry. \n\nIn general, the evaluation part of the manuscript is good to me. The numerical results show a significant performance improvement over the baseline methods on both synthetic and real scan data. However, as the authors mentioned in limitations, visual and qualitative results on categories with fine geometric structures are not shown, which could have been a strong support to the merit of the proposed method. For the ablation study, the authors answer the importance of each designed component and show convincing numerical results of the effectiveness.\n\n 1. I wonder about the criteria for splitting the datasets. In my opinion, the partition of the data has an enormous impact on performance (e.g., it is hard to complete a chair if we only have sofas as priors). If you randomly choose the training categories, I think it would be better to use cross-validation to show the impact of how the data is divided or the generalization performance. \n\n2. Most of the categories in the paper and supplementary only have simple geometry, such as bed or bathtub. Although you mention the lack of fine-scale geometric details in the results, I am still curious about performance on more complex data (such as the chair or bookshelf you mentioned in the video).\n\n3. What is the impact of the number of priors? In the ablation part, the authors show the results on different patch resolutions, and it seems only 4^3 priors can achieve competitive performance. Can I assume that local path priors play a major role, and it may have less diversity? So it may have redundancy, and the boundary of the number of priors is meaningful in real-world applications. Moreover, it would be better to show the memory usage for storing the priors.\n\n4. How about the domain independence of the proposed method? Have you tried the cross-dataset experiments? More specifically, can you train priors on ShapeNet, and test using ScanNet? Logically speaking, there should be a reasonable result.\n5. By the way, why choose TSDF as the input shape representation? Is it because it is easy to perform 3D convolution operations? Is it possible to change TSDF to point cloud? Tthe paper bills itself that it can complete shapes for entirely unseen categories, but the authors do not illustrate the criteria of splitting the categories in the dataset, making the generalization of the proposed method on unseen categories obscure. It would be better to show more results on this to support the claim. 
\n\nAnother concern is the performance on categories with more complex geometries. Logically, learning at the patch level could help improve the local geometric details in the synthesized results. Although the authors mention it in the limitations, the results of the method in its current form do not fully show the power of leveraging patch-level priors.",
" The submission introduces PatchComplete for 3D object-level shape completion from a partial input SDF voxel grid, with a focus on generalizing to unseen object categories (not just unseen instances). It evaluates on the synthetic ShapeNet and the real-world ScanNet, which contains 3D reconstructions of rooms (ScanNet annotations are used to segment out objects; pseudo-GT full object shapes are obtained via Scan2CAD). The paper contributes a scheme to learn patch-level shape priors. It also contributes a method that associates an input incomplete shape with that patch-level prior and then reconstructs the complete shape as a (learned) merging of that associated/weighted prior information. Comparisons are done to a diverse set of prior work and PatchComplete outperforms them, especially on unseen categories. A number of ablations are also included. _Positives_\n\nThe method is novel and enables good generalization to unseen categories, which is particularly useful for static 3D reconstruction of unseen environments with systems like KinectFusion.\n\nThe paper is written well in terms of language and clarity. \n\nThe experimental evaluation is done well. The ablations consider a multitude of factors of the method.\n\n_Negatives_\n\nI don't see much on the technical level that could potentially be used in other areas of CV/ML. The method consists of a core (attention into a learned patch prior) and a number of tweaks, which are evaluated in the ablations. The core part is a simple version of (non-sparse) dictionary learning. Questions for the rebuttal:\n\n1) It is not clear to me whether the accompanying code will be released.\n\n2) I have several questions about Table 3:\na) The ablation of which resolutions to use in the multi-resolution scheme (Table 3) shows rather marginal improvements when going from only the finest resolution (4^3) to all three resolutions (Ours; 4^3, 8^3, 32^3). Are really both 8^3 AND 32^3 necessary? How does 4^3+8^3 and 4^3+32^3 perform? \nb) What are the standard deviations for these numbers? Are the marginal improvements significant? \nc) Especially on ScanNet, which is the more interesting dataset compared to ShapeNet for this task, shows results that are on par for 4^3 only and for Ours. I am not really convinced that this supports the claim that using multi-resolution leads to the \"most effective shape completion results\" (line 216).\n\n3) An ablation of the concatenation in Eq. 3 could be added, to get a sense of how much the patch prior and how much the input encoding can achieve on their own, and what their respective issues are. I.e., use only Q^R_i in one ablation and use only the Attention() in another ablation. This is not crucial though.\n\n4) What is the \"no attention\" ablation in Table 4? What is used instead of attention? Only Q^R_i directly, as I suggested in 3)? What else is changed?\n\n5) An overall ablation that uses fixed priors, uses no pre-training, uses no attention, and only uses resolution 4^3 would be interesting. How much do these tweaks contribute to the overall performance?\n\nSuggestions for improvement that are not relevant for the rebuttal:\n\n- I suggest to add qualitative results for the ablations, at least in the supplement. \n\n- How many shapes/priors does {T^c} contain? I.e. how many representative samples (in total) are there out of how many total shapes {G^c}?\n\n- Some related works on deep implicit local geometry representations could be added:\n\n* Genova et al. 
Local Deep Implicit Functions for 3D Shape, CVPR 2020\n* Chabra et al. Deep Local Shapes: Learning Local SDF Priors for Detailed 3D Reconstruction, ECCV 2020\n* Takikawa et al. Neural Geometric Level of Detail: Real-Time Rendering With Implicit 3D Shapes, CVPR 2021\n* Jiang et al. Local Implicit Grid Representations for 3D Scenes, CVPR 2020\n* Deng et al. NASA: Neural Articulated Shape Approximation, ECCV 2020\n\n- There's a superfluous period in line 95. - Limitations are mentioned. \n\n- Societal impact is discussed."
] | [
-1,
-1,
-1,
-1,
6,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
3,
4,
4,
3
] | [
"sEC7eyVnfSj",
"siWbwWdOdHg",
"GjpeeSdtSWg",
"JIYEIKdMl-5",
"nips_2022_g_bqn4ewVG",
"nips_2022_g_bqn4ewVG",
"nips_2022_g_bqn4ewVG",
"nips_2022_g_bqn4ewVG"
] |
nips_2022_pkfpkWU536D | Neural Shape Deformation Priors | We present Neural Shape Deformation Priors, a novel method for shape manipulation that predicts mesh deformations of non-rigid objects from user-provided handle movements. State-of-the-art methods cast this problem as an optimization task, where the input source mesh is iteratively deformed to minimize an objective function according to hand-crafted regularizers such as ARAP. In this work, we learn the deformation behavior based on the underlying geometric properties of a shape, while leveraging a large-scale dataset containing a diverse set of non-rigid deformations. Specifically, given a source mesh and desired target locations of handles that describe the partial surface deformation, we predict a continuous deformation field that is defined in 3D space to describe the space deformation. To this end, we introduce transformer-based deformation networks that represent a shape deformation as a composition of local surface deformations. They learn a set of local latent codes anchored in 3D space, from which we can learn a set of continuous deformation functions for local surfaces.
Our method can be applied to challenging deformations and generalizes well to unseen deformations. We validate our approach in experiments using the DeformingThing4D dataset, and compare to both classic optimization-based and recent neural network-based methods. | Accept | While some of the scores on this paper are mixed, even the negative reviews highlight the quality and interest of the work and have specific (and somewhat debatable) technical concerns. Overall, the AE recommends accept, especially in light of the detailed and thoughtful responses during the rebuttal phase.
In the camera-ready, the authors are encouraged to see if they can squeeze some of the new results (e.g., the transfer-learning attempt in Figure 6 and the comparisons to ShapeFlow) into the main body of the paper, where they're more likely to be noticed. | val | [
"VYdPXnwycA_",
"kKOpZtOo27l",
"U9nGvfEWajV",
"5ZPxzh8nvxg",
"Q1BXyYI5BII",
"uXeMer-cIQ8",
"q1oGYs1Qfcu",
"JmGjq-PKZ2s",
"ywcU4zezp5",
"TI180jT6d6K",
"JbqUdqQFOVN",
"rne54RJ8NFk"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the positive feedback! \n\nOur model learns deformation priors from a dataset containing realistic non-rigid motions. When it is directly evaluated on non-realistic or non-physical-aware handles, it will try to find the most similar realistic deformation that can best explain the given handles.\nHowever, we can easily transfer the ideas to non-realistic or non-physical priors by using an appropriate dataset.\nWe are happy to discuss this in more detail and include an example experiment in our camera-ready version. \n\nRegarding interpretability, we agree that this is not non-trivial; however, we can analyze and precisely evaluate the output of our model; e.g., as shown in our experiments, our method predicts more realistic deformations than state-of-the-art baselines such as ARAP, NFGP, and ShapeFlow. We will further add a clear explanation of the model in terms of interpretability.",
" Thank you very much for the response. In my opinion, I think the authors have addressed my doubts and comments as well as the those of my colleagues. Maybe, the answer for the section \"Non-realistic or non-physical-aware user-specified handles\" is not clear enough, but anyway, the rest of the things are correct. Regarding that comment, I cannot understand how the deformation to be obtained is as realistic as possible, due to the fact that we do not have a clear explanation of the model in terms of interpretability. On balance, my rating remains. ",
" The authors solve my problem and also answer other reviewers' questions. So I keep the attitude of acceptance.",
" We thank the reviewers for the constructive comments. It is very encouraging to see that the reviewers found our paper \"well written\" (R1, R2, R4), \"interesting\" (R1), well motivated (R1, R4), clear (R2, R4), our problem relevant (R2), our method \"technically sound\" as well as \"carefully validated\" (R3). In the following, we address the reviewer comments. These responses, together with those for minor issues, have already been included in the revised paper (see updated version).",
" Q1: Require dense correspondences of CAD-based models.\n\nWhile our current method uses a dataset where dense correspondences between temporal mesh frames are available, our framework can also be trained on datasets without dense correspondences by some adjustments on inputs and loss functions. Concretely, we can change our method to receive sparse handle correspondences as inputs, and utilize Chamfer distance as the loss function that does not require ground-truth meshes with dense correspondences. In Figure 9 of the revised supplementary material, we visualize test results of such a modified framework. As seen, without dense correspondences for training, our method can still obtain accurate deformations.\n\nQ2: Train/test split.\n\nThe train/test split is based on the provided identity and motion names of deforming sequences. We first divide the animations of the dataset into two parts, seen identities and unseen identities. For the animations of seen identities, we further divide it into seen motions of seen identities (used as training set), and unseen motions of seen identities (used as the test set of S1). For animations of unseen identities, we remove those animations whose motions have already appeared in the training set. This way, we guarantee that the motions of unseen identities are not seen during training. Please also refer to Section D in the revised supplementary material for details.\n\nQ3: Provide ground-truth meshes in Fig. 4.\n\nThere is no ground-truth mesh in the experiment of the user-provided handles. Thus, one cannot calculate the vertex errors of the deformed meshes. However, our method can still obtain more realistic deformations, such as the leg movement of the deer in the last column.\n\nQ4: Non-realistic or non-physical-aware user-specified handles.\n\nOur method will find the closest deformation of animals that can best explain the provided user handles. Further, our goal of data-driven deformation priors is to obtain deformations that are as realistic as possible. However, our method could be easily trained on non-realistic or non-physical-aware samples and learn the respective deformation behavior. We will clarify this in the final revision.\n\nQ5: Robustness to noisy and partial observations of source mesh.\n\nWe directly evaluate our model on noisy and incomplete meshes without fine-tuning. The quantitative results are provided in Tables 3 and 4 of the revised supplementary material. As seen, there are no significant numerical variations between different noise levels and incompleteness ratios. This clearly demonstrates the robustness of our approach to noisy and/or incomplete source meshes.\n\nQ6: Real test on animal scans.\n\nWe are happy to evaluate our method on real scans; however, there are not that many datasets that contain real animal scans. We have sent emails to the authors for the dataset access, and are currently waiting for a reply. In parallel, we plan to capture animal scans by ourselves and include the evaluation results in the revised paper. In addition, we evaluate our pre-trained model on the reconstructed animals from real RGB images using the BARC method. As shown in the Figure 8 of the revised supplementary material, our method estimates realistic deformations for reconstructed animals from natural images. This also demonstrates the generalization ability of our method.",
" Q1: Difference between $\\mathcal{P_S}$ and $\\mathcal{Q_S}$.\n\n$\\mathcal{P_S}$ is the sampled point cloud from the surface of source mesh $\\mathcal{S}$. In contrast, $\\mathcal{Q_S}$ is the sampled non-surface point set from the 3D space of source mesh $\\mathcal{S}$. We obtain $\\mathcal{Q_S}$ by adding gaussian noise permutations along the normal directions of $\\mathcal{P_S}$.\nPlease refer to Section D in the revised supplementary material for the detailed description about $\\mathcal{P_S}$ and $\\mathcal{Q_S}$.\n\nQ2: L195.\n\nThank you for pointing this out. The querying non-surface point sets in 3D space should be denoted as $(\\mathcal{Q}_\\mathcal{S}, \\mathcal{Q}_\\mathcal{C}, \\mathcal{Q}_\\mathcal{T})$. We fixed it in the revised paper.\n\n\nQ3: Loss functions in Equation (6), (7), (8).\n\nOur model consists of two training stages. In the first stage, the backward and forward deformation networks are individually trained using the loss functions defined in Equation (6) or (7), respectively. In the second stage of end-to-end training, the whole network is trained with the loss function defined in Equation (8). Please also refer to lines 214--221 for further implementation details.\n\n\nQ4: More details of network architectures.\n\n\nPlease refer to the details described in Section A in the supplementary material.",
" Q1: Technical Contribution.\n\nThe main technical contribution is the transformer-based deformation network that represents a shape deformation as a composition of local surface deformations. This allows us to learn non-linear, localized deformations in a data-driven way based on learned features of the underlying shape geometry. In contrast to global deformation models like ShapeFlow, our local deformation model enables significantly better generalization ability to unseen motions.\n\nQ2: The generalization issue.\n\nWe wish to clarify that our goal is to learn deformation priors of a specific class of objects (e.g. quadruped animals) and not a generalized deformation model without a class prior. We add an additional baseline of ShapeFlow that is also specific to a class. Please refer to Table 5 and Figures 6 and 7 in the revised supplementary material. From the result analysis, we see that our method can learn more accurate deformation priors compared to ShapeFlow.\n\n\nQ3: Not learning rotations.\n\nIn general, displacements are able to represent arbitrary deformations. However, we agree that SE(3) fields could be a more efficient representation of deformations and can potentially lead to higher quality (given a fixed spatial resolution). We will add an in-depth discussion about our design choice.\n\n\nQ4: Higher L2 error of ARAP on unseen identites.\n\nWe wish to clarify that the deformations of unseen identities are more complicated than those of unseen motions, which can cause the higher prediction errors of ARAP.\n\n\nQ5: The training of NFGP.\n\nNFGP is a deep optimization-based method and cannot learn general deformation priors of a dataset. It overfits the neural network to each provided input during inference.",
" Q1: Comparisons on the dataset of Deformation Transfer and TOSCA.\n\nAs suggested, we directly evaluate our pre-trained model on other animal datasets by providing additional quantitative results on the dataset used in Deformation Transfer. TOSCA does not have correspondences between different poses of the same animal, and hence does not easily provide handle displacements as input. Thus, we provide the comparison under the setting of using user-specified handles as inputs. \nPlease refer to Table 5 and Figure 6 in the revised supplementary material. We can observe that our method generalizes well to the other datasets without the need of re-training our model.\n\n\nQ2: Discussion and comparison against ShapeFlow.\n\nIn lines 109--112 of the revised paper, we detail the difference and connection between ShapeFlow and our method. We also include a ShapeFlow comparison. Please refer to the Table 5 and Figures 6 and 7 in the revised supplementary material. Our method can predict more accurate deformations both quantitatively and qualitatively.\n\nQ3: Robustness to noisy source meshes.\n\nWe directly evaluate our model on noisy meshes without finetuning. The quantitative results are provided in Table 3 of the revised supplementary material. With the noise becoming larger, the performance of our method experiences only slight variation; however, this demonstrates the robustness of our method to noisy source meshes.\n\nQ4: Robustness to incomplete source meshes.\n\nWe directly evaluate our model on incomplete meshes without finetuning. The quantitative results are provided in Table 4 of the supplementary material. As seen, there are no significant numerical variations between different incompleteness ratios. This clearly demonstrates the robustness of our approach to incomplete source meshes.\n\nQ5: Generalization to real scans.\n\nWe are happy to evaluate our method on real scans; however, there are not that many datasets that contain real animal scans. We have sent emails to the authors for the dataset access, and are currently waiting for a reply. In parallel, we plan to capture animal scans by ourselves and include the evaluation results in the revised paper. In addition, we evaluate our pre-trained model on the reconstructed animals from real RGB images using the BARC method. As shown in the Figure 8 of the revised supplementary material, our method estimates realistic deformations for reconstructed animals from natural images. This also demonstrates the generalization ability of our method.\n\nQ6: Combination of ARAP and our method.\n\nWe agree that a combination of ARAP and our method might reduce the amount of required data for training. However, at the same time, this would mitigate the advantage of our data-driven method which facilitates the learning of non-linear, localized deformation properties based on features of the underlying shape geometry. We will be happy to include an experiment to illustrate the trade-off between \"more ARAP for regularization\" vs \"pure data-driven learning\".\n\nQ7: Limitation of requiring dense correspondences as supervision.\n\nWhile our current method uses an existing dataset where dense correspondences between temporal mesh frames are available, our framework can also be trained on datasets without dense correspondences by some adjustments on inputs and loss functions. 
Concretely, we can change our method to receive sparse handle correspondences as inputs, and utilize the Chamfer distance as the loss function, which does not require ground-truth meshes with dense correspondences. In Figure 9 of the revised supplementary material, we visualize test results of such a modified framework. As can be seen, without dense correspondences for training, our method can still obtain accurate deformations.\n\n\nQ8: Notations.\n\nWe provide a summary in Table 1 of the revised supplementary material to clearly define the notations throughout the paper.\n\n\nQ9: L195.\n\nThank you for pointing out the issue. The querying of the non-surface point sets in the 3D space should be denoted as $(\\mathcal{Q}_\\mathcal{S}, \\mathcal{Q}_\\mathcal{C}, \\mathcal{Q}_\\mathcal{T})$. We have fixed it in the revised paper.\n\n\nQ10: Rotation Fields in the limitation.\n\nWe plan to further decompose a deformation field into a rotation field and a translation field in the future. Please refer to lines 275--280 in the revised main paper for a clearer description.\n\n\nQ11: Smooth canonicalization visualization.\n\nThe reason is that we learn continuous deformation fields defined in 3D space, thus enabling smooth mesh deformations.\n\nQ12: Obtain the ground-truth matching between $\\mathcal{Q_S}$, $\\mathcal{Q_C}$, and $\\mathcal{Q_T}$. \n\nThe ground-truth matching between the non-surface spatial point sets $\\mathcal{Q_S}$, $\\mathcal{Q_C}$, and $\\mathcal{Q_T}$ is based on the dense correspondences between the surface meshes. Please refer to Section D in the revised supplementary material for the data-preprocessing details.",
" This paper proposes a deep-learning framework to model shape deformations, especially in the context of modifying a surface with user-defined input locations shifted to their desired locations. The motivation is that prominent methods like ARAP are restrictive in their results, partly due to pure geometric action, and that learning these deformations from data allows for more richer semantic variability in the deformations.\n\nThe authors make use of a transformer-based architecture and implement a two-staged deformation learning paradigm. Specifically, a backward deformation network deforms the shape into a canonical position. A forward deformation network, inputs this canonicalized shape, and the desired target handle information to output a deformed mesh. The training is done in two stages, with three different loss functions in action accordingly. \n\nThe authors demonstrate their method on the DeformingThing4D-Animals dataset and show favorable results both numerically and visually in comparison to competing prior works especially ARAP and NFGP.\n Strengths \n\n- Overall, I found this to be an interesting paper, with good motivation, well-written and well-compiled experiments, especially in the supplementary \n- Clever use of the transformer system applied to modifying shapes with target handle information for shape editing \n- In addition, I found the limitation section to be honest and well explained\n\n\nWeaknesses\n\n- The notation is a bit cumbersome and hard to keep up with. Although I did catch up after multiple reads, section 3 could be written more clearly. I specifically suggest having a clear depiction for so many notations on various point clouds - C, O, Q, P, and T\n- As rightly pointed out in the paper, the method is essentially supervised requiring dense correspondences throughout. Although not a deal-breaker this must be noted as a drawback since availability of such data is scarce. \n- I found little attempt to combine the proposed learning-based approach with the optimization approaches like ARAP. Could there be a combination of these losses and does that lead to better generalization (and possibly fewer data requirements?) \n- From an evaluation perspective, this paper is very restrictive in its demonstration and the results are only shown for 1 dataset. As a result, the transfer learning capabilities (for e.g. to 4-legged animals in TOSCA) are largely unknown. \n- Although there is a short commentary on robustness, very little is demonstrated apart from resampling the meshes. \n- Please do contrast, explain, and/or compare with a relevant recent work: Jiang, Chiyu, et al. \"Shapeflow: Learnable deformation flows among 3d shapes.\" Advances in Neural Information Processing Systems 33 (2020): 9745-9757.\n- Typo: Line 195, is that (Qs, Qc, and Qt)?\n Questions \n\n- In the limitations, what do you mean by model rotation? like equivariance to rotations?\n- Is there a reason why you achieve very decent smooth results, especially for the canonical pose visualizations in the supplementary, despite no explicit imposition of some regularity?\n- How exactly do you get ground truth matching between Qs and Qt? As far as I understand Qs, and Qt are the points sampled in space and in the vicinity of the shape (and not on it), how do you then establish correspondence between points in 3D space? when I presume: only surface-to-surface matching is available? \n See Weaknesses. 
Overall, I found this paper interesting and well compiled, and I am inclined to weigh in positively as a pre-discussion rating. However, there are some outstanding issues that need clarification, and I will wait for the discussion phase to form a more informed opinion. ",
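For reference, the ARAP baseline this review repeatedly contrasts against minimizes the standard as-rigid-as-possible energy of Sorkine and Alexa (2007), written here in their notation rather than the submission's:

```latex
E(\mathcal{S}') = \sum_{i} \sum_{j \in \mathcal{N}(i)} w_{ij}
  \left\| \left( \mathbf{p}'_i - \mathbf{p}'_j \right)
        - \mathbf{R}_i \left( \mathbf{p}_i - \mathbf{p}_j \right) \right\|^2
```

where $\mathbf{p}_i$ are rest-pose vertices, $\mathbf{p}'_i$ their deformed positions, $\mathbf{R}_i$ the best-fitting local rotations, and $w_{ij}$ cotangent weights. Its purely geometric local-rigidity objective is what the learned, data-driven priors are meant to go beyond.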
" This paper deals with the problem of surface deformation by training a Transformer network on point cloud. The key idea is, they adopt a canonical model or space as often used in human modeling and design a backward and forward deformation network to deform from any source model to target model. Strengths:\nFirst, solving the mesh deformation under the movement of some user-specified handles is a typical problem in graphics and the proposed idea is straightfoward and pretty easy to understand. So the paper is well written and easy to follow. They have demonstrated better performance on the DeformingThing4D-Animals dataset.\n\nWeakness:\nThey only demonstrated the performance on the DeformingThing4D-Animals dataset. Although they have tested on unseen identities, the models in the dataset are quite similar. But the mesh deformation itself is a general problem and we shouldn't assume the models are all animals. From this perspective, the generalization ability is a big issue for the proposed method. On the other hand, the compared methods are more general and can directly apply to various kinds of models. Therefore, the comparison is just not fair. From my understanding, the network is overfitted to this dataset.\nThe rotation is not included, instead the network only predicts the displacement of vertices. This could be a big issue for surface deformation. The authors have mentioned this in the future work, but as a fundemental problem, I think it should have first priority to be tackled. I have some questions or confuse about the experiments:\n1) In Tab.1, the L2 error of the ARAP method increased quite a bit on unseen identities. Why would happen? ARAP is not a learning method, so it shouldn't get affected whether they have seen those identities before.\n2) For the learning based baseline method, NFGP, I'm wondering whether it is re-trained under this DeformingThing4D-Animals dataset.\n3) What is the major technical contribution the authors want to claim? The point transformer? The authors have mentioned their limitations in the future work which is good, but those mentioned limitions are fundamental problems that have to be dealt with. It would be a bit better to demonstrate results on general models that are not from the DeformingThing4D-Animals dataset. ",
" This paper presents a neural deformation method, which utilize the deformation priors in a large-scale dataset. This method predicts a continuous deformation field in space. The input model can be of any pose. The proposed method first deforms the input model back to the canonical space, and then deforms it to the target that satisfies the constraints given by the user. The authors propose Transformer-based Deformation Networks (TD-Nets), which learns encoder-based local deformation fields on point cloud approximations of the input mesh and outputs the deformation. Strengths\n--The method is technically sound.\n--The proposed method is carefully validated and has been compared to the representative deformation method ARAP and a neural-based method NFGP.\n\nWeaknesses\n-- The description needs to be improved.\n -- What is the difference between the randomly sampled surface point cloud P_S and the querying spatial points Q_S? More description about the querying point set should be given. \nAlso in Line 195, ‘querying spatial points (P_S, P_C, P_T)’ should be ‘querying spatial points (Q_S, Q_C, Q_T)’.\n--Are equations (6),(7),(8) all used during training, or equations (6) and (7) just for the derivation of equation (8)?\n--There are many modules in the proposed network, PAB, PTB, Point transformer encoder, Attentive deformation decoder, and also many features and local codes denoted as Z. I am wondering how the data goes through these modules. It would be better to have a network diagram showing how the modules are combined.\n The authors have discussed the limitations.",
" In this paper is introduced an approach to learn mesh deformations of dynamic bodies from user-provided handles. To manipulate the source meshes into different poses, the shape deformations are learned via canonicalization, i.e., the source mesh is first represented in the canonical space, and after that, that representation is transformed to the target space. To this end, two backward and forward deformation fields are considered, that are learned by transformer deformation networks. Both quantitative and qualitative analysis are provided, as well as a comparison with two competing methods, showing promising results. Additional ablation studies are also included in the work. In general, the paper is clear enough, it is well written, and all the technical details are given in the document. Motivation and related work are concise, and the authors clearly show their contribution. To be honest, I do not have many issues with this submission in its current form. Next, I have some comments and questions. \n\nThe deformation priors are learned by the use of a canonical space, i.e., the shape transformation between two arbitrary poses is divided into two steps: a backward deformation to align the source mesh to the canonical space, and a forward deformation to map the canonical space in the target deformation space. While this process increases the complexity of the neural model, as two different models need to be learned, in practice the process is strongly simplified. This idea is simple yet effective. Once the transformation models are learned, a simple 3D displacement, via a deformation field, is considered to deform the mesh. \n \nEven trivial, all operators in Eq. (1) should be defined. \n \nFor training, the method needs dense correspondences for the three deformation states (canonical, target, and source). While this can be an easy task for synthetic scenarios and CAD-based models, this could be a very hard task in real scenarios, where noisy point clouds could be considered with a variation of points in the representation. \n \nThe authors claim they divide the test set in 143 sequences for seen train identities, and 55 sequences for unseen ones. In my opinion, this should be clarified in the paper. How was that done? Note that some unseen identities could be the result of a simple linear combination of seen ones, i.e., the unseen motions are really seen ones. I would like to know as the authors can guarantee that division with no additional analysis. \n \nRegarding Figure 4. The authors claim their solution produces visibly the best results. I disagree with that. That conclusion is not actually easy, after checking the corresponding image. Maybe the authors could include the corresponding ground truth mesh, or color-based representation where every color displays a different error. \n \nObserving the rest of results, the proposed method outperforms, in both quantitatively and qualitatively, the competing approaches. \n \nIt is worth noting that the user-specified handles could be non-realistic or non-physical-aware. In that case, my question is: could the proposed method obtain the deformation? This could help us to interpret a bit more the learned deformation priors. My doubt is the deformation priors could be just an algebraic representation with no meaning. 
Could the authors help me with this question?\n \nMore extreme poses could be considered, as well as some realistic animal meshes (for instance, capturing the full mesh with a real vision sensor, which means the mesh includes noisy and partial observations). Please see the strengths and weaknesses section for questions and suggestions. The authors have addressed the limitations properly. "
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
4
] | [
"kKOpZtOo27l",
"Q1BXyYI5BII",
"uXeMer-cIQ8",
"nips_2022_pkfpkWU536D",
"rne54RJ8NFk",
"JbqUdqQFOVN",
"TI180jT6d6K",
"ywcU4zezp5",
"nips_2022_pkfpkWU536D",
"nips_2022_pkfpkWU536D",
"nips_2022_pkfpkWU536D",
"nips_2022_pkfpkWU536D"
] |