The authors consider the problem of finding an approximate extensive-form correlated equilibrium (EFCE) of a finite general-sum multiplayer extensive-form game under the perfect recall assumption. They propose an accelerated version, via optimism, of the algorithm by Farina et al. (2019a). Precisely, it combines the framework by Gordon et al. (2008) for $\Phi$-regret minimization with the framework of optimistic regret minimization adapted to extensive-form games. They prove that when all agents play T repetitions of the game according to the proposed algorithm, the correlated distribution of play is an $O(T^{-3/4})$-approximate EFCE, where $O$ hides quantities polynomial in the size of the game. The main technical point to obtain this result is to characterize the stability of certain fixed-point strategies through a refined perturbation analysis of a structured Markov chain. They also provide preliminary experiments on simple two- or three-player games. The paper is well written (see specific comments and the concern below). The proofs seem correct (I did not check the appendices in detail). Providing an accelerated convergence rate for multiplayer extensive-form games (and in particular proving the counterpart of what is known for normal-form games) is a valuable contribution. My main concern is that in its current form the paper is difficult to read. In particular, the proposed algorithm should be at least described in the main text (see main comments). Furthermore, some parts should be better introduced, e.g. the $\Phi$-regret minimization framework (see specific comments). Main comments: -It is hard to find the final algorithm mentioned in Theorem 1.1 by reading the main text, and even by looking at Appendix B. Thus, without the big picture, Section 3 looks a bit like a collection of unrelated technical results. -Could you state clearly the complexity of the proposed algorithm (it seems that it scales at least quadratically with the size of the game?) and the polynomial dependence on the size of the game hidden in Theorem 1.1? This is important since usually the size of the game is very large. Specific comments: -P1 Th 1.1: Is there any lower bound on the rate? For example, is there hope to improve the rate to O(1/T) as for two players? You should also clearly refer to the description of the algorithm in the paper. -P3 End of section 1: Why are normal-form coarse correlated equilibria a much less appealing solution concept than EFCE? -P4, below (1): typo in $\mathcal{J}^{(i)}$ -P5, Theorem 2.5: the $\psi^{(i)}$-regret is not defined at this point. -P6, Definition 3.1: does the point $x_t$ necessarily coincide with the point played by the regret minimizer algorithm? OK, maybe you should describe the framework of Gordon et al. ($x_t$ is the fixed point of a $\phi_t$ obtained by another regret minimizer on an auxiliary task?). -P6, Theorem 3.2: Maybe you should define clearly what you mean by a regret minimizer $R_\Phi$ for the set of transformations $\Phi$. -P6, end of Section 2: could you detail what you mean by an entropic regularizer on $\mathcal{X}$? -P9, end of Section 3: Could you point out precisely where the final algorithm is in Appendix B? -P9, Section 4: At this point, it is not clear at all what the OMWU algorithm is. Did you also experiment on Leduc Poker? Because it is usually the setting where there is a gap between multiplicative-weights-type algorithms and RM+ (with alternating updates). -P13, top of the page: $\mathcal{J}$ is, in the introduction, the set of information sets.
Using the same notation for the sequential decision-making setting is a bit confusing. Could you use a different notation? -P13, (13): is an observation node always followed by an action node? -P16, above (24): typo -P18, above (32): norm 1 See Main Review <doc-sep>This paper proves a faster no-regret learning dynamics for extensive-form correlated equilibrium (EFCE) in multiplayer general-sum imperfect-information extensive-form games. When the game is played for T repetitions according to the accelerated dynamics, the correlated distribution of play for all players is an $O(T^{-3/4})$-approximate EFCE. This improves upon the previous best rate, which is $O(T^{-1/2})$, for extensive-form games. ## Strengths The problem that the authors consider is an important one for no-regret learning dynamics in games. Compared to the existing literature on accelerated learning dynamics for correlated equilibria (CE) or coarse correlated equilibria in normal-form games, the convergence rate for extensive-form games is relatively less understood. This paper shows an improved convergence rate to extensive-form correlated equilibrium (EFCE): when the game is played for T repetitions, the correlated distribution of play for all players is an $O(T^{-3/4})$-approximate EFCE, which improves upon the previous best rate $O(T^{-1/2})$. To me this is a good and important theoretical contribution to the existing literature. A main conceptual contribution is to establish a connection between optimistic regret minimization and the $\phi$-regret minimization problem. A main technical contribution is a characterization of the stability of certain fixed-point strategies through a refined perturbation analysis of the structured Markov chain. Other than that, the proof utilizes previous results in Farina et al. (2021a), which cast the convergence to an EFCE as a $\phi$-regret minimization problem, and the existing framework of optimistic regret minimization. Numerical simulations are provided in support of the theoretical results. ## Weaknesses First, I think the summary of related work on no-regret learning dynamics for normal-form games is missing some of the most recent results. For example, in [Daskalakis, Fishelson, Golowich, Near-Optimal No-Regret Learning in General Games, 2021] it is shown that one can achieve an O(1/T) convergence rate to coarse correlated equilibrium in multi-player general-sum games. Secondly, I don't see much discussion of the technical difficulty of why the paper's framework could not be applied to normal-form games. Given the many other existing rates for normal-form games, it would be good to add more details on this point. Regarding the technical contributions, the main concern is about novelty, given that the main framework is built on previous results in Farina et al. (2021a), which cast the convergence to an EFCE as a $\phi$-regret minimization problem, and the existing framework of optimistic regret minimization as in Gordon et al. (2008). Overall I think this paper provides a nice theoretical contribution with an improved rate for no-regret learning dynamics that converge to extensive-form correlated equilibrium. The paper is well-written. However, currently the related work is missing some recent relevant results, and on the technical contribution side there should be some more details differentiating the new results from existing frameworks. Post rebuttal update: I have read the authors' response and will keep the original score.
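Both reviews above refer to the $\Phi$-regret template of Gordon et al. (2008), in which the strategy played at each round is a fixed point of a transformation chosen by an auxiliary regret minimizer (for matrix transformations, the stationary distribution of the induced Markov chain mentioned in the reviews). The sketch below illustrates that generic template in its normal-form swap-deviation instantiation with row-stochastic matrices as the transformation set; it is only an illustration of the reduction, not the paper's trigger-deviation construction, and the `MatrixHedge` auxiliary minimizer and step size are assumed, illustrative choices.

```python
import numpy as np

def stationary_distribution(P, iters=2000):
    """Fixed point x = P^T x of a row-stochastic matrix P via power iteration
    (converges for the strictly positive matrices produced below)."""
    x = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iters):
        x = P.T @ x
        x /= x.sum()
    return x

class MatrixHedge:
    """Toy external-regret minimizer over row-stochastic d x d matrices:
    one multiplicative-weights instance per row (illustrative only)."""
    def __init__(self, d, eta=0.5):
        self.logits = np.zeros((d, d))
        self.eta = eta

    def recommend(self):
        P = np.exp(self.logits - self.logits.max(axis=1, keepdims=True))
        return P / P.sum(axis=1, keepdims=True)

    def observe(self, loss_matrix):
        self.logits -= self.eta * loss_matrix

def phi_regret_play(aux_minimizer, observe_loss, T):
    """Gordon et al. (2008) Phi-regret template, sketched:
    1) the auxiliary minimizer proposes a transformation P_t in co(Phi);
    2) the player plays a fixed point x_t of P_t (here: its stationary distribution);
    3) the realized loss defines a linear loss <P, x_t l_t^T> over transformations,
       so the auxiliary minimizer's external regret bounds the player's Phi-regret."""
    plays = []
    for _ in range(T):
        P_t = aux_minimizer.recommend()
        x_t = stationary_distribution(P_t)
        loss_t = observe_loss(x_t)
        aux_minimizer.observe(np.outer(x_t, loss_t))  # gradient of <l_t, P^T x_t> in P
        plays.append(x_t)
    return plays

# tiny usage example with random losses on 3 actions
rng = np.random.default_rng(0)
trajectory = phi_regret_play(MatrixHedge(d=3), lambda x: rng.uniform(size=3), T=100)
print(trajectory[-1])
```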
<doc-sep>This paper presents an uncoupled no-regret learning dynamic provably converging to the extensive-form correlated equilibrium in general-sum n-player extensive-form games. The central claim of the paper is that the introduced approach results in the convergence rate of O(T^{-3/4}) to the equilibrium and hence supersedes the recent algorithm of Farina et al. which converges with rate O(T^{-1/2}). To achieve this, the authors follow the construction presented by Farina et al., but employ stable and predictive regret-minimizers from the class of Optimistic Follow the Regularized Leader algorithms instead of CFR/regret-matching used by Farina et al. The main portion of the paper is then dedicated to the theoretical analysis of the dynamic to arrive at the desired convergence rate. To this end, the authors first study how to construct stable-predictive regret minimizers for the convex hull of the set of trigger deviation functions for a given sequence, and consequently also for the composite regret minimizer. This enables them to bound the overall incurred regret. The construction also requires that every trigger deviation function admits a fixed point that is efficiently computable by a stable oracle. The stability is hence of the authors' interest in the second part. The paper is concluded by an experimental evaluation of the algorithm that shows that it is superior or performs on par with the algorithm of Farina et al. instantiated with regret matching or vanilla multiplicative weights updates. Computing game-theoretic solution concepts that capture the idea that the players may coordinate their strategies based on external signals has become an increasingly active field in recent years, whether in the form of correlated equilibrium, team correlated equilibrium, or other related concepts. Correlation has several attractive properties, with efficient computability among the most prominent ones and enough applications to justify further research in this direction. Designing uncoupled dynamics to approximate an extensive-form correlated equilibrium faster is hence definitely a problem worth studying. The authors' result is interesting, and I am convinced that the presented experimental results provide enough evidence to support their claims. I went through some of the proofs included in the appendix, and I could not find any obvious mistakes. The authors cite the relevant literature, and I also appreciate that they mention that alternation and linear averaging were left out from the experiments because of their lack of guarantees, which I find to be a valid argument. I have a few concerns as well, though. Perhaps the main one relates to the presentation of this work. Explaining the problem and techniques the authors study requires a lot of background. Already in the introduction, the authors mention different kinds of regrets and other fundamental notions without describing the main ideas behind these concepts or how they relate in general. This may make some readers confused. Several times I also found myself wondering how different claims relate together, e.g., why are "certain Markov chains" important, how are they used in the proofs, and what their stationary distributions relate to. This makes it difficult to follow the authors' thought processes, and the reader is forced to rely on the explanations in the appendix, which contradicts the idea of the main text being self-contained. 
The work is also highly technical, which is an inherent problem of the framework of sequential decision-making. The problem is further aggravated by the plethora of notation, including upper and lower indexing. I believe a table of symbols similar to Table 1 in Farina et al. 2021a would be convenient. Moreover, the work is absolutely void of examples, despite containing many definitions and advanced concepts. The authors may consider providing some to help the readers familiarize themselves with the introduced notions. The main result the authors present is indeed interesting. Still, after reading through the appendix, my impression is that many of the constructions and proofs (with the important exception of Lemma B.7) are directly inspired by previous works, especially the papers by Farina et al. May the authors elaborate what were the main difficulties they encountered when extending the results into the framework of optimistic regret minimizers? Farina et al. also define approximation of extensive-form correlated equilibrium using regret minimizers on the set of deterministic sequence-form strategies (Theorem 3.7 in Farina et al. 2021a), and moving to the set's convex hull (i.e., mixed strategies) will result in a guarantee with high probability only. According to Section 3.1, the sequence-specific regret minimizers are constructed from the regret minimizers for the mixed strategies (Q^i_j). Should not the approximation mentioned, e.g., already in the abstract, be hence also probabilistic? I understand that using the technique of Chen and Peng would result in a non-polynomial degradation in stability. May I ask how this would affect the overall rate of convergence to extensive-form correlated equilibrium and computational complexity of the algorithm computing fixed points? More specifically, would the convergence rate contain game-specific exponential constants and otherwise be similar, or would it change completely? Lastly, I wonder how fast one iteration of the OMWU-based dynamic is, compared to the alternatives (regret-matching+ and vanilla MWU)? Nits: The proof of proposition 3.3 is missing a reference (it reads "??"). Equation 45 is missing a comma. At the beginning of Section 3, should not (2) be 2) instead? The word "coarse" is missing in Corollary C.7. Overall, I am slightly in favor of accepting this work. The problem is worth studying, the main result the authors present is exciting, and the empirical results support their claims. My main concerns relate to the presentation of this work. The text is extremely technical and notation-heavy, and I believe many concepts should be more adequately explained for a reader less familiar with no-regret learning to understand this work. The manuscript also completely lacks any examples. I remain unsure if a better exposition is possible given the ICLR's strict page limit and if this work would not be better suited for journals or conferences that allow longer narratives. <doc-sep>Previous work has identified uncoupled, regret-based dynamics that converge to extensive-form correlated equilibria in extensive-form games at a rate of O(T^{-0.5}) where T is the number of repetitions of the game. This paper provides a new uncoupled, regret-based dynamics that achieves O(T^{-0.75}). It does so by combining analysis techniques for regret-based dynamics that have shown such accelerated convergence in other settings with the framework used to achieve the O(T^{-0.5}) rate. 
Technical contributions in realizing this combination include a new analysis of a relevant Markov chain. A limited set of experiments confirms faster convergence relative to prior approaches. On the positive side, the improved convergence rate is a clear theoretical contribution, and the experiments suggest that a practical implementation of these techniques is possible. On the negative side, as discussed in detail below, the techniques used to achieve it essentially boil down to combining the approach of Farina et al. 2021a with a line of work on slowly-changing regret minimizers which has been shown to achieve accelerated rates in a number of settings, so the acceleration here is not terribly surprising. While such a combination may require substantial novel insights, the exposition does not currently make it clear what those are. More broadly, the positioning of the contribution of this paper relative to related work needs to be clearer. I will discuss several particular relationships which do not appear to be sufficiently clearly explained, and for some of them this appears to expose weaknesses in the paper. Farina et al. 2021b [F21b]. This paper achieves an O(T^{-0.5}) rate to a weaker solution concept (EFCCE). However, relative to EFCE it avoids an expensive computation at each step. This paper is used as a baseline in the experiments in Section 4, and thus has two differences (slower rate and different solution concept). The discussion omits the second difference and appears to discuss it as converging to an EFCE. So I am left wondering which factor is responsible for the worse performance in the experiments. Furthermore, Appendix C of the supplemental materials (which does not appear to be referenced in the main text) discusses how the result of this paper can be combined with those of [F21b] to get accelerated convergence to EFCCE. Given this, I am not sure why the experiments do not isolate the two factors (rate vs solution concept) and which version of the algorithm in the paper the experiments are actually testing (i.e. do they actually use the EFCCE version?). The line of work on distance-generating functions (Hoda et al. 2010, Kroer et al. 2020, Farina et al. 2021c). These techniques have achieved O(T^{-1}) in other settings but are mentioned only briefly. Why not adopt these techniques rather than the ones that lead to O(T^{-0.75})? I don't a priori see why they should be incompatible with the framework from Section 3, and if they can be applied, it makes the headline result of O(T^{-0.75}) less impressive unless there are other reasons to prefer this approach. Farina et al. 2021a [F21a]. While this work is referenced throughout the paper, the technical approaches share so much overlap that I have some difficulty disentangling where exactly the technical contributions in this paper are. In particular, it is not precisely clear to me what constitutes the "stable predictive $\Phi$ template" referred to as being of independent interest. [F21a] appear to use the same framework based on Gordon et al. 2008, which incorporates an arbitrary regret-minimizing dynamics. So it would be helpful to call out which specific results require new insights to apply it to a stable-predictive version (which has been explored in several previous works, although perhaps mostly for normal-form games).
The one result specifically mentioned is Lemma 3.9, and I could not find a proof of it in the supplementary material (there does seem to be a proof of the referenced Corollary B.9, but it builds on different results, although they appear at least somewhat similar in spirit). As a result, I have a hard time evaluating the significance of the technical contribution of this paper. Finally, as a minor note, I am a bit confused about the status of [F21a]. This paper describes it simply as "very-recent follow-up work" to Celli et al. 2020, while [F21a] describes itself as an extended version of that paper in a note on the first page. While I appreciate the headline theoretical contribution, given my concerns I am not convinced that this version of the paper makes a sufficiently novel technical contribution. One possibility is that it does, but the technical exposition needs to be improved to more clearly highlight the contribution. Another path forward, since this paper is heavily based on the approach of [F21a], improves its results, and that paper appears to be still unpublished, would be to consider whether the authors of [F21a] would be amenable to merging the papers. A third possibility to extend the results and demonstrate the power and flexibility of the approach would be to show that it can also be adapted to work with the line of work which uses dynamics based on carefully-designed distance-generating functions to achieve an O(T^{-1}) bound.
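The "stable-predictive" (optimistic) regret minimizers these reviews repeatedly mention can be summarized in a few lines. Below is a minimal sketch of optimistic multiplicative weights on the simplex, i.e., optimistic FTRL with an entropic regularizer and the last observed loss used as the prediction; the step size and last-loss prediction are standard defaults chosen for illustration, not the paper's tuned instantiation over sequence-form strategy sets.

```python
import numpy as np

def optimistic_mwu(losses, eta=0.2):
    """Optimistic multiplicative weights on the probability simplex.

    Plays x_t proportional to exp(-eta * (L_{t-1} + l_{t-1})), where L_{t-1} is
    the cumulative loss and l_{t-1} is the previous loss, used as a prediction
    of the next one. When consecutive losses are similar, consecutive iterates
    change slowly; this stability/predictivity is the ingredient the accelerated
    analysis needs from its component regret minimizers.
    """
    d = losses[0].shape[0]
    cumulative = np.zeros(d)
    prediction = np.zeros(d)  # prediction of the upcoming loss (here: last loss seen)
    plays = []
    for loss in losses:
        logits = -eta * (cumulative + prediction)
        x = np.exp(logits - logits.max())
        x /= x.sum()
        plays.append(x)
        cumulative += loss
        prediction = loss
    return plays

# tiny usage example: slowly drifting losses on a 3-action simplex
rng = np.random.default_rng(0)
base = rng.uniform(size=3)
traj = optimistic_mwu([base + 0.05 * rng.normal(size=3) for _ in range(200)])
print(traj[-1])
```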
This paper builds upon existing work to prove that learning (correlated) equilibria can be fast, i.e., faster than the $1/\sqrt{T}$ rate, even in extensive-form games. Three reviewers are rather lukewarm, and one reviewer is more positive (but seems less confident in his score). The two major criticisms are that this paper is very difficult to read and that the results might seem rather incremental with respect to the literature. I tend to agree with both points, but the paper still has merits: the reason is that extensive-form games are intrinsically much harder than normal-form games, and papers on them more or less all carry a heavy notational burden. We agreed that the authors actually made some effort to make it fit within the page limit, but another conference or a journal would have been better suited than ICLR. Our final conclusion is that the result is interesting yet maybe not breathtaking for the ICLR community; we are fairly certain that another venue for this paper will be more appropriate and that it will be accepted in the near future (I can only suggest journals, given the large amount of content and notation, such as OR, MOR, or GEB; conferences such as EC would also be in scope). It does not, unfortunately, reach the ICLR bar.
This paper addresses the problem of coreset selection for realistic and challenging continual learning scenarios. The authors proposed Online Coreset Selection (OCS), a simple yet effective online coreset selection method to obtain a representative and diverse subset. The reviewer lists the major strengths and weaknesses as follows. 1. Strengths: This paper is well-structured and contains sufficient experiments. Its motivation is meaningful and interesting. 2. Weaknesses: a. The authors did not explain how difficult it is to adapt existing algorithms to online learning, and the way the proposed algorithm is designed for online learning seems straightforward and cannot be regarded as a genuine technical contribution. b. The algorithm lacks novelty. The core part of the proposed algorithm seems very similar to the core part of 'Asymmetric Multi-task Learning Based on Task Relatedness and Loss'. The three adopted selection strategies do not demonstrate enough novelty, either. c. Some grammatical errors and typos exist, such as 'let .. is' in definition 3. In a nutshell, the reviewer regards this paper as a borderline paper, given the limited technical innovation. I read the authors' rebuttal. Some of my questions and concerns were addressed well. However, I think that the core technique of this work does not advance the research in continual learning significantly. So, I adhere to my previous rating. <doc-sep>The paper presents three gradient-based selection criteria to select the core-set for improving adaptation and reducing catastrophic forgetting. Differently from other methods, the proposed approach selects the instances before updating the model. STRENGTHS: 1) The paper is very well written and presented. 2) The core-set problem is an important and under-researched area in continual learning, especially in its online form. 3) Evaluation is very well executed. 4) The introduction of diversity as a criterion for core-set selection is interesting. WEAKNESSES: 1) As the term "online" appears in the title of the paper, this modality should be better introduced and motivated. A clear explanation appears in the related work at the end of the continual learning paragraph. However, the term remains not properly defined and seems to be related to the problem of imbalance. What exactly does imbalance mean? In the reviewer's understanding, the term seems to be defined as in [*] and in (Aljundi 2019b), that is, the task distribution is not i.i.d. Given that definition, why should selecting the core-set beforehand provide an advantage? The reviewer understands that the performance is improved; however, the writing states that selecting the core-set beforehand has a dependency on the imbalanced task distribution and that this is an advantage with respect to (Rebuffi et al., 2017; Aljundi et al., 2019b;a; Chaudhry et al., 2019a;b). If this is not the case and the motivation for selecting the core-set before adaptation is the good empirical results, then it should be better remarked in the paper. [*] Chrysakis, Aristotelis, and Marie-Francine Moens. "Online continual learning from imbalanced data." International Conference on Machine Learning. PMLR, 2020. 2) Figure 2 shows a dataset that is not addressed by the method and is somewhat misleading. The "Multidataset" used in the paper does not include the CIFAR dataset (i.e. complex objects like dogs and vehicles). 3) In the reviewer's opinion, the claim of large improvement, included in the third contribution, seems somewhat bold.
For example, in the CIFAR-100 balanced and unbalanced learning settings in Tab. 1, the performance does not differ too much from the herding strategy used in iCaRL (i.e. 60.3 vs 60.5, 51.2 vs 51.4). Although the reviewer noticed that in rotated MNIST and Multidataset improvements are evident, these datasets do not typically "transfer" their performance to larger datasets (i.e., ImageNet) as CIFAR-100 typically does. 4) The validating hypothesis of Fig. 3 (i.e. learning from MNIST to CIFAR10) seems to favor diversity, which is exactly what herding is not doing. In herding, examples are selected closer to the class mean of the feature representation in each class. Herding is somewhat orthogonal to diversity. This may partly explain the lack of improvement on CIFAR-100. This part should be discussed in depth (i.e., motivate that the method does not mostly favor datasets with diversity). 5) In the reviewer's opinion, the ablation study of the effect of the gradient is not sufficient to justify its usage. As also remarked in point 3) of this review, the MNIST family of datasets typically does not "transfer" its performance to more complex and bigger datasets. In other words, using the gradient on MNIST does not imply that the gradient on CIFAR or bigger datasets is a good choice. Is the gradient computationally expensive for deeper neural networks? For example, what happens with one or two more orders of magnitude of parameters? This is a nice and well written paper in which some details need to be clarified. Specifically, imbalance, selecting the core-set before/after adaptation, dataset diversity and selection, the "transferability" of performance of the datasets. <doc-sep>This paper proposes an Online Coreset Selection method that selects the most representative and informative coreset at each iteration and trains on it. The proposed method maximizes the model's adaptation to a target dataset while selecting high-affinity samples to past tasks, which directly inhibits catastrophic forgetting. Experiments on the benchmark datasets show competitive results compared with baselines. 1. The motivation of this manuscript is not clear. The authors should clearly claim the challenging issues in previous methods. 2. The authors should complement the theoretical explanation of the success of the proposed approach. 3. While the online coreset selection method adopted in the manuscript seems plausible, it is not exciting. In general the paper is well organized and clearly written. The technical details are easy to follow. Experiments on the benchmark datasets show promising results compared with baselines. <doc-sep>The authors propose a novel approach for online coreset selection, i.e., exemplars used in the rehearsal process of past tasks in a continual learning framework. The proposed method is based on the observation that not all the samples in a dataset are equally valuable; their quality affects the model's effectiveness and efficiency. The method selects the most representative and informative samples at each iteration and trains on them in an online manner. The approach has been convincingly compared with state-of-the-art methods and demonstrated its superiority.
Positive aspects: - The proposed approach presents scientific novelty - The related work section covers the most relevant papers in the field - The experimental validation is extensive and the authors demonstrated the superiority of their approach. Negative aspects: - The idea is in general well-explained, although the paper lacks clarity in some aspects (see the detailed comments below); therefore it could be further improved - There are also some issues with the proposed approach which are not clear enough. Please find below my main concerns: 1. The following statement is not totally clear: "The naive CL design cannot retain the knowledge of previous tasks and thus results in catastrophic forgetting". What do you mean by 'naive CL design'? Please reformulate this statement. 2. Throughout the paper, you repeatedly use the expression 'target dataset'. What do you mean by 'target dataset' in a continual learning framework? It is confusing: if I have to learn 10 tasks, which one is the target dataset? I guess you refer to the 'current task'. Therefore, please use this formulation instead and change it throughout the document. 3. How many representative instances do you select from each mini-batch? Since a sample is presented several times during training, shouldn't you use a cumulative measure, with the final ranking/selection done at the end of training? How do you guarantee the class balance of the selected core-set? Please plot the distribution of samples per class resulting from the coreset selection process. 4. I did not understand equation 3. What does index 't' refer to: the task ID? How is it possible to measure the similarity between a sample and the minibatch it belongs to? Or do you consider the similarity between a sample and the batch average? Something is missing there. 5. Equation 4: Confusing in terms of notation and terminology! You refer to it as 'cross-batch', but eq. 4 is about the similarity between samples in the same batch! Shouldn't 'cross-batch' refer to the similarity between a sample from one batch and the samples from different batches? Why is the similarity measure in eq. 4 negative? Please reconsider the notation and formulation of eqs. 3 and 4. 6. Usually, in exemplar-based CL, the memory allocated for exemplars is known and fixed from the beginning. While new tasks are being learned, the number of exemplars from previous tasks is decreased in order to keep the memory size. What strategy do you adopt in your approach regarding the memory capacity where the exemplars are stored?
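Since the exact forms of eqs. (3)-(4) questioned above are not reproduced in these reviews, the sketch below only illustrates one plausible reading of the gradient-based selection being discussed: score each sample in a minibatch by the similarity of its gradient to the minibatch mean gradient (representativeness) minus its average similarity to the other samples (diversity). The function name, the cosine-similarity choice, and the `diversity_weight` are illustrative assumptions, not the authors' actual formulation.

```python
import numpy as np

def select_minibatch_coreset(per_sample_grads, k, diversity_weight=1.0):
    """Pick k of n minibatch samples by a representativeness-minus-redundancy score.

    per_sample_grads: (n, d) array with one flattened gradient per sample.
    Returns the indices of the k selected samples. This is an illustration of
    gradient-similarity-based online coreset selection, not the paper's code.
    """
    g = per_sample_grads / (np.linalg.norm(per_sample_grads, axis=1, keepdims=True) + 1e-12)
    mean_dir = g.mean(axis=0)
    mean_dir /= np.linalg.norm(mean_dir) + 1e-12

    representativeness = g @ mean_dir                        # cosine similarity to the batch mean gradient
    pairwise = g @ g.T
    np.fill_diagonal(pairwise, 0.0)
    redundancy = pairwise.sum(axis=1) / max(len(g) - 1, 1)   # mean similarity to the other samples
    scores = representativeness - diversity_weight * redundancy
    return np.argsort(scores)[-k:]

# usage: select 8 of 32 samples from synthetic "gradients"
rng = np.random.default_rng(0)
print(select_minibatch_coreset(rng.normal(size=(32, 10)), k=8))
```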
The authors propose three strategies for coreset selection in the context of continual learning. In particular, the authors consider class-imbalanced and noisy scenarios. The authors run extensive benchmarks and ablations showing that the approach can be effective in practice. All reviewers were positive about this work, but found that the methodological contributions were relatively modest. The clarifications provided by the authors were highly appreciated. I would encourage the authors to revise the paper to incorporate these additional details, as there were a number of concepts that reviewers found were not sufficiently documented/explained and lacked clarity. I would also highly encourage the authors to explain their use of "online continual learning", as this reads like a tautology. Finally, I would like to ask the authors to reflect on their insistence with the reviewers; while we would all want engaging and long discussions about our work, the reality is that reviewing papers and discussing them is time-consuming and taxing, especially in the middle of a continuing pandemic. The authors should be grateful for the time reviewers have spent reading their work and providing feedback, and it is not in the authors' interest to ask for a revision of the scores.
This paper proposes NeurOLight, a variant of neural operators suitable for emulating optical devices efficiently. While the core behavior of the proposed NeurOLight is similar to Fourier neural operators (FNOs), the authors introduce several techniques for optical simulations, including scale adaptation for merging variant domains, wave priors for Maxwell PDE encoders, masked image modeling for the light source, a cross-shaped FNO that separates the horizontal and vertical pattern predictions, and superposition-based data augmentations. All these techniques allow NeurOLight to streamline the complexity of the model as well as improve the generalization of its predictions. The authors show the proposed NeurOLight outperforms other baselines such as U-Net and FNO. They also provide some ablation studies that support that the proposed techniques are indeed helpful. The paper has many merits: the introduced ad-hoc techniques are convincing for the targeted domain, the writing is clear, and the experimental validation is solid. The paper also demonstrates several interesting results, including spectrum analysis and domain transfer learning. Without a doubt, it is a good application paper for optical device simulations. My main concern is that the target domain seems to be a niche problem for NeurIPS audiences. Because the proposed techniques are largely specific to such a niche domain, the relevance and significance of this paper are not entirely clear. Overall, I think this paper is worthy of publication at some venues (maybe a PDE-related workshop or computational physics journals), but the venue does not have to be the NeurIPS main conference. The authors do not explicitly state the limitations of their proposed method. As I mentioned in Strengths and Weaknesses, I think that the main limitation of this paper is that the targeted problem is less relevant to the NeurIPS audience. <doc-sep>This paper proposes a neural operator that jointly models multiple parameters of EM simulation. Given the input light wavelength, permittivity, and domain/material properties, the goal here is to establish a parametric mapping from the inputs to the output EM field via a neural network. To this end, the input domains are first normalized in terms of spatial scales and resolutions. Then, the normalized inputs are encoded as a wave prior similar to the Fourier positional encoding. Following that, masked image modeling allows casting the challenging optics simulation problem as a synthesis problem for the missing regions given the input light. The network architecture for doing that is inspired by the Fourier neural operator, with a separable modification: using two 1D FFTs instead of one 2D FFT. The authors also propose an augmentation method that uses linear combinations of input lights/output fields as additional inputs/outputs. Overall I like this paper. It tackles an important but challenging problem with novel solutions. Execution is also excellent. I hope the comments below are helpful to refine the paper. Strengths - The paper tackles an emerging problem of learning-based optics simulation. The proposed method achieves SoTA performance on device-level simulation in terms of both accuracy and speed. - The paper is very well written with clear descriptions and figures. - Execution of the paper is of high quality. Weaknesses - There is no evaluation of the overfitting which often occurs in neural optical operators. - Evaluation is limited to the device scale, as mentioned by the authors.
- The comparison of the proposed method running on a GPU is done against the FDFD method running on CPUs, as described in Fig. 1. - L104: It would be helpful to write the detailed forms of A, x, and b in the main paper. The authors adequately addressed the limitations. <doc-sep>This paper proposes a physics-agnostic light field prediction framework, called NeurOLight, that consists of a joint PDE encoder and an efficient cross-shaped neural operator backbone. A superposition-based mixup technique is developed to dynamically boost the data efficiency and generalization during the training of NeurOLight. Evaluation results show that the proposed method significantly outperforms the existing methods. Strengths: [1] The idea is novel and interesting. It proposes a physics-agnostic light field prediction framework that can improve the efficiency and accuracy of learning parametric Maxwell PDEs. [2] It is the first AI-based framework that can learn terahertz light propagation inside photonic devices and generalizes to different domains. [3] The proposed framework significantly outperforms the state of the art with an average of 53.8% lower prediction error. Weaknesses: [1] It is not clear how much prior knowledge we should have for the PDE encoder. If we do not have any prior knowledge in some applications, how does the proposed method work? [2] It would be better to report results averaged over multiple random seeds in Table 1. Yes, they have.
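The superposition-based mixup mentioned in both reviews exploits the linearity of the frequency-domain Maxwell system Ax = b noted above: for a fixed device (fixed A), any linear combination of (source, field) training pairs is itself a valid pair, since A(sum_i c_i x_i) = sum_i c_i b_i. The sketch below illustrates that idea; the Gaussian mixing coefficients and their normalization are assumptions for illustration, not the paper's exact sampling scheme.

```python
import numpy as np

def superposition_mixup(sources, fields, n_aug, rng=None):
    """Augment (light source, solved field) pairs simulated on the SAME device.

    sources, fields: arrays of shape (n, H, W), possibly complex-valued.
    Returns n_aug new pairs obtained as random linear combinations of the
    originals, which remain valid solutions because the governing system is
    linear in the source/field for a fixed permittivity distribution.
    """
    if rng is None:
        rng = np.random.default_rng()
    n = len(sources)
    aug_sources, aug_fields = [], []
    for _ in range(n_aug):
        c = rng.normal(size=n)
        c /= np.linalg.norm(c) + 1e-12   # keep mixtures at a comparable scale (assumed choice)
        aug_sources.append(np.tensordot(c, sources, axes=1))
        aug_fields.append(np.tensordot(c, fields, axes=1))
    return np.stack(aug_sources), np.stack(aug_fields)

# usage: 16 augmented pairs from 4 simulated ones
rng = np.random.default_rng(0)
src = rng.normal(size=(4, 32, 32))
fld = rng.normal(size=(4, 32, 32))
aug_src, aug_fld = superposition_mixup(src, fld, n_aug=16, rng=rng)
print(aug_src.shape, aug_fld.shape)
```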
The authors propose a domain-specific extension of neural operators that is appropriate for photonics applications. This is an interesting application of neural operators which demonstrates the usefulness of building in physical priors. Some reviewers expressed concern about the topic being too far outside the usual focus of NeurIPS, but there is also an upside to introducing novel application areas to the NeurIPS community. All reviewers agreed the work was of high quality and worth accepting, so I recommend acceptance.
This paper describes the agreement-on-the-line phenomenon: instead of relying on accuracy-on-the-line, the paper shows that it is possible to use the agreement of a pair of models as a proxy for the ID vs OOD performance tradeoff. Agreement-on-the-line can be estimated solely from unlabelled data and can be used to predict potential OOD performance. Empirical results based on CIFAR-10, ImageNet, and WILDS OOD data show good OOD performance predictions. ## Strengths - Estimating the OOD detection and generalization performance of ML models is an important topic. - The paper is well written and easy to follow. - The proposed method of considering the disagreement between multiple models is simple and does not require labeled OOD data access. - Disagreement ensembling can also be performed over checkpointed models instead of multiple fully trained models. ## Weaknesses - Although it seems unlikely that a broad set of models would all agree on the same prediction for an OOD data point, this event probably has non-zero probability, especially for a low number of models. This means that the proposed method probably cannot be guaranteed to be a good OOD performance predictor under all circumstances. This difficulty seems to stem from the fact that behavior on in-distribution data is well understood while this is not the case for OOD data. - Baseline methods for the experimental section are not well discussed. It would be helpful to introduce these methods in the related work section. - Limitations are not well discussed. The paper does not appear to explicitly discuss major limitations of the presented work. I was not able to find other limitations explicitly mentioned. It would be great if the authors could comment on the first weakness (and question) listed above. <doc-sep>This paper identifies and extensively analyzes an interesting phenomenon relating the agreement and accuracy of models on in-distribution and out-of-distribution data. In particular, they find that when there is a linear correlation between in-distribution test accuracy and out-of-distribution test accuracy across a set of distinct models trained on this data (a concept discovered in an earlier paper), there is also a linear correlation between the *agreements* of pairs of models trained on that data. This phenomenon is identified to be unique to neural networks, and somewhat surprisingly, can be used in the reverse direction as a method of unsupervised estimation of out-of-distribution accuracy by measuring agreement - even in cases where in-distribution and out-of-distribution agreement is only roughly linearly correlated. * **Originality** While other papers have used agreement to determine how well a model performs on unlabeled data, to the best of my knowledge, this is a completely novel phenomenon, identified by comparing agreement-on-the-line with the recently discovered accuracy-on-the-line. It is clear how this work differs from previous contributions, and the analysis shows new insights that were not described in previous work. * **Quality** The claims are mostly well-substantiated, with extensive experimentation supporting the major claims. The methods used are appropriate, and both the strengths and the weaknesses of the work are remarked on.
A possible area of improvement could be in the claim that agreement-on-the-line and accuracy-on-the-line have the same slope and bias for neural networks but not for other models - for example, this could be rigorously tested with a hypothesis test to give further evidence for that claim. Another area of improvement could be increased evaluation with respect to the methods for predicting OOD accuracy - for example, the ALine method that only uses a single model would be useful to have in Table 2, especially if it is in the main body of the paper. * **Clarity** The paper is very clear, both in its organization as well as its figures. It is clear how to run the experiments and reproduce these results, as well as exactly what the two different algorithms are doing. A minor nitpick would be to remove the citation in the abstract so that it can be read completely stand-alone. * **Significance** The paper is significant both from a theoretical and a practitioner's point of view. From the theoretical viewpoint, it provides further evidence that some domain shifts have very particular properties that create both the agreement- and accuracy-on-the-line phenomena. It is very unclear what that phenomenon is or how one might be able to predict whether it is occurring, but it clearly has some interesting implications. For the practitioner, this work provides a method for unsupervised prediction of performance on OOD data that is significantly better than other methods (especially when agreement-on-the-line holds), which is a very valuable contribution that I expect will be used. The paper does a nice job describing its limitations, both in the number of models that need to be trained as well as its reliance on the accuracy-on-the-line phenomenon. The paper could benefit from further discussion of the kinds of domain shift where agreement-on-the-line is guaranteed not to hold, e.g., if there were a label shift. <doc-sep>This paper introduces a finer-grained version of accuracy-on-the-line called agreement-on-the-line, capable of estimating OOD performance without access to OOD labels. Rather than analyzing the correlation between ID and OOD performance metrics like accuracy-on-the-line, the authors analyze the predictions made by pairs of models, and observe that strong agreement between model predictions on ID data correlates with prediction agreement on OOD data. Moreover, agreement-on-the-line can be used as a test for when accuracy-on-the-line is appropriate to use, due to the apparent co-occurrence of the two phenomena. The main advantages of their method over previous works are not having to rely on assumptions about data shift magnitude, and the ability to aggregate information from many pairs of models. The authors also validate their claims empirically on a variety of datasets, and for several model classes. S1: Agreement between models is a clever surrogate for whether accuracy-on-the-line holds. It is fast and simple to compute. S2: Results are impressive, and consistent across the several datasets considered, particularly for the Camelyon and iWildCam datasets. S3: The ability to use multiple checkpoints for a single model rather than having to train many independent models is very appealing. This was something not investigated by the accuracy-on-the-line paper, so it is a novel and intriguing result. W1: An analysis of agreement vs accuracy would be welcome. For example, what is the biggest possible discrepancy in agreement for two models M1 and M2 that have the same ID accuracy Acc?
This could be used to bound the difference in slope between the agreement and accuracy lines, when accuracy is not computable. W2: The definitions of weak (R^2 < 0.75) and strong (R^2 > 0.95) correlation seem a bit arbitrary. Such definitions vary throughout the literature, though an R^2 of 0.75 is generally considered to be quite strong. W3: Some of the claims in section 5 are a bit hand-wavy. For example, "Interestingly, the other benchmarks also did not perform very well on these datasets, suggesting that perhaps the success of these prediction methods could also partially be attributed to accuracy-on-the-line" needs some more justification. There is almost no discussion of limitations, but a couple come to mind. For example, the baselines used in Figure 2 such as linear models and SVMs are not appropriate for CIFAR10 classification. Understandably the authors wanted to see if their results hold for classical ML models, but such models should be evaluated on datasets where they can achieve good performance since conclusions drawn from poorly performing models can be misleading. Also, the experiments section does not investigate why agreement-on-the-line is so effective. It would be nice to see if there is something particular about the optimization procedure used, i.e. the inductive bias of SGD, that enables agreement-on-the-line to work when using model checkpoints. Since the premise of the paper is that the observed phenomenon is peculiar to neural networks, the conclusion should at least discuss what role architecture plays. <doc-sep>The paper investigates empirically the important and practically very relevant question of OOD performance. The approach taken builds on previous work of 'accuracy-on-the-line' (which observes the occurrence of strong linear (transformed) correlation of ID vs OOD accuracy in some datasets) and adds the simultaneous evaluation of agreement between pairs of models in the OOD vs ID. On a varied set of ID/OOD Image classification tasks, the work empirically finds that a similar linear (transformed) relationship exists or does not exists jointly for accuracy and agreement. Further the findings show a strong similarity between the slopes and bias of the agreement and accuracy ID vs OOD lines on a significant subset of the studied datasets. Interestingly outside of neural networks this similarity breaks down. Through these observations the authors address an important practical problem of assessing existing models' performance on OOD data, critically, without the availability of labels for the OOD data. The proposed method relies on the observed joint existence (or lack there-off) of accuracy-on-the-line and agreement-on-the-line phenomena --- such that if there is no agreement-on-the-line one can test (verify) it. Then, by exploiting the similarity of the lines, they show how to estimate the expected OOD accuracy. The paper proposes both a pair-model based assessment as well as a multi-model based assessment which allows for noise and potential bias reductions (e.g. from the transform mismatch). Finally and importantly, the paper shows for a specific case that agreement-on-the-line holds also across training. $\\textbf{Strengths:}$ The paper adds simple, novel and very powerful set of observations: * The existence (or lack there-off) of strong linear (transformed) correlation of accuracy is empirically found to go hand in hand with a strong linear (transformed) correlation of agreement between ID and OOD. 
* When both exist, the correlation coefficients are 'similar' (in a large subset of the cases) for the neural networks examined. These observations are interesting both theoretically (for further research) and practically --- for the pertinent question of how one may assess the expected OOD accuracy of a given model given OOD, unlabeled data. By building on the above observations together, the authors demonstrate how this practical question may be addressed: OOD accuracy estimation can be verified to be feasible (if the linear correlation is high) and carried out (assuming identical correlation coefficients hold between the accuracy ID vs OOD line and the agreement ID vs OOD line). The method is straightforward and clear given the assumptions. Throughout, the paper is clearly written. $\textbf{Weaknesses / areas of improvement:}$ 1. Most important --- assessment of single-model OOD performance: In practice, it is most often the case that the practical question is, for a given model, what would be its OOD accuracy. The authors introduce a path to address that (this is a strength of the paper) through the throughout-training agreement vs accuracy experiment; however, the authors only cursorily touch on the same-model OOD performance with a single example. Since this is a very powerful and practical setting, seeing that it generalizes across the datasets is important. It is advisable that the authors expand this experiment across the datasets, and in particular for the ImageNet/ImageNet-V2 datasets. 2. Generalizability and Implications --- Is agreement-on-the-line a fundamental phenomenon, and does it hold outside of image classification? It would increase the confidence in the generalizability of these findings (and their applicability) if demonstrated in the NLP context. It would be very good to add, for example, a checkpointed pre-trained transformer (a single model across training for simplicity) and evaluate loss and agreement loss for ID vs OOD. Even if the phenomenon does not hold in this setting, that would increase the insight provided by this paper for further research. Adding these items would make this an even stronger paper. $\underline{Minor:}$ stylistic / typos: line 112 --- reference 42 duplication; line 231 --- the $\Phi^{-1}$ probit transform is undefined (left for the reader to piece together); line 233 --- summation index typo, should be $i \neq j$; line 242 --- should read 'substituting $b=...$ into (5)'. In practice (and in the appendix) the 'almost the same slope and bias' observation seems to hold to various degrees across the data examined and is left unquantified, or only quantified indirectly (e.g. through the MSE of the accuracy estimation in a subset of the cases). Consider adding a systematic quantification across the datasets in the paper (with details in the appendix) in terms of the MSE of OOD accuracy prediction. yes
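A minimal sketch of the estimation procedure these reviews describe: compute pairwise agreement of several models on ID and OOD data (no OOD labels needed), fit a line in probit-transformed coordinates, check the fit, and then reuse the line's slope and bias to map each model's probit ID accuracy to an estimated OOD accuracy. Function names and the least-squares fit are illustrative; this is not the authors' actual ALine implementation.

```python
import numpy as np
from scipy.stats import norm

def probit(p):
    return norm.ppf(np.clip(p, 1e-6, 1 - 1e-6))

def estimate_ood_accuracy(id_preds, ood_preds, id_labels):
    """Agreement-on-the-line OOD accuracy estimation, sketched.

    id_preds, ood_preds: (n_models, n_id) and (n_models, n_ood) arrays of
    predicted labels from several independently trained models.
    id_labels: (n_id,) ground-truth ID labels. No OOD labels are used.
    Returns per-model OOD accuracy estimates and the R^2 of the agreement fit,
    which serves as the check for whether agreement-on-the-line holds at all.
    """
    n_models = id_preds.shape[0]
    id_agree, ood_agree = [], []
    for i in range(n_models):
        for j in range(i + 1, n_models):
            id_agree.append((id_preds[i] == id_preds[j]).mean())
            ood_agree.append((ood_preds[i] == ood_preds[j]).mean())
    x, y = probit(np.array(id_agree)), probit(np.array(ood_agree))
    slope, bias = np.polyfit(x, y, deg=1)     # agreement line in probit space
    r2 = np.corrcoef(x, y)[0, 1] ** 2         # strength of the linear fit

    id_acc = (id_preds == id_labels).mean(axis=1)
    # assume the accuracy line shares the agreement line's slope and bias
    ood_acc_est = norm.cdf(slope * probit(id_acc) + bias)
    return ood_acc_est, r2
```

If the returned R^2 is low, the estimate should not be trusted; that is precisely the self-check the reviews highlight as an advantage over blindly assuming accuracy-on-the-line.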
This work addresses the “agreement-on-the-line phenomenon” by extensively analyzing a phenomenon relating to the agreement and accuracy of models on in-distribution and out-of-distribution data. In particular, one of the findings is that when there is a linear correlation between in-distribution test accuracy and out-of-distribution test accuracy across a set of distinct models trained on this data, there is also a linear correlation between the agreements of pairs of models trained on that data. Agreement-on-the-line can be estimated solely from unlabelled data and can be used to predict potential OOD performance. The main advantages of their method over previous works are not having to rely on assumptions about data shift magnitude, and the ability to aggregate information from many pairs of models. The paper proposes both a pair-model-based assessment as well as a multi-model-based assessment which allows for noise and potential bias reductions. Empirical results based on CIFAR-10, ImageNet, and WILDS OOD data show good OOD performance predictions. The paper convinces in all four categories (originality, quality, clarity, and significance), and the reviewers all agree on accepting this work for publication. For the camera-ready version, it would be great if the authors could include a short description of the baseline methods and briefly discuss the reasoning behind choosing the R2 threshold.
The authors study how pruning network parameters affects relative recall across classes. In particular, they argue that high levels of pruning increase the asymmetry in recall already present in the dense network. If recall for a class is below accuracy before pruning, it tends to decrease relative to accuracy after pruning, and if it is larger than accuracy before pruning, it tends to improve relative to accuracy after pruning. They evaluate this hypothesis by constructing a model-level metric for "intensification" that aggregates information across classes and conducting statistical tests to verify this intensification effect. Using similar statistical tests, they observe that intensification is higher at high pruning ratios, and for high model and data complexity, but lower for low pruning ratios. They also study how the intensification effect varies across 3 existing pruning methods and introduce a new pruning method which they claim decreases this intensification effect. **EDIT:** The authors have sufficiently addressed my main concerns and the additional experiments are quite promising and make this a stronger paper. The methodology in this paper is quite thorough and I think that the findings will be interesting to the community. I am raising my score from a 6 to 7. **Strengths:** Though previous work has demonstrated that the impact of pruning is uneven, as far as I know, the specific hypothesis that pruning intensifies a pre-existing recall distortion is novel. This is an interesting hypothesis and has potential impact, as it enables one to predict, pre-pruning, i.e., based on the dense model, which classes are likely to be more affected by pruning. The authors perform a thorough and systematic statistical analysis to verify their hypotheses. The authors also study under what conditions (variations in model/dataset complexity and pruning ratio) the hypothesis holds. The methodology is sound and commendable. Additionally, the finding that small amounts of pruning decrease recall distortion (i.e., reduce intensification) is potentially very interesting and impactful. The paper is overall well written and clear, though a few minor typos and grammatical errors are a little distracting; I would recommend a careful proofread. **Weaknesses:** Despite the thorough experimentation, this work has a few weaknesses that I think need to be further addressed. I will outline them here but please see the Questions section for a more thorough discussion. 1. The primary metric $\alpha$ used in this paper has some weaknesses. 2. The direct utility of this work is a little unclear. A more explicit discussion about how this work is different from previous work and what the authors think the marginal benefit of this work is would also be helpful. (a) The motivation for and benefit of UP is unclear. (b) Reduction in recall distortion from small amounts of pruning is under-explored. (c) Lacks a demonstration of how this can be operationalized. The authors have sufficiently addressed the limitations of the work. <doc-sep>The paper studies the problem of recall distortion in neural network pruning. Particularly, the authors observe that pruning makes recall relatively worse for a class with recall below accuracy and makes recall relatively better for a class with recall above accuracy. The authors propose a new pruning algorithm, namely undecayed pruning, which can attenuate this effect. Observations made in the paper will be helpful to future work on network pruning. **Strengths:** 1.
The paper studies an important problem of recall distortion in neural network pruning. 2. The selected set of criteria is quite representative and contains magnitude-based criteria, gradient-based criteria, and random pruning. 3. The reported properties of pruning are quite interesting and novel. For example, intensification is less severe with the proposed undecayed pruning algorithm but nevertheless more pronounced with relatively more difficult tasks, less complex models, and higher pruning ratios. 4. Many pruning papers consider only one setting, but the authors use the same framework (datasets, models, pruning ratios) and provide insights via rigorous statistical tests. **Weaknesses:** 1. The paper lacks some conclusions about how to interpret intensification, in other words, how intensification helps design better pruning algorithms. 2. The proposed undecayed pruning is affected by the regularization techniques applied during training. For example, weight decay is often used in normal training. Batch Normalization and Dropout have implicit regularization. Could the authors comment on how those settings will affect undecayed pruning? 3. Other pruning methods: Significant progress has been made recently in developing advanced network pruning techniques, such as [1, 2]. Do these methods have the same recall distortion effect? Or are these methods already a stepping stone in the right direction towards reducing intensification? [1] Minsoo Kang and Bohyung Han. Operation-aware soft channel pruning using differentiable masks. In ICML, 2020. [2] Yang Sui et al. CHIP: CHannel independence-based pruning for compact neural networks. In NeurIPS 2021. **Post rebuttal** Thank you for your detailed rebuttal. All my concerns are addressed; the rebuttal includes a comparison with other pruning methods. I have decided to increase my rating from 5 to 6. Yes <doc-sep>This paper claims that network pruning inherently makes the model's class-wise accuracy (i.e., recall) imbalanced across classes, which the authors call the intensification effect. In order to prove the hypothesis, some ratios are introduced, which model the relative performance of each class with respect to the overall accuracy, and thereby an intensification ratio is finally defined and experimentally measured. In the experiments, the intensification effect is examined in terms of model complexity and dataset complexity, and the corresponding results show that the more complex the dataset and the simpler the model, the stronger the intensification effect. (Strengths) 1. The paper is well written and motivated, providing novel insights on network pruning. 2. The hypotheses are adequately designed and analyzed in a statistical manner. 3. Undecayed pruning is devised via a novel finding about the relationship between magnitude pruning and gradient pruning. (Weaknesses) 1. As the authors also mentioned, the tackled pruning schemes are somewhat naive and conventional, compared to the SOTA pruning methods. This weakness makes this work less interesting in practice. 2. Some new metrics should be more rigorously justified. For example, the recall balance and intensification ratio can be defined in many other ways rather than computing the relative figure with respect to the overall accuracy. 3. In Sections 7.2 and 7.3, the expected results on the intensification effect are not well explained. Thus, why are smaller models prone to the intensification effect? And why can more complex datasets make the effect stronger? Yes.
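The reviews above describe the effect in terms of per-class recall relative to overall accuracy, before and after pruning. The authors' actual model-level metric ($\alpha$) and their statistical tests are not reproduced here, so the sketch below only illustrates that underlying comparison: compute each class's recall-minus-accuracy gap for the dense and the pruned model and count the classes whose gap widens in the same direction.

```python
import numpy as np

def recall_balance(preds, labels, n_classes):
    """Per-class recall minus overall accuracy (positive = class above average)."""
    acc = (preds == labels).mean()
    recalls = np.array([(preds[labels == c] == c).mean() for c in range(n_classes)])
    return recalls - acc

def intensified_fraction(dense_preds, pruned_preds, labels, n_classes):
    """Fraction of classes whose recall gap grows in the same direction after pruning:
    classes already below overall accuracy fall further below, classes above it move
    further above. Only an illustration; the paper's alpha aggregates the per-class
    information differently and is paired with significance tests."""
    before = recall_balance(dense_preds, labels, n_classes)
    after = recall_balance(pruned_preds, labels, n_classes)
    same_direction = np.sign(before) == np.sign(after)
    widened = np.abs(after) > np.abs(before)
    return (same_direction & widened).mean()
```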
This paper studies the disparate effect of model pruning across classes and proposes a new method to reduce the "recall distortion" across classes. This is a critically important problem, and one which has just begun to be carefully studied in the literature, so this work is timely and relevant. All reviewers recognized the relevance of the problem and the novelty of the authors' approach, both with respect to the new approach presented here, as well as the detailed analysis of the various factors which impact recall distortion. There were some concerns regarding the complexity of the pruning algorithms studied, but the authors provided a number of additional experiments on other pruning approaches in their response, finding qualitatively similar effects (as might be expected given the reliance of many of these approaches on some form of magnitude pruning). I think this paper will be a valuable addition to a poorly understood and important research area, and should be accepted.
The manuscript challenges the widely believed assumption that Bayesian neural networks (BNNs) are well suited for out-of-distribution (OOD) detection, by showing empirical results obtained using infinite-width networks (allowing exact inference, because such a network can be equivalently represented by a GP with the induced kernel) and finite-width networks (requiring approximate inference). The manuscript provides several observations in line with this purpose (to disclose potential problems when BNNs are used for OOD detection). For example, exact inference in infinite-width networks does not necessarily lead to desirable OOD behavior; e.g., the posterior function standard deviation obtained from an infinite-width 2-layer ReLU network does not reflect the underlying data generating process, and this uncertainty is not suitable for OOD detection. In addition, this observation is consistent with the corresponding finite-width networks. Strengths 1. The manuscript includes not only finite-width networks but also infinite-width networks in the analyses. 2. The manuscript points out potential problems when naively using BNNs for OOD detection. Weaknesses 1. The manuscript does not provide any theoretical analyses for BNNs on OOD problems. 2. All the conclusions are made from toy simulated datasets. The manuscript makes a certain contribution in that it points out potential problems when naively using BNNs for OOD detection. However, I think that this contribution is not enough, due to the same reasons I listed as the weaknesses of this study above. <doc-sep>In this work, the authors challenge the assumption that underlies many recent works that Bayesian neural networks should be well-suited to out-of-distribution detection. In order to do this, the authors focus on a function-space view by examining the properties of infinite-width BNNs. They use this analysis to argue that BNNs may not necessarily be well-suited to OOD detection. They further argue that there is a tradeoff between OOD generalization and uncertainty, and finally propose an alternative method of validating OOD properties of models. The paper is well-written and clear, and I enjoyed reading it. I think the topic is extremely timely and I fundamentally agree with the authors that there is not necessarily any good reason to think that BNNs should be good for OOD detection. While the analysis does not really involve anything new from a technical level, the observations the authors make have important implications for the field and have not, to the best of my knowledge, really been discussed before. However, there are aspects that I believe could be improved. Therefore, I am willing to accept the paper, but only tentatively. My main reservation is that the paper only truly considers regression tasks, as the classification task it does consider is done via regression. While the authors do this to avoid having to use approximate inference, classification with classification likelihoods tends to perform quite differently to classification by regression. This is particularly relevant, as most works assessing BNNs on OOD detection that I am aware of do so on classification, not regression. It would be good if the authors could perform the same HMC using logistic regression on their toy problems. Another reservation I have is that practically all the examples are low-dimensional and toy.
While I don't think this is strictly necessary for acceptance, since the paper makes no strong claims about more difficult problems, the paper would be significantly strengthened by using more complicated problems. Finally, it would be good if the authors could comment more about OOD generalization. This would be particularly relevant, as there has been a recent surge of interest in the performance of methods on OOD datasets such as CIFAR-10-C. In particular, the authors propose a method for validating OOD properties in Sec. 7 as an alternative to these sorts of datasets. However, the authors do not really provide many details as to how one could achieve this, nor do they provide any experiments where they actually use this proposal. I think that an example of actually using this would be very useful in gauging the effectiveness of the authors' proposal. There are also some references that it would be good if the authors could add or discuss, although this is more minor. For instance, the NNGP equivalence for deep networks was derived concurrently with Lee et al. (2018) and more rigorously proven in [1]. The authors should also discuss the recent work of [2], which proposes a new BNN prior to improve OOD generalization. Finally, the authors may find it interesting to discuss [3], which provides an explicit example of how the uncertainty of a BNN may fail in classification settings. Minor points: - Have the authors checked how well the HMC converges? According to the experimental details, the HMC is only run for a relatively short amount of time, even accounting for parallel chains. References: [1] https://arxiv.org/abs/1804.11271 [2] https://arxiv.org/abs/2106.11905 [3] https://arxiv.org/abs/2010.02709 In summary, I found that this paper represents interesting and timely work that should inform future discussion as to how we think about OOD detection and generalization with BNNs. However, I found the experimental evaluation somewhat lacking, and therefore am only tentatively recommending acceptance at the moment. I look forward to reading the other reviews and to hearing the author responses. <doc-sep>This paper carries out an analysis arguing that the use of the Bayesian predictive distribution and its uncertainty is not appropriate for detecting out-of-distribution data. The paper focuses on the case of Bayesian neural networks and their infinite-width generalization, namely a Gaussian process. The paper has no experiments but gives illustrative insights about why the Bayesian posterior is not suitable for detecting out-of-distribution data. They show that exact inference with GPs (infinitely wide networks) does not lead to desirable OOD detection. They discuss desirable kernel features for OOD. They emphasize that the choice of weight-space prior has a strong effect on OOD performance. They argue that there is a trade-off between good generalization and having high uncertainty on OOD data. I believe this is an interesting paper. However, it lacks novelty in the sense that everything that is covered in the paper is more or less already known in the machine learning community. Furthermore, it is natural that the posterior predictive uncertainty is not suitable for OOD detection in some cases. The reason is that the posterior uncertainty can be high simply because there is no data in a particular region. It is natural to expect that exact inference in infinite-width networks under common architectural choices does not necessarily lead to desirable OOD behavior.
This is especially true if the model assumptions are wrong. The idea that the weight-space prior has a strong effect on OOD performance is something to be expected, since it will have a big impact on the posterior uncertainty. I do not see why incorporating prior knowledge, which is usually encoded in an input-domain-agnostic manner, should negatively affect OOD uncertainties. It will reduce uncertainty, but that need not be bad for OOD detection. In fact, it can be beneficial. Summing up, I believe that, although this paper shows some interesting concepts or ideas, it does not provide a significant breakthrough in terms of analyzing OOD, and the obtained results are more or less already known. This will limit its impact in the community. I also find the paper a bit misleading since in several figures the data presented for training is incompatible with the assumptions implied by the chosen prior. For example, in Figure 6 it seems that the RBF kernel is better for OOD detection since it will lead to higher uncertainties. I do not see this. In fact, it seems to me that it will be better to use the ESS kernel for that. (I mean OOD in the y values, not the x values). I also have the feeling that this paper is using the uncertainty on y to detect OOD in the x domain, which is counterintuitive. I believe that if you want to detect OOD in the x domain you should have a model for x, not for y given x, which is what BNNs and GPs do. The paper is more or less clear, however, and well written. A well-written paper showing interesting concepts that are mostly well known in the machine learning community. The paper lacks novelty.
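As an aside, the behavior the reviewers are debating is easy to reproduce with a few lines of exact GP inference. The RBF kernel and the toy data below are choices made purely for illustration (they are not the induced infinite-width kernels studied in the paper), but they show why the kernel, rather than the exactness of inference, ends up determining whether the predictive uncertainty of p(y|x) says anything about x being off-distribution.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Standard RBF kernel, standing in for an induced (NNGP) kernel."""
    d2 = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2.0 * X1 @ X2.T
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_predictive_std(X_train, X_test, noise=0.1):
    """Exact GP-regression predictive standard deviation at X_test.
    (For fixed hyperparameters it does not depend on the targets.)"""
    K = rbf_kernel(X_train, X_train) + noise**2 * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_test)
    K_ss = rbf_kernel(X_test, X_test)
    L = np.linalg.cholesky(K)
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - np.sum(v**2, axis=0)
    return np.sqrt(np.maximum(var, 0.0))

rng = np.random.default_rng(0)
X_train = rng.uniform(-2.0, 2.0, size=(50, 1))   # 1-D "in-distribution" inputs
X_in = rng.uniform(-2.0, 2.0, size=(20, 1))
X_out = rng.uniform(5.0, 8.0, size=(20, 1))      # inputs far from the training data
print(gp_predictive_std(X_train, X_in).mean(), gp_predictive_std(X_train, X_out).mean())
```

With the RBF kernel the predictive standard deviation saturates to the prior level far from the data, so thresholding it does flag the second set of inputs; with kernels induced by common infinite-width architectures this behavior is not guaranteed, which is the point the paper and the reviews are circling.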
The authors question the assumption that the epistemic uncertainty provided by Bayesian neural networks should be useful for out-of-distribution detection. They start their analysis in the infinite-width limit so as to be able to understand how the induced kernels in a Gaussian process behave. The paper also discusses the potential tradeoffs between generalization and detection. Overall, the paper presents some facts that, while not surprising (Reviewer fGuy), are helpful in questioning the default assumption. However, the combination of the lack of surprise with the multi-part, somewhat loosely connected message reduces the quality of the submission.
Summary: The paper provides an interesting algorithm for tensor PCA, which is based on trace invariants. The problem consists of recovering a (single-spike/multiple orthogonal spikes) tensor corrupted by a Gaussian noise tensor. The authors propose a new algorithm which allows recovering the signal at smaller signal-to-noise ratios. ########################################################################## Reasons for score: Overall, I vote for accepting. I like the idea, and the proofs seem to be coherent and correct. The problem has clear importance for the theoretical/statistical physics community; however, I am not convinced of the importance of the problem considered here for the ICLR community and would appreciate the authors’ comments on this. I also have a few minor concerns, which, hopefully, can be addressed by the authors in the rebuttal period. ########################################################################## Pros: 1. The paper tackles an interesting question about tensor PCA and proposes a promising approach to solve it based on trace invariants. To me, the problem is intriguing, though I would appreciate a discussion about possible machine learning/AI applications (learning latent variable models? anything else?). 2. The mathematical justification of the statements seems correct to me and is OK to follow. 3. It is claimed in the paper that the algorithm improves the state of the art (signal-to-noise ratio requirements) in several cases, but a brief survey/table of the recent results is missing. Unfortunately, I am not working in this area and am probably not familiar with recent results. ########################################################################## Cons: 1. Applications for ML/AI/language processing are not very clear to me, and I would appreciate a discussion on this in the paper. 2. Empirical justification. I would highly appreciate having more experiments on real data (if any) and a detailed comparison of the methods in terms of accuracy/memory/time. <doc-sep>The paper presents a pair of interesting algorithms using trace invariants to detect the signal in the signal-plus-noise tensor PCA framework. The algorithms function by cutting an edge in the graph representation of the trace invariant, yielding a matrix whose leading eigenvector provides an estimate (up to a rotation) of the signal vector $v$. This algorithm appears to be very interesting and works well in a series of simulations. Unfortunately, the presentation of the paper makes it very difficult to assess the importance of the contribution. The introduction is well-written and well-motivated, though the later segmentation of the paper into many small subsections without much exposition makes the flow of the paper and its results hard to follow. In addition, the notation and terminology in the paper are imprecise, with important terminology and symbols introduced without definition or background citation. Pros: - The proposed algorithm is clever and appears to do well compared to existing approaches in experiments. - A well-written introduction (with the only complaint being some minor grammatical errors). Cons: - Important notation is introduced but not defined; Equation 4 is an example of this, where $\\langle \\cdot \\rangle$ (I assume this means $\\mathbb{E}$?), $\\bar{\\mathbf{T}}$, and $\\mathcal{E}^0(\\mathcal{G})$ are all undefined. This occurs often in the paper and in the appendix.
- In the $\\bullet, \\times, \\bullet$ decomposition at the start of Section 2.3, what is $\\sqrt{N}$? - What is the variance of a graph (as in Theorem 4)? The proof sketch of this theorem is very hard to follow. - Algorithm 1 is imprecise; what does "compare $\\alpha$ to $\\sigma(I^{(N)}(T))$" mean? If $\\alpha>\\sigma(I^{(N)}(T))$ then a spike is detected? How do you compute the variance of $I^{(N)}(T)$? How would you compute this if the noise model did not have unit variance? - Both algorithms are only presented for 3-way tensors, but the theoretical claims are for higher-order tensors? - The proofs of the theorems and the statements of the theorems are, in general, a bit imprecise. For example, in the proof of Theorem 2, Chebyshev's inequality will not guarantee disjointness everywhere, but only with high probability. This is the case if $\\beta_{det}$ is finite. This is a finite $\\beta_{det}$ result, with a claim only holding in the limit. - In Theorem 5, what are the intermediate graphs/matrices? In addition, this section (and Appendix C discussing perfect one-factorization) is a bit opaque. - Is the decomposition after equation 5 only for the melon graph? For more complex graphs (e.g., the tetrahedral graph), I believe you will have additional trace-like coefficients on all terms. In any event, I am confused about the summands. I do not see why the all-$Z$ sum would have a $\\beta$, while the cross-terms would not. Furthermore, why would the all-$v$ sum not have a $\\beta^d$ coefficient? This is what is implicitly being used in the proofs? - In the experiments, important details are left out. What is the setup here: what are the $v$'s, how many iterations of the tensor power method are applied, how many MC replicates are run to produce the error bars, what is the y-axis, what are the runtimes here, what is Random in Figure 6? More detail would help a lot to understand how your new approach compares (it appears well) with the current literature. <doc-sep>Summary: This paper studies the detection and recovery problem in spiked tensor models of the form $T = \\beta v_0^{\\otimes k} + Z$, where $v_0$ is the underlying spike signal and $Z$ is a Gaussian noise tensor. The authors claim that they propose a new framework to solve the problem, by looking at the trace invariants of tensors. The authors provide a detection algorithm (Algorithm 1) and a recovery algorithm (Algorithm 2), as well as the corresponding phases. The authors claim that: 1) they "build tractable algorithms with polynomial complexity", including "a detection algorithm linear in time"; 2) the algorithms are very suitable for parallel architectures; 3) they experimentally improve the state of the art for symmetric tensor PCA. The authors furthermore discuss the asymmetric case and the multiple-spike case. Recommendation: At the current stage I vote for rejection. I am not able to follow the proofs in this paper due to missing definitions of terms and notation. Also, some claims are not proved. See below for details. Pros: - The methods used in the paper seem new for spiked tensor models. - Some experimental results are provided. Cons: - The readability of this paper severely suffers from its writing. At the current stage, the paper is filled with undefined or inconsistent notation and terms, and is therefore not self-contained and hard to follow. This becomes worse considering the fact that this paper studies tensor problems -- many tensor-related terms have multiple definitions (e.g., eigenvalues, ranks).
It will be very hard to follow the proofs if the definitions are unclear. Here is an incomplete list: - Middle of Page 3: what is the *formal* definition of contracting (instead of saying "equivalent to a matrix multiplication")? Also, trace invariants are never formally defined in this paper. - eq.(2),(3): what is O(n) here? Also, what does the bold O refer to? Right before eq.(4) the authors use another notation $\\mathcal{O}(n)$. Is this the same as the first O(n)? In the abstract the authors use $\\mathcal{O}(1)$ to refer to the constant order. Why the inconsistency? - End of Page 3: how is $\\mathcal{G}$ related to trace invariants formally? - Section 2.2: this is not clear. What are the matrices here? What is the definition of $M_{G,e}$? - Section 2.3: what is the definition of $I_G(T)$? - Theorem 3: what is the Loss function here? - Top of Page 5: what is the exact definition of "dominating" here? - It should be noted that, without clear definitions of $I_G(T)$ and $M_{G,e}(T)$, there is no way to verify Algorithms 1 and 2. - The authors claim "polynomial complexity" at the beginning of the paper, but it is never proved. Theorem 7 claims that Algorithms 1 and 2 run in linear time. I cannot find this in the proof. - It is unclear why the algorithms "are very suitable for parallel architectures", as the authors have claimed. Have the authors tried running the experiments in parallel? - Theorems 4, 5, 9, and 10 do not have complete proofs. Minor comments: - Page 2 Notations: the typeface of $v$ is not consistent. - Page 8: "eg" should be "e.g."
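For readers less familiar with the setup, the data model under discussion is easy to instantiate. The snippet below generates a symmetric rank-one spike plus Gaussian noise for $k = 3$ and runs plain tensor power iteration as a point of comparison; it is not the trace-invariant algorithm of the paper, and the symmetrization convention and parameter values are my own choices.

```python
import numpy as np

def spiked_tensor(n, beta, rng):
    """T = beta * v^(x3) + Z with a symmetrized standard Gaussian noise tensor."""
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    Z = rng.standard_normal((n, n, n))
    Z = (Z + Z.transpose(0, 2, 1) + Z.transpose(1, 0, 2)
           + Z.transpose(1, 2, 0) + Z.transpose(2, 0, 1) + Z.transpose(2, 1, 0)) / 6.0
    return beta * np.einsum('i,j,k->ijk', v, v, v) + Z, v

def tensor_power_iteration(T, iters=200, seed=0):
    """Plain power iteration x <- T(., x, x) / ||T(., x, x)|| from a random start."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(T.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        x = np.einsum('ijk,j,k->i', T, x, x)
        x /= np.linalg.norm(x)
    return x

rng = np.random.default_rng(1)
T, v = spiked_tensor(n=30, beta=60.0, rng=rng)
v_hat = tensor_power_iteration(T)
print(abs(v_hat @ v))  # overlap |<v_hat, v>|; near 1 only when beta is well above the
                       # random-initialization threshold for naive power iteration
```

The interesting comparisons in the paper happen at much smaller beta, where this naive baseline fails and the choice of trace invariant is supposed to matter.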
This paper studies the tensor principal component analysis problem, where we observe a tensor T = \\beta v^{\\otimes k} + Z where v is a spike and Z is a Gaussian noise tensor. The goal is to recover an accurate estimate to the spike for as small a signal-to-noise ratio \\beta as possible. There has been considerable interest in this problem, mainly coming from the statistics and theoretical computer science communities, and the best known algorithms succeed when \\beta \\geq n^{k/4} where n is the dimension of v. The main contribution of this paper is to leverage ideas from theoretical physics and build a matrix whose top eigenvector is correlated with v for sufficiently large \\beta using trace invariants. On synthetic data, the algorithms achieve better performance than existing methods. The main negative of this paper is that it is not so clear how tensor PCA is relevant in machine learning applications. The authors gave some references to applications of tensor methods, but I want to point out that all of those works are about using tensor decompositions, which despite the fact that they are both about tensors, are rather different sorts of tools. Many of the reviewers also found the paper difficult to follow. I do think exposition is particularly challenging when making connections between different communities, as this work needs to introduce several notions from theoretical physics. I am also not sure how novel the methods are, since a somewhat recent paper Moitra and Wein, "Spectral Methods from Tensor Networks", STOC 2019 also uses tensor networks to build large matrices whose top eigenvalue is correlated with a planted signal, albeit for a different problem called orbit retrieval.
The paper addresses the task of GZSL; more specifically, the authors provide a way to improve the quality of the generated samples in generative GZSL. A closed-form probe model is introduced to provide an efficient and differentiable solution in the compute graph. In this manner, the generator receives feedback directly based on the value of its samples for model training purposes. Results are shown in two different settings, with and without fine-tuning the features. Strong Points: 1- The presentation is clear, and the research problem is well motivated. 2- The proposed method is evaluated on four relatively comprehensive benchmark datasets. Weaknesses: 1- The paper could not highlight its novelty well. The idea to improve model generalization ability with cross-validation is not new. The proposed method seems to be an integration of generative models and the existing closed-form solution. The closed-form probe model is borrowed from [a],[b],[c]. 2- To prove the better efficacy of the proposed model, it should be trained using a few examples (5/10 samples) per seen class. 3- Experiments on a large dataset (ImageNet) should be included for better evaluation. [a]- Meta-Learning for Generalized Zero-Shot Learning, AAAI 2020. [b]- Episode-Based Prototype Generating Network for Zero-Shot Learning, CVPR 2020. [c]- Towards Zero-Shot Learning With Fewer Seen Class Examples, WACV 2021. Due to the lack of novelty, and because the performance of the proposed model is only marginally above that of existing approaches. <doc-sep>This paper aims to address the problem of generalized zero-shot classification based on generative models. The main contribution is that it considers training and evaluating the generative model on its ability to synthesize training examples that are helpful for improving classification performance. To this end, it leverages the zero-shot learning model ESZSL, which can be efficiently fit using a closed-form solution. This then serves as a sample probing mechanism, so that the generator receives a training signal directly based on the value of its samples for classifier training, and thus enables end-to-end training. The corresponding sample probing loss function is added to the standard generative model training loss. The final training procedure is performed in a way that is similar to meta-training. The approach is tested on standard generalized zero-shot classification setups, including CUB, SUN, AWA2, and FLO, and compared with state-of-the-art results. Paper Strengths: The authors tackle the important and challenging problem of generative-model-based generalized zero-shot learning. The proposed approach is simple. Experimental evaluations demonstrate the effect of introducing the sample probing method and end-to-end training. Paper Weaknesses: 1) In the related work section, the authors discussed meta-learning and few-shot classification. However, there was a lack of discussion on using generative models to perform data augmentation for meta-learning and few-shot classification, such as [Low-shot visual recognition by shrinking and hallucinating features, ICCV, 2017] [MetaGAN: An adversarial approach to few-shot learning, NeurIPS, 2018] [Low-shot learning from imaginary data, CVPR, 2018] [Delta-encoder: an effective sample synthesis method for few-shot object recognition, NeurIPS, 2018] [Image deformation meta-networks for one-shot learning, CVPR, 2019]. While zero-shot and few-shot learning are different, they are related.
In particular, I feel that the generative model framework in this paper is similar to [Low-shot learning from imaginary data, CVPR, 2018], where the classification loss is used to train the generator in an end-to-end fashion and the generator is integrated into the meta-training framework. An in-depth discussion of this is needed. 2) The proposed approach relies on a closed-form zero-shot model, which is also the main technical contribution of this paper. For this purpose, the ESZSL model [Romera-Paredes & Torr, 2015] is used. However, it seems that such a requirement makes the method restricted to this zero-shot model, so it is not general and cannot be combined with more state-of-the-art zero-shot models. Even for ESZSL [Romera-Paredes & Torr, 2015], its closed-form solution only applies under a particular parameter setting. 3) From the results, as in Table 1 and Table 3, the improvements of the proposed approach seem quite marginal, especially when the features are fine-tuned. Also, the proposed approach is not consistently better than existing methods. For example, in Table 3, for the fine-tuning setting, on 2 out of 4 benchmarks the proposed approach is worse than the state of the art. 4) It would be interesting to see an ablation study regarding the different loss components. 5) Somehow it is a little vague – is the paper synthesizing raw images or feature vectors in the pre-trained feature space? This is not made explicit until the implementation details. It would be better to make this explicit starting in the introduction. 6) Following the previous comment, it would be interesting to visualize the synthesized features, e.g., using t-SNE visualizations, and analyze why they are helpful. 7) There are a bunch of hyper-parameters involved in the proposed approach. The authors also mentioned the difficulty in consistently setting these hyper-parameters. Moreover, how sensitive is the method to these hyper-parameters? 8) There are several grammar issues and typos in the paper. Please proofread. The paper introduces sample probing to train a generative model that synthesizes data in feature space for generalized zero-shot learning. The proposed approach is restricted to certain zero-shot learning models. Some ablation studies and analyses are missing. ================= post rebuttal: I thank the authors for the extensive experiments and clarifications made in the rebuttal. The rebuttal has addressed most of my concerns, especially showing the generalizability of the proposed approach (different types of generative models and different types of closed-form ZSL models), so I raised my score. However, I am still a little concerned about the novelty of the proposed approach and its marginal improvements. <doc-sep>This paper proposes a generalized zero-shot learning approach that provides feedback to the generator based on the evaluation of real samples of unseen classes. Strengths: 1- The paper is well written. 2- This approach tries to generate realistic, relevant, and informative features to train an accurate classifier using a sample probing mechanism. 3- It also uses feature fine-tuning to improve the performance further. Weaknesses: 1. The novelty of the proposed approach is limited. The main contribution of this paper is the sample probing technique to generate features. However, prior works ([1],[2],[3]) have proposed a very similar approach. Therefore, given the methods of [1],[2],[3], the contribution of this paper is not novel. 2.
Considering prior works ([3], and other results from Table 1 of [3]), the accuracy improvements are either not significant or are lower. [1] Episode-Based Prototype Generating Network for Zero-Shot Learning [2] Meta-Learning for Generalized Zero-Shot Learning [3] Meta-Learned Attribute Self-Gating for Continual Generalized Zero-Shot Learning Considering the weaknesses of the paper, I would give this paper marginally below the acceptance threshold. <doc-sep>The paper outlines a method for improving generative zero-shot learning (ZSL) approaches. Rather than simply training the generative model with the goal of reproducing some real data distribution, the work proposes to train it with an additional goal of synthesizing samples that directly improve the performance of the downstream classification model. The authors propose to do so through a novel sample-probing loss in which generated samples are used to train a closed-form ZSL model with a differentiable solution. The ZSL model is then evaluated directly on the classification task - and gradients are back-propagated to the generative model's parameters. By applying this approach to an existing generative ZSL model, the authors demonstrate improved sample quality for their synthetic data and increased classification performance across multiple datasets. In summary, the paper's contributions are: 1) A major contribution: An approach to improving existing generative ZSL methods with a loss that maximizes their performance on the downstream task. 2) A minor contribution: A more detailed and rigorous reporting of the methodologies used to fine-tune model hyperparameters, aimed at increasing reproducibility. Paper strengths: 1) The paper is well written and easy to follow and understand. 2) The motivations for the work are clear, as is the method itself. 3) Providing greater detail on hyperparameter choices is great, and as the authors demonstrate - it is also crucial. 4) The proposed method is also general in the sense that it can be readily applied to future work. In my opinion, this is a major selling point of the work. 5) Results appear to indicate an improvement over the state of the art. Note, however, point (1) in the weaknesses section with regard to points 4 and 5. Paper weaknesses: While the method is promising and the suggested approach makes intuitive sense, I feel like the experimental results do not currently support it well enough. In particular (listed in order of importance for my evaluation): 1) As the authors note in their closing remarks: "Our method works in an end-to-end manner and it can be easily integrated into any mainstream generative zero-shot learning framework." This is, in my opinion, one of the most significant selling points of the paper. Alas, it is not investigated. The method is applied only to a single baseline model*, where it shows improved performance only when the baseline is trained with a different set of hyper-parameters than in its original implementation. *There is an additional experiment on a single dataset (out of 4) with a different, very basic baseline. 1.1) What were your results when you used the same hyper-parameters as the baseline model? Why do you think your model outperforms the baseline with a specific number of iterations, but underperforms with another? Perhaps your model is simply more efficient and converges faster, but introduces other problems which mitigate the advantage in the long run?
1.2) If things are highly hyper-parameter sensitive, why did you optimize only the number of iterations? What about the other parameters? 1.3) Can your method be applied to other generative ZSL models? I would have more confidence in the general applicability of the method if it demonstrated improvements when integrated into multiple existing works. In this context, Table 3 (where most of the results reside) is largely irrelevant. Most of these numbers are simply an indication that TF-VAEGAN is better than its competition. (I am not advocating the table's removal - but applying your method to the models listed there would be a considerable improvement). 2) Do you have any intuition as to why the choice of ZSL / GZSL in your loss (i.e. table 4) is so crucial that a wrong choice may make your model perform equally to, or worse than, the baseline on some datasets? Is this merely a function of how well the original model performs on seen vs unseen classes? (this is something that could be seen if we had more adapted baselines to compare with!) 3) How much of an effect does your method have on training times? Does the closed-form solution of ESZSL have a noticeable impact on the time required per training iteration? 4) A natural alternative to your approach may be the use of a meta-learned ZSL model in place of the closed-form solution. This would allow updating the model with a single training iteration, which may produce sufficiently strong gradients for your generator. My knowledge of recent works in the field is limited; are you aware of any works doing something similar? If so, comparisons to them could strengthen your work. 5) Given the aim of increasing reproducibility by reporting hyper-parameter tuning methodologies, I would add answers to the following questions (at the very least to the released code): 5.1) Are you using the same 20% validation split for all experiments on a given dataset? If so, can this split be released? 5.2) Did you use a single split, or cross-validate? 6) Sample quality metrics - Fréchet distance is typically sensitive to the number of generated samples used in the comparison. These counts should be reported to facilitate future comparisons. Overall, I enjoyed the paper and would have liked to recommend acceptance. The approach makes intuitive sense, and can no doubt be extended with multiple future works, offering the community a parallel line of investigation into improving GZSL results. However, I think the flaws outlined in point (1) of the weakness section are fundamental enough that I am worried that any future works which try to build on this one might simply be wasting their time. I do not believe this to be the case, but I would like to see additional experimental results that would convince me otherwise. *I have marked my confidence as 3 due to limited familiarity with related work. As an extension to that, I may have missed prior art which already suggested similar ideas. ****************************** Post rebuttal update: The authors conducted an extensive set of experiments and addressed my primary concerns (the method's generalizability to additional baselines). I am still concerned about some aspects of the evaluation (considerably worse results for all baselines compared to their originally reported values). However, since the paper suggests a method for improving other models, the relative improvements are what matters most.
As such, I am willing to accept the current demonstration of a (fairly) consistent improvement when using the same hyper-parameter selection approach across the board. Even if accepted, I urge the authors to better highlight and explain the difference between their experimental results and the original baseline values. <doc-sep>This paper proposes sample probing, a meta-learning scheme for zero-shot learning (ZSL), to measure the quality of the synthetic samples provided by certain generative models. Specifically, an existing closed-form ZSL solver is plugged into an existing generative ZSL framework. Owing to the differentiability of the solver, the whole pipeline is end-to-end trainable. Experiments were conducted on four standard benchmarks, where we can observe the state-of-the-art performance achieved by the proposed sample-probing-based approach. This work attempts to address a major concern in current generative ZSL models, i.e., the quality of the synthesized training examples used to train the final GZSL classifier, as the final GZSL performance highly depends on those generated samples. By measuring the quality of those samples during the training process, more informative samples can be obtained, leading to improved performance. Overall, the paper is well-written and easy to follow, with adequate technical details for re-implementation. Below please find the detailed suggestions and questions: 1) In the last paragraph on Page 1, the authors claimed that samples need to be realistic, relevant and informative. However, in the method section, there are no detailed discussions regarding how the proposed solution endows the synthetic samples with these three properties. More clarifications on this point need to be added. 2) Another concern is the performance gain over TF-VAEGAN. From the tables, we can only observe less than 1% increase w.r.t. the harmonic mean in most cases on the benchmarks. One could have expected to see more gains, as measuring the sample quality provides important additional information for the generative model. 3) Does the overall framework highly depend on the solver it adopts? Is it possible to adopt other solvers, like the one in SAE [R1]? If this solver is employed, can we obtain even better GZSL performance, since SAE shows an advantage over ESZSL? 4) From Fig. 3, it is interesting to see that the validation set and the test set exhibit almost opposite trends w.r.t. the harmonic mean. Does this indicate that the hyper-parameter tuning policy is not suitable for the GZSL task? 5) Table 2 only depicts the results on the CUB dataset - how about the performance on the other three datasets? [R1] Kodirov et al., Semantic Autoencoder for Zero-Shot Learning, CVPR 2017. This paper addresses an important issue existing in current generative ZSL models. The paper is well-motivated and the solution seems to be effective. The major concern lies in the performance gain, and some details regarding the method need further clarification.
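Since several of the questions above turn on why a closed-form solver is needed at all, a minimal sketch may help. ESZSL reduces to two regularized linear systems, so the fitted classifier below is an explicit differentiable function of the training features; the shapes and regularizers follow my reading of Romera-Paredes & Torr (2015) and are not copied from the submission.

```python
import numpy as np

def eszsl_fit(X, Y, S, gamma=1.0, lam=1.0):
    """Closed-form ESZSL solver (Romera-Paredes & Torr, 2015).
    X: d x m features, Y: m x z one-vs-rest labels in {-1, +1},
    S: a x z class attribute signatures. Returns V: d x a."""
    d = X.shape[0]
    a = S.shape[0]
    A = X @ X.T + gamma * np.eye(d)        # d x d
    B = S @ S.T + lam * np.eye(a)          # a x a
    return np.linalg.solve(A, X @ Y @ S.T) @ np.linalg.inv(B)

def eszsl_predict(V, x, S_unseen):
    """Score unseen classes: x^T V S, larger is better."""
    return x @ V @ S_unseen

# Tiny synthetic sanity check (shapes only).
rng = np.random.default_rng(0)
d, m, a, z = 16, 40, 8, 5
X = rng.standard_normal((d, m))
S = rng.standard_normal((a, z))
Y = -np.ones((m, z)); Y[np.arange(m), rng.integers(0, z, m)] = 1.0
V = eszsl_fit(X, Y, S)
print(eszsl_predict(V, rng.standard_normal(d), rng.standard_normal((a, 3))).shape)  # (3,)
```

As I understand the submission, this solve sits inside the training loop with synthetic features in place of X, so that the gradients of a downstream classification loss can flow back to the generator; in an autodiff framework the same two linear solves are differentiable out of the box.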
The paper proposes to improve (generalized) zero-shot learning by training a generator jointly with the classification task, such that it generates samples that reduce the classification loss. To achieve this, they use a zero-shot model that has a (differentiable) closed-form solution (ESZSL), so the full model can be optimized end-to-end. The approach is evaluated on the standard GZSL benchmarks. Reviewers had some concerns regarding novelty compared with previous work and the quality of the experiments and evaluations. The authors answered most of these concerns in their rebuttal, including a discussion of previous work and additional evaluations. As a result, the paper would be interesting for the ICLR audience.
The paper proposes a generative model that consists of two components: a normalizing flow and an energy-based model with short-run MCMC as proposed in (Nijkamp, 2019). The whole procedure operates by pushing some simple distribution through the NF and then running the short-run chain on these samples. Both models are learned jointly as follows. The normalizing flow tries to capture the limiting distribution of the EBM by minimizing $$\\text{KL}(K_\\theta q_\\alpha \\Vert q_\\alpha),$$ where $q_\\alpha$ is the density of the NF with parameters $\\alpha$ and $K_\\theta$ is the transition kernel of the short-run chain with parameters $\\theta$. Simultaneously, the EBM is learned to capture the data by minimizing $$\\text{KL}(p_{\\text{data}}\\Vert p_\\theta) - \\text{KL}(K_\\theta q_\\alpha\\Vert p_\\theta).$$ In Section 4.2, the authors provide some analysis of the proposed procedure. Namely, the authors consider the perfect scenario (where the NF matches the EBM density and they both match the data) and give some intuition for the case when the models are not expressive enough to perfectly match the data. Finally, the authors provide an empirical study of the proposed procedure. They report metrics for image generation on CIFAR10, SVHN, and CelebA (downsampled to 32x32), and give several examples of image inpainting and latent space interpolation. The paper describes a reasonable approach to generative modeling and demonstrates promising metric values. However, the main issue of the paper is that the particular choice of the generative models is not well-motivated. That is, the first question that appears in the reader's mind is: why do we need to combine a flow with a short-run chain? Couldn't we use any other pair of $C_n^2$ combinations (where $n$ is the number of different approaches) to learn an efficient generative model? Finally, couldn't we just increase the size of our baseline model instead of mixing it with another approach? Unfortunately, the paper doesn't address any of these questions. To be more precise, the authors do not provide any motivation for the usage of these particular models. The main argument in favor of the proposed approach is the empirical comparison, which is, however, not extensive, since the models are not compared in terms of computational budget. Other comments: - The proposed objective in equation (8) closely resembles the Contrastive Divergence objective, as noted by the authors. The contrastive divergence objective is notorious for its bias, since the samples from the model ($K_\\theta q_\\alpha$ here) are usually treated as independent of the parameters $\\theta$. I wonder if the authors follow the same approach for their objective. Unfortunately, the authors neither clarify this in the paper nor provide an analysis of the bias (in the case of the biased scheme). - The ability to model the density is not demonstrated empirically. The authors mention that their approach is able to model the density of the learned distribution (which could be a very nice motivation, by the way), but they do not validate this property empirically even on a toy example. - There is no convergence analysis. The authors neither prove nor demonstrate empirically the convergence of the procedure. Moreover, according to the last paragraph of Section 4.2, the scheme performs "an infinite looping" and hence does not converge. This non-convergence property is usually considered a significant flaw of the approach (e.g., in the GAN literature). - Some empirical results are not trustworthy.
When comparing with other generative models, the authors report numbers from different papers. However, the comparison on CelebA is complicated by the fact that many papers do not downsample the images to 32x32 resolution as the authors do. For instance, NT-EBM (Nijkamp, 2020) uses 64x64 resolution and cannot be compared in terms of FID with the proposed model. - The reconstruction use case is not clear. The experiments with image reconstruction require additional clarification. It is not clear what kind of problem the authors address there. Are these images corrupted by some noise? The choice of the intermediate representation (the output space of the flow) for the latent space is also not motivated. Why couldn't one take the initial distribution that is the input to the flow? - Overall, the numerous applications listed after the generative modeling experiments should be measured quantitatively rather than qualitatively with several examples. The paper describes a reasonable approach to generative modeling and demonstrates promising metric values. However, the approach itself is a combination of two well-known models. The main issue of the paper is that the particular choice of the generative models is not well-motivated. <doc-sep>This paper proposes to train an energy-based model with a normalizing flow to achieve rapid and high-quality MCMC sampling. It shows that the trained CoopFlow is capable of synthesizing toy data and realistic images (reasonable compared to many baseline methods in terms of common evaluation metrics). Applications to image reconstruction and interpolation are also included. Strength - The idea of the paper is simple, and it seems to work well at generating realistic samples. - It makes progress with respect to Cooperative learning via MCMC teaching, by using the normalizing flow instead of other kinds of generators. Weakness - The theoretical validity of the generator is not very clear based on Section 4.2. In particular, there is no condition or evidence to show that the generator converges to a fixed point in (8) and (9). Why does a fixed point exist? - If possible, the baseline in Table 1 should also include the result of Ho2019 (used in the proposed model) in the category of Flow, to see if the MCMC teaching is important. Questions: - It is not clear what is evaluated in Section 5: is it the training set or the test set? Can the MSE score (often used in image reconstruction) be provided for Figure 3? - What would happen if you used 200 steps instead of 30 steps in CoopFlow (Pre) in Section 5.2? Typo - Section 4.1, "in practise" -> "in practice". This paper introduces a simple idea to train an energy-based model, which is very challenging in high dimensions. The numerical results make quite significant progress over state-of-the-art methods in terms of FID scores on natural images. <doc-sep>This paper develops a new generative model, called CoopFlow, by combining an energy-based model (EBM), a normalizing flow (NF), and Langevin dynamics (LD). CoopFlow updates all the parts in a cooperative manner: the parameters of the NF are updated using samples modified by LD, LD is performed based on the potential function provided by the EBM, and the gradient of the log-likelihood of the EBM is estimated from samples generated by NF + LD. Here, LD can overcome the limitation of the model capacity of the NF, and NF + LD can provide a more accurate estimate of the gradient of the EBM loss. After learning, a powerful sampler and an accurate EBM can be obtained.
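To make the cooperative update scheme in the summary above concrete, here is a schematic of one training iteration on toy 2-D data. The Langevin discretization and the two gradient estimators are standard; the diagonal-Gaussian "flow", the step sizes, and all names are placeholders of my own rather than details taken from the paper.

```python
import torch

# Toy 2-D data, a diagonal-Gaussian "flow" q_alpha, and an MLP energy f_theta.
data = torch.randn(512, 2) * 0.5 + torch.tensor([2.0, -1.0])

mu = torch.zeros(2, requires_grad=True)                  # flow parameters (alpha)
log_sig = torch.zeros(2, requires_grad=True)
energy = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.SiLU(), torch.nn.Linear(64, 1))

opt_flow = torch.optim.Adam([mu, log_sig], lr=1e-2)
opt_ebm = torch.optim.Adam(energy.parameters(), lr=1e-3)

def langevin(x, steps=30, step_size=0.01):
    """Short-run Langevin revision of the flow samples under the current energy."""
    x = x.detach().clone().requires_grad_(True)
    for _ in range(steps):
        grad = torch.autograd.grad(energy(x).sum(), x)[0]
        x = x - 0.5 * step_size * grad + step_size ** 0.5 * torch.randn_like(x)
        x = x.detach().requires_grad_(True)
    return x.detach()

for it in range(200):
    eps = torch.randn(256, 2)
    x_flow = mu + log_sig.exp() * eps                    # flow proposal (reparameterized)
    x_neg = langevin(x_flow)                             # Langevin-revised samples
    x_pos = data[torch.randint(0, len(data), (256,))]

    # EBM update: contrastive gradient with the revised samples as negatives.
    loss_ebm = energy(x_pos).mean() - energy(x_neg).mean()
    opt_ebm.zero_grad(); loss_ebm.backward(); opt_ebm.step()

    # Flow update (MCMC teaching): maximize flow likelihood of the revised samples.
    log_q = torch.distributions.Normal(mu, log_sig.exp()).log_prob(x_neg).sum(-1)
    loss_flow = -log_q.mean()
    opt_flow.zero_grad(); loss_flow.backward(); opt_flow.step()
```

The two KL terms quoted above correspond to these two updates: the revised samples act as approximate draws from $p_\\theta$ for the EBM gradient, while the flow chases the revised distribution by maximum likelihood.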
This paper provides a novel cooperative way to train the flow model and the energy model. The idea is very attractive and could help resolve existing difficulties of generative models. My major concern with CoopFlow comes from the fact that the training procedure cannot be interpreted as the minimization (or minimax) of a loss function. It is not clear whether the cooperative learning process is always convergent. Some other comments are as follows: 1) The Langevin dynamics used in the paper is not an exact MCMC chain due to the time discretization. Is it possible to utilize an exact MCMC sampler, e.g., the Metropolis-adjusted Langevin algorithm, in CoopFlow? 2) It is claimed that the KL divergence between $\\tilde p_\\theta$ and $p_\\theta$ "decreases to zero monotonically". This conclusion is not widely known and requires a reference. 3) Some important previous models and related references are omitted, including the deep energy model proposed by Liu within the Stein variational framework, and the stochastic NF proposed by Noe, consisting of NF and MCMC. 4) The analysis in Section 4.2 is too confusing for the reader. This section is called "theoretical understanding" and there is no theorem. Can the authors provide more understandable explanations? The article not only introduces some innovations, but also conducts a large number of experiments to demonstrate the superiority of the algorithm. <doc-sep>The authors propose a cooperative learning framework of normalizing flow and energy-based Langevin MCMC models (CoopFlow). Specifically, the normalizing flow suggests a better initialization for the Langevin flow, and then the latter generates synthetic samples via short-run MCMC. While the Langevin flow is trained with a standard MCMC likelihood toward the data distribution, the normalizing flow chases the Langevin one by maximizing the likelihood of the synthetic samples. The proposed CoopFlow seems to outperform each of its components. This paper is strongly motivated by the short-run MCMC energy-based model [1]. The key difference between this work and [1] is the use of a normalizing flow model for a good initialization to shorten the MCMC run even more. The idea is sound and well-motivated; however, my main concern with this work is that it seems both the theoretical explanation and the empirical results do not support this motivation very clearly. * First, the theoretical explanation reveals that the short-run MCMC CoopFlow $\\pi^*$ is a moment matching estimator of $p_{data}$. However, the current version of the explanation does not clearly show that the $q_{\\alpha}$ initialization can yield a shorter transition path (i.e., fewer MCMC steps) compared to the standard Langevin flow started from $p_{0}$ (which is the main motivation of this work). Of course, it can be deduced from the fact that $q_{\\alpha}$ tries to mimic $\\pi$, so $D(\\pi, q_{\\alpha})$ is automatically lower than $D(\\pi, p_{0})$ for a certain discrepancy $D$ in most cases; but it would be nice if the authors could discuss this in a more systematic way, along with some intuitive comparison of Figure 1 with Figure 5 of [1]. * In addition, the current empirical results do not validate the authors' motivation sufficiently well, because a detailed cost-vs-performance trade-off study is not presented here. Table 1 (a) only reveals that the proposed CoopFlow gets worse FID scores than, for example, NCSN++ or EBM-Diffusion. I don't think every newly proposed model should beat the best model.
However, since the main advantage of CoopFlow is fast generation (as well as training) owing to the short-run MCMC with the normalizing-flow-based amortized sampler, the authors should discuss this cost perspective in detail. For example, how many MCMC/SDE iteration steps are required for other models, and how about generating samples with the same (and small) number of steps for all models? Currently, I can only conclude that CoopFlow outperforms each of its components (and even this is not absolutely certain, because the reference flow model, Flow++, does not seem to be evaluated on its own, so the authors should report this as well), which is also a nice contribution but not greatly fascinating. *** [1] Erik Nijkamp, Mitch Hill, Song-Chun Zhu, and Ying Nian Wu. Learning non-convergent non-persistent short-run MCMC toward energy-based model. arXiv preprint arXiv:1904.09770, 2019. Overall, while the paper is interesting, the lack of justification for the authors' motivation prevents me from recommending a clear acceptance for this paper. My current evaluation of this paper is borderline but can be changed after reading the authors' rebuttal. *** Post-rebuttal: I appreciate the authors' thorough responses to my questions. After reading the responses, I am inclined to accept this work and have raised the review score accordingly. However, this rating is only marginally above the borderline, so I would not champion the paper for acceptance. After the authors' follow-up response: I feel more comfortable accepting this work. I am sorry to keep my score at 6, mainly because there is no review score of 7 in the system. Instead, I have increased the significance score of this paper accordingly.
The paper proposes a methodological improvement in the Langevin-based training of energy-based models. The idea is to initialize the Langevin flow used to train an energy-based model with a normalizing flow which learns to mimic the Langevin flow as the energy-based model is being trained. The method is empirically evaluated on synthetic data and image benchmarks. The reviewers are currently divided: one argues for strong rejection, two for weak acceptance, and one for strong acceptance. In summary, the reviewers have expressed two main points of criticism: (a) that the motivation is unclear or not experimentally demonstrated; (b) that the convergence properties of the algorithm are unclear. Regarding (a), I believe the authors have adequately addressed this concern, and in my judgement the method is sufficiently motivated. Regarding (b), the authors' responses have mostly relied on non-rigorous argumentation and appeal to prior work, so I don't think the issue has been addressed to the reviewers' satisfaction. Having said that, in my judgement the lack of clarity regarding convergence is not a sufficient reason to reject the paper, as there does not seem to be any reasonable doubt that the method converges in practice. On balance, although the paper has certain weaknesses, it proposes an interesting and potentially useful method without major technical inadequacies, so I'm leaning towards recommending acceptance.
This paper presents an architecture for adapting the locomotion of a legged robot in response to semantic terrain information. A human teleoperates the robot with a joystick to provide scalar velocity labels associated with different terrains. A mapping from RGB images to velocities is learned using the features of a pretrained vision module to promote generalization. Then, velocity commands are heuristically mapped to gait parameters, and the resulting locomotion skill is actuated using an MPC controller from prior work. The complete system enables the robot to automatically run faster on easier terrains and walk carefully on harder terrains. The system can complete novel trails in this manner without failure. Strengths: The perception of semantic terrain information is a promising research area for locomotion, and this paper is the earliest known functional proposal. The proposed architecture is novel and demonstrates nice integration of several control components. Weaknesses: A main conclusion of the paper is drawn from a single trial of a high-variance metric. <doc-sep>The paper proposes a system which learns to adapt walking speed based on the semantics of the terrain. The proposed approach learns, from human demonstration, to regulate forward speed as a function of RGB input. The model is pretrained on a semantic segmentation task and optimized via behavior cloning for the speed control problem. A gait selector is manually designed as a function of desired walking speed. The result is a system which is capable of traversing rough terrain more efficiently than the built-in Unitree controllers. Strengths - The paper studies an important problem - clearly some amount of semantic information is important for locomotion in offroad environments. Existing works typically fail to adequately address this, either by modeling only the geometry of the surface rather than material properties like friction, or by being entirely reactive. Thus the paper suggests an interesting innovation. Weakness - The learning problem proposed is perhaps too simple - controlling speed alone is likely insufficient for more challenging terrain, as illustrated in recent papers which additionally learn to regulate gait (e.g. the Hutter Science paper cited). It seems as though some combination of these ideas is required for robust locomotion, controlling both the desired forward speed and the parameters of the gait together as a function of anticipated terrain properties and proprioceptive data. That the method never fails on the test domains suggests that perhaps the tests are not sufficiently challenging. Moreover, comparison to a method which is purely geometry-based would be instructive in evaluating whether the semantic module indeed constitutes a substantial improvement in performance in the domain studied (e.g. even a simple variant without the semantic segmentation pretraining). Additionally, the reliance on human demonstrations may present a challenge in scaling the method.
Weakness: - Line 96: Does this mean that the authors assume there are no obstacles in the terrain that would stop the robot from traversing at all? The limitation section mentions future work on navigation, but that would require the robot to reorient its heading direction. This should also be mentioned in the limitations. - Line 122: The authors should provide more intuition on why they use an FC layer on each pixel instead of using convolutional layers over pixels and directly outputting the forward speed. What is the intuition behind having this intermediate speed map? - For the real robot videos, it would be even better if there were a list of average speeds over the different domains. When watching the multi-terrain video, it is hard to tell whether the robot is executing at the same speed as in the individual evaluations. - Also, since the locomotion skills are not high-speed behaviors, it is still difficult to tell the speed differences by playing each video individually. Maybe putting all video clips on one single canvas would give a better idea of the gaits' differences. <doc-sep>The paper presents a quadruped robot locomotion method, where MPC-based locomotion is enhanced with vision-based terrain type identification that correctly drives speed and gait. The whole method learns from human demonstrations on the real hardware. Strengths: - This is a clever work. It is based on a basic concept, i.e., that when moving on different terrains, the speed and gait might need to adapt accordingly. - The technical part is easy to follow and sound -- having said that, the development part is not extremely challenging. - The experiments are overall nice, with real-world locomotion on various terrains. Weaknesses: - Abstract: "close-to-optimal speed" is a bit of an exaggeration. It has not been shown anywhere in the results what the optimum is, and thus this sentence is not supported. Moreover, I have the feeling that it is also not true. - Intro/RW: the intro, and later the related work, lack some work on terrain semantic classification for locomotion, e.g., "Terrain Classification and Locomotion Parameters Adaptation for Humanoid Robots Using Force/Torque Sensing", by K. Walas et al. (IJHR'16) for proprioception and "Terrain Segmentation and Roughness Estimation using RGB Data: Path Planning Application on the CENTAURO Robot", by V. Suryamurthy et al. (IJHR'19) for exteroceptive segmentation. - Intro: "human expert demonstration": there is a big discussion now about how those humans became experts and what this means. - Intro: "Fast and Failure-free": this is again very vague and generic. At the very least, the terrain types need to be mentioned. Otherwise, this does not have any particular meaning. - RW: At the end of the RW section, the choice to learn from real demonstrations is strongly advocated. However, we know that learning in simulation (e.g., in Isaac) achieves very good results without the need to adapt anything on the physical robot (e.g., the work on ANYmal in the latest Science publication). It needs to be better justified why in this case simulation could not do the job. - Sec 4: "instead, we extract a semantic embedding": this is very vague and needs to be explained further. Why is the semantic segmentation not good, but the embedding is? Why is it fed through a fully connected layer? More justification is needed. - Sec. 4: "1m long, 0.3m wide in front of the robot": what if the camera changes orientation or position? How much of the method can work as is? How much retraining is needed? - Sec.
4: "using imitation learning": it is still confusing how the humans decided the "optimal" speed for each type of terrain. - Sec. 5: the speed-gait selection seems to be very tedious and not easily extendable to other robots. Any alternative? Any less heuristic approach? (A sketch of the kind of hand-designed mapping I mean appears after these comments.) - Sec. 6: slope handling is claimed but none of the experiments include slopes. I am not sure it should be mentioned if it was not tried at all. - Sec. 7: you compare with Unitree's controller, but in reality [21] should be your baseline. How much better than [21] do you perform? - Sec. 7: "it can only walk up to 0.5m/s on rock", how was this evaluated? What if different gaits were applied? - Sec. 7.3: "traverses through all of them without failure", this is very generic and probably not true. Unless the authors claim to have found the optimal MPC-based control to deal with all terrains. In other words, a specification of gait and terrain type is needed to justify this. - General: What happens when you traverse from one terrain type to another? How does the robot adapt to the new gait and speed? What happens when terrain segmentation is noisy, and what happens when the robot walks on a misclassified terrain?
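To illustrate the kind of hand-designed speed-to-gait mapping questioned above, here is a minimal sketch of the perception-to-control pipeline as I read it from the reviews; every module name, speed threshold, and gait parameter below is a hypothetical placeholder, not the authors' code or values.

```python
# Minimal sketch of the described pipeline: RGB -> per-pixel speed map -> aggregated
# speed command -> heuristic gait parameters -> MPC. All names and numbers are placeholders.
import numpy as np

def predict_speed_map(rgb_image, encoder, speed_head):
    """Per-pixel forward-speed prediction from semantic features (hypothetical interface)."""
    features = encoder(rgb_image)      # segmentation-pretrained backbone
    return speed_head(features)        # (H, W) map of commanded forward speeds

def command_from_speed_map(speed_map, roi_mask):
    """Aggregate the speed map over the patch in front of the robot (e.g. 1m x 0.3m)."""
    return float(np.mean(speed_map[roi_mask]))

def gait_from_speed(v):
    """The kind of hand-designed lookup the review calls tedious and hard to transfer."""
    if v < 0.4:
        return {"gait": "crawl", "step_freq": 1.5, "swing_height": 0.10}
    elif v < 0.9:
        return {"gait": "trot", "step_freq": 2.5, "swing_height": 0.08}
    return {"gait": "flying_trot", "step_freq": 3.5, "swing_height": 0.06}

# The (speed command, gait parameters) pair would then be tracked by the MPC controller
# from prior work; only the speed map is learned, which is the scope criticized above.
```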
I believe many reviewers are convinced by the author's response. The motions and behaviors of the robot in the video are not impressive but I respect the reviewers' opinion. Quality: The reviewers agree that the paper's quality is high. Clarity: All reviewers rated the clarity as good or above. However, a few comments about it in the reviews should be addressed. Originality: The presented method is quite simple but the application to legged robotics seems original. Significance: The reviewers provide a detailed list of issues. There are a few critical ones that the authors should answer in detail, especially the ones by the reviewer Rr19. First, changing the forward speed alone is insufficient to deal with many wild environments. The paper narrowed the scope of learning too much and the performance of the controller is highly limited. The accompanying video is not as impressive as those of the recent related works. The robot probably could run on most of the terrains if it used a better lower-level controller. Second, there are many heuristically designed components because the proposed controller is used as a combination of an MPC controller and a learned planner.
Inspired by recent work on score-based generative modeling, this paper proposes to solve Schrodinger bridges for generative modeling. Different from other Schrodinger bridge works, this paper connects the training to maximum likelihood, and provides a way to compute the log-likelihood of the model. The resulting method can shrink the gap between $p_T$ and the prior distribution, and produce high quality image samples and likelihood comparable to score-based generative models. ## Strengths 1. The proposed method is motivated from optimal stochastic control in a principled way. It bridges the gap between $p_T$ and the prior distribution in existing score-based techniques. 2. Compared to other works based on the Schrodinger bridge, the approach described in this paper seems to be more scalable, without the need to hand design part of the transformation as in Wang et al., 2021, or iterative proportional fitting as in De Bortoli et al. 2021. ## Weaknesses 1. The efficiency of the proposed method is unclear in the current writing. In Theorem 4, the loss function is described as integrals of expectations. However, it is unclear with respect to which random variable the expectation is taken. How do you estimate the expectation in the loss function? If the expectation is over $x_t$, don't you need to simulate the forward SDE in equation (12a) for each datapoint? Since equation (12a) depends on $z_t$, which is parameterized by a deep neural network, there is no easy way to sample $x_t$ without solving the SDE numerically. By contrast, in score-based generative modeling, $x_t$ is sampled as a noise-perturbed Gaussian with a closed form. 2. There are many errors in writing that affect reading and understanding. For example, on page 3, $p_t^{(2)}$ is never defined. In Lemma 2, I am not sure how to understand the expression $v(t, x) = y(t, x)$. As the solution to an SDE, shouldn't $y$ be a stochastic process? How can a stochastic process equal $v(t, x)$, which is a deterministic function? What's the definition of $y(t, x)$? Similarly, in Theorem 3, how is $y_t \\equiv y(t, x_t)$, what's the definition of $y(t, x_t)$, and how is $y_t$ a function of $x_t$? In Corollary 5, why does an ODE contain the stochastic term $g d w_t$? In Algorithm 1, why are there references to (23a) and (24)? Do you mean other equations? 3. The authors reported better performance compared to prior optimal transport methods like Wang et al., 2021. However, is this because the network architecture used in this paper (NCSN++) is better than those used in previous methods? Would it be fairer to compare with the same network architecture? 4. Isn't the framework proposed in this work exactly the same as score-based generative modeling? If you let the forward SDE be $d x_t = (f + gz_t )dt + gdw_t$, and parameterize the score network as $z_t + \\hat{z}_t$, then the reverse SDE is the same as (7b). In this case, you can derive Theorem 4 and Corollary 5 automatically from the theory of score-based generative models. The paper proposes a useful Schrodinger bridge framework for generative modeling that is simpler than previous counterparts. However, there are concerns about efficiency, writing errors, and fairness of the experimental comparison. It is also unclear how different this method is from the original formulation of score SDEs.
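For concreteness, the last weakness above can be written out as follows. This is my own rendering, not taken from the paper: $s(t,x)$ denotes the score $\\nabla_x \\log p_t(x)$, and $f$, $g$, $z_t$, $\\hat{z}_t$ are the drift, diffusion coefficient, and the two controls as used in the review; whether the resulting reverse drift really coincides with the paper's Eq. (7b) is precisely the reviewer's question.

```latex
% Forward SDE with the controlled drift, its standard reverse-time SDE, and the
% substitution g*s = z_t + \hat{z}_t suggested in point 4 above.
\[
\begin{aligned}
  \text{forward:}\quad & \mathrm{d}x_t = \bigl(f(t,x_t) + g(t)\,z_t\bigr)\,\mathrm{d}t + g(t)\,\mathrm{d}w_t,\\
  \text{reverse:}\quad & \mathrm{d}x_t = \bigl(f + g\,z_t - g^2\, s(t,x_t)\bigr)\,\mathrm{d}t + g\,\mathrm{d}\bar{w}_t,\\
  \text{with } g\,s = z_t + \hat{z}_t:\quad & \mathrm{d}x_t = \bigl(f - g\,\hat{z}_t\bigr)\,\mathrm{d}t + g\,\mathrm{d}\bar{w}_t .
\end{aligned}
\]
```

Under this substitution the reverse drift no longer involves $z_t$ explicitly, which is what makes the reduction-to-SGM question above hard to dismiss.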
<doc-sep>Inspired by stochastic control and forward-backward SDE theory, this paper proposed an iterative algorithm to approach the Schrodinger bridge (SB) problem, generalizing variational likelihood-based training of score-based generative models (SGM). The proposed method is then used to train both the generative process and the inference process of SGMs, recasting the generative modeling problem as the problem of SB. The authors then performed experiments on standard image datasets. This paper introduced a new way to look at the SB problem via a non-linear Feynman-Kac representation of a PDE that defines the SB optimal path measure. This overall framework seems to provide significant new insight on how SB can be solved practically, which is a key merit of the work, and I applaud the authors for the finding. The strengths of the paper are: * Proposing an interesting framework to approach the SB problem, via solving the SB optimization problem using a PDE to define a pair of equivalent forward and backward SDEs whose path measures correspond to the SB solution, and then solving this SDE using a Feynman-Kac type of representation. * Making a connection with prior work, i.e. generalizing the likelihood-based training of score-based generative models. Novelty aside, I do find the paper requires some major improvement in terms of clarity and a clearer discussion and analysis of limitations and the similarity/difference with prior work. Below are the weaknesses of the paper. * Clarity needs work: the narrative at times is not very clear, which makes it harder to grasp what is being solved. (See more detailed questions below) * Differences with some prior work (such as iterative proportional fitting) are not well explained, even though the proposed likelihood training of the forward SDE and the backward SDE seems very similar. * Limitations are not properly discussed: training the inference process comes at an increased computational cost, while low computational cost is arguably one of the most important features of SGM. The convergence properties of the proposed method are not discussed, i.e. whether or not the proposed likelihood training stage will lead to convergence towards the SB solution. A thorough discussion of these limitations and perhaps an experiment on compute cost analysis would be needed. Some more detailed comments and questions: * Throughout the introduction, it is not very clear what “computing the log-likelihood objective of Schrodinger Bridge” means. The narrative needs more work. More precisely, it wasn’t very clear to me what the “model” is until I finished reading section 3, and it took me multiple passes. For example, Theorem 4 refers to the log-likelihood of SB; it’s the log-likelihood of which model exactly? Fixing the inference SDE (i.e. $z_t$) and looking at the marginal likelihood induced by $\\hat{z}_t$? Is it a likelihood or a lower bound? The (bold) $z$ used in the previous theorem seems to not depend on any parametric form (as it solves the SB problem), but here it suddenly becomes parametric. * End of 2.1, it is not clear why having a more flexible framework will help mitigate the instability problem just mentioned. * Is the order of the arguments of $h$ in (11) incorrect? * The presentation of Lemma 2 is a bit confusing: is it correct to say that despite the randomness induced by the Brownian motion used in the Itô integral, the solution $y$ (of an SDE) will still be a deterministic and smooth function, as it is after all a solution of the PDE?
What’s confusing is that v is a solution of a PDE and y is a solution of an SDE, which is random by nature. * Presentation of theorem 3 also needs work: is the bold $z$ related to the regular $z$ used in Lemma 2? The notation hasn’t been introduced. * Last paragraph of page 5: what does the new interpretation of optimal control bring to us? Is there any practical benefit of it? * Last paragraph of 4.1, I am not sure how meaningful the comparison is with some other OT methods, especially since the SB problem is connected to “entropy-regularized” OT, and is not exactly OT. Minor points / typos: * End of page 6, is it a maximization (of likelihood) instead of min? * The last sentence of page 7: can be founded -> found The paper introduces a new framework for likelihood-based training of the forward and backward SDEs that are inspired by the Schrodinger bridge problem, which is quite novel. But the paper is not very well written and the practical limitations of the proposed method are not sufficiently addressed. Therefore I do not vote for acceptance given its current form. <doc-sep>This paper presents a framework for likelihood-based training of Schrödinger bridge-based generative models using the theory of Forward-Backward SDEs. In doing so, the paper draws relations to score-based generative models (SBGMs) and shows that the proposed framework is a generalization of the SBGM framework. The proposed framework also provides additional control to the forward SDE to reach the prior distribution unlike the case of SBGM. The authors propose a practical training algorithm which alternates between likelihood training of the forward and backward controls. Experiments on multiple image generation benchmarks demonstrate that the proposed SB-FBSDE performs well as a generative model of the data. [Novelty and Significance] The paper presents an elegant framework for likelihood training of Schrödinger bridge (SB)-based generative models. SB generative modeling is a developing area in the field of diffusion generative models and this paper makes a significant contribution to the area. It draws interesting connections to (and generalizes) the framework of previous diffusion-based generative models (e.g., SBGM). [Writing and Clarity] The paper is written fairly well, albeit for the expert reader. There exist some clarity issues: - For Theorem 1, please provide a proof-sketch and/or cite the exact theorem number. In particular, it is unclear to me how to arrive at 7(a) and 7(b) from the Kolmogorov equations in (6). Is this after time-reversal? - For the proofs, please include a brief description of each step. In the current state, some steps of the proofs are unclear. - In the proof of Theorem 3, why do you begin with (1) as the reference measure $\\mathbb{P}$ and not 7(a)? - In several steps, $dt^2$ and $dtd\\mathbf{w}_t$ terms have been dropped. Please mention this in the proof text for clarity. - In Eq. (20), I do not understand how you go from $\\nabla \\cdot(g \\hat{z}_t-f)$ to $-\\hat{z}_t^{\\top}(g \\nabla \\log p_t^{\\mathrm{SB}}) - \\nabla \\cdot f$. Corrections: - Eq. (15) is not an ODE and looks like a typographical mistake — the $g\\mathrm{d}\\mathbf{w}_t$ term should not be there. - "... which can be _probability_ expensive on high-dimensional datasets" — *prohibitively*. - Section A.1: ito —> Ito. - Section A.1 before Eq. (17). What is $b$? I believe it should be $f$. [Empirical Evaluation] The empirical evaluation, although not extensive, is reasonable. 
One thing that is unclear from Table 2 is what the primary baselines are; in my opinion, they should be Multi-stage SB and SGMs. It's good that the authors have reported results from several previous works; however, most of the results are not directly comparable, so it is important to specify what the main baselines are. A particular example is DOT, which is not a generative model in itself but a sample improvement technique that operates on a pretrained GAN. The authors are missing the following work in Table 2 in the optimal transport model class: Ansari, Abdul Fatir, Ming Liang Ang, and Harold Soh. "Refining deep generative models via discriminator gradient flow." *arXiv preprint arXiv:2012.00780* (ICLR 2021). On reproducibility: From the paper, it appears that the practical implementation has several moving parts and is not straightforward. The paper does not contain a reproducibility statement and also does not provide enough details for an expert reader to reproduce the results. I encourage the authors to release their code as supplementary material and include further details of the practical implementation in the Appendix for the benefit of the research community. [Questions] - Can you elaborate on what exactly is meant by "... SGM by enlarging the class of diffusion processes to accept *nonlinear* drifts ..."? - How is $\\nabla\\log p^\\mathrm{SB}_t$ computed? - What is the form of the function $f$ in your practical framework/implementation? [Suggestions] - It may be better to move the paragraph on "Connection to flow-based models" before "In practice, we parameterize the forward...". The current structure breaks the reader's flow. - To make the paper more accessible to readers unfamiliar with Schrödinger bridges, it would be helpful if you provided a brief review in the Appendix. The paper presents a novel framework for likelihood training of Schrödinger bridge (SB)-based generative models. The technical contribution of this paper is significant: it generalizes the SBGM framework and provides a practical algorithm for likelihood training of SB models. Although the paper has some clarity issues, I believe these can be fixed in the revision. Overall, I think this is a good paper and I recommend acceptance. <doc-sep>In this paper the authors introduce a score-based generative model which relies on a Schrodinger bridge formulation. More precisely the authors derive a loss function for the control of a forward/backward SDE. By alternating the minimization of the loss function between the forward control and the backward control they obtain an approximation of the Schrodinger bridge. This loss function is derived using the forward-backward SDE (FBSDE) framework to obtain a non-linear Feynman-Kac representation of the evolution of the log-potentials. This theoretical/methodological contribution is accompanied by experiments in generative modeling on the MNIST/CelebA/CIFAR-10 datasets. STRENGTHS: -Overall the presentation of the paper is good with a clear presentation of the score-based generative models and the Schrodinger bridge problem. -The idea of using the FBSDE framework is new and original. I think that the idea of using Feynman-Kac based representations to solve (non)linear PDEs is definitely a good idea. -The toy experiments and the generative modeling experiments are satisfactory. WEAKNESSES: -My first concern is with the formulation of the loss function, which I find opaque to say the least.
Looking at (14) it turns out that the loss function is in fact given by $\\int_0^T \\mathbb{E}[\\| \\hat{z}_t - g \\nabla \\log p_t^{SB} + z_t \\|^2]$ where the expectation is taken over the path measure induced by the forward SDE. It is not clear how $\\nabla \\log p_t^{SB}$ is computed. The classical ideas of [1,2] cannot be used since $\\nabla \\log p_t^{SB}$ is not given by an Ornstein-Uhlenbeck process. Therefore the authors might rely on the idea of approximating $\\nabla \\log p_t$ for general processes using a Langevin discretization. The authors should clearly state what loss function they use in practice (time discretization + approximation of $\\nabla \\log p_t$). Maybe they use the first part of (14) to compute the loss function but this is not clear either. The pseudocode provided by the authors is not helpful and there is no anonymous repository to check out the code. -My second concern is a consequence of the first one. The authors claim that their work is different from [3,4] but in practice Algorithm 1 is the same training algorithm as the one derived in [3]. Indeed, because the authors propose an alternating minimization of the loss function they derive, the loss function they minimize at each step corresponds exactly to computing the IPF step (Algorithm 1 in the current paper is almost the same as [3, Algorithm 1], see also [4, Algorithm 1], with the only difference being in how the loss is computed, which is not clear from the current paper; see the previous comment). In this sense the algorithm provided by the authors corresponds to a rewriting of [3,4] (see the schematic sketch at the end of this review). Hence I think the claims of the authors concerning the originality of the method are overstated and misleading. Similarly it is misleading to state that no connection with Langevin based sampling has been established for Schrodinger bridge models; see [3,4]. To summarize, there is a clear overlap between the method proposed by the authors and the work of [3,4] which is not acknowledged in the paper. -Another concern I have is related to the derivation of the algorithm. Given how the authors derive Algorithm 1 there is no guarantee that 1) the procedure is going to converge and 2) if it converges, that the limiting bridge is indeed the Schrodinger bridge. Both these facts can be obtained using the IPF approach, see [3, Proposition 6]. As of now Theorem 4 is only valid at equilibrium, i.e. for the optimal set of controls $(z_t, \\hat{z}_t)$. For arbitrary controls Theorem 4 might still be valid (or at least in a lower-bound form) but this is not what is currently stated in the paper. The way the paper is written it looks like Algorithm 1 is motivated and justified by Theorem 4, which is not the case. -I have a minor concern with the contribution of the authors to the FBSDE theory. It seems that Lemma 2 and the current FBSDE are not enough to provide a non-linear Feynman-Kac representation. I find that this contribution is not clearly presented by the authors. Also, the proof seems to be very heuristic: the authors do not check the necessary regularity conditions to apply Itô formulas. They refer to the ``same regularity conditions in Lemma 2'' but Lemma 2 is not informative because it is only stated that the functions ``G, f, h and $\\varphi$ satisfy mild regularity conditions''. The concept of viscosity solution is not reintroduced in the paper (even in the supplementary material). -Finally, I also have concerns with the experiments.
In particular there is no comparison with [3,4], which are currently the main competing approaches for score-based generative modeling using Schrodinger bridges. Since Algorithm 1 and [3, Algorithm 1] appear almost identical, it is unfortunate that the authors do not provide any comparison with this approach in a similar setting (same number of steps, same stepsize, and without the corrector step). COMMENTS: -It would be interesting to observe the generative model obtained after one iteration of the algorithm, i.e. when the backward network is updated for the first time. This should recover the generative model obtained using SGM, with further steps being a refinement of the method. -It would be interesting to precisely quantify the influence of the corrector (which seems to constitute the only difference between Algorithm 2 and the sampling procedure proposed in [3]). -``Poof'' and ``Remakrs'' in the title of the sections in the appendix. -In Algorithm 1 do the authors really use a gradient descent algorithm? Or is a more efficient optimizer like ADAM used to train the neural network? To conclude, I think that the idea of using FBSDE to derive the Schrodinger bridge is original. However the method has a lot of overlap with the works of [3,4] which is not acknowledged by the authors. The authors do not specify which loss function they use in practice, and because they do not give access to the code (or at least to a detailed pseudocode), it is hard to determine what the original methodological contribution of the paper is. I think that the paper is not mature enough and that a true justification of Algorithm 2 with FBSDE is still missing. Experimentally speaking the authors did not compare their work with existing SB methods. Based on these comments I recommend the rejection of the paper.
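To make the overlap discussion concrete, the schematic below is my own reading of an alternating (IPF-style) training loop of the kind described in the second concern above; it is not the authors' code. The drifts, the `stage_loss` term, and all function names are placeholders, and the per-step loss is deliberately left abstract because, as noted, it is unclear which form of Eq. (14) is used in practice.

```python
import torch

def simulate_paths(x0, drift, g, n_steps=100, T=1.0):
    """Euler-Maruyama rollout of dx = drift(t, x) dt + g(t) dw (placeholder discretization)."""
    dt = T / n_steps
    xs, x = [x0], x0
    for k in range(n_steps):
        t = k * dt
        noise = torch.randn_like(x) * (dt ** 0.5)
        x = x + drift(t, x) * dt + g(t) * noise
        xs.append(x)
    return xs

def train_one_stage(frozen_ctrl, learned_ctrl, sample_x0, stage_loss, f, g, optimizer):
    """One half of the alternation: paths are generated under the frozen control
    (drift f + g * frozen_ctrl), then the other control is updated on those paths."""
    x0 = sample_x0()
    with torch.no_grad():  # the simulated paths are treated as data for the learned control
        paths = simulate_paths(x0, lambda t, x: f(t, x) + g(t) * frozen_ctrl(t, x), g)
    loss = stage_loss(paths, learned_ctrl)  # e.g. some time-discretized form of Eq. (14)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Alternation: repeat train_one_stage(z, z_hat, ...) and train_one_stage(z_hat, z, ...),
# which is exactly why a side-by-side comparison with the IPF-based algorithm of [3] is asked for.
```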
The paper presents a new computational framework, grounded in Forward-Backward SDE theory, for the log-likelihood training of Schrödinger bridges, and provides theoretical connections to score-based generative models. The presentation of the results is not satisfactory (the algorithm should be clarified in several places and the notation is not accurate, which raises doubts about the soundness of the method). The paper is thus very hard to read for non-experts on the subject. Furthermore, some reviewers raise concerns about the similarity of this method to other algorithms that were never cited in the paper. Finally, the empirical analysis, as of now, is limited. In the rebuttal the authors carefully addressed lots of the comments. However, the paper's presentation still needs to be substantially improved (de-densification of the paper would be extremely important since now the main narrative is very convoluted). The authors made several changes in the manuscript, but a detailed discussion regarding training time complexity still seems to be missing (in both the main body and the Appendix) in the new version of the manuscript, even though this was one of the main concerns raised. Overall, the manuscript requires major rewriting. Since the comments regarding the content were successfully addressed (the reviewers are satisfied with the detailed answers given by the authors), the paper satisfies the conference bar and can be accepted.
The proposed work introduces a defense which uses an auxiliary reconstruction task along with the classification to ‘trap’ the backdoor within the classification head. In the second stage, a new classification head is trained from scratch to completely remove the effect of the backdoor. The intuition is that the auxiliary task ensures that low-level features of the image are preserved within the backbone or the stem network, hence reducing the effectiveness of the backdoor. Results are shown on CIFAR-10, GTSRB and ImageNet-12 against various attacks. **Strengths:** — The defense method introduced is novel and effective. It is also easy to use without requiring too much hyper-parameter tuning compared to previous methods. — The authors conduct an extensive ablation study to illustrate the effectiveness of the method. An adaptive attack is also considered, against which the defense still remains effective. — Results are shown on a variety of backdoor attacks, indicating that it can be used to defend against multiple attacks. **Weaknesses:** — Although the authors do provide an intuition that low-level features are preserved better with the auxiliary task, it is not clear why the proposed method should be effective. A feature analysis on the difference with and without the reconstruction task could be helpful in identifying why the defense is effective to such a degree. — The reconstruction task proposed by the authors is not very well understood. An experiment relating the quality of reconstructed images to ASR could improve understanding. Although the authors do provide a variant of this experiment in Table 8 of the appendix, a qualitative analysis would be better in this scenario. Another experiment would be to vary the size or capacity of the ‘decoder’ and observe the variation in ASR. — Similar to previous methods, a clean heldout dataset is required, which is typically 10% of the data. This can be difficult to obtain for larger datasets. Yes <doc-sep>This paper proposes a new defense strategy *Trap and Replace* to protect deep neural networks from backdoors in the poisoned dataset. This paper presumes that the backdoor pattern is easy to learn, so it simultaneously trains a standard image classification model, consisting of a stem network and a classification head, and an extra image reconstruction model, consisting of the same stem network and a reconstruction head, to encourage the stem network to learn low-level visual features. Then the stem network is frozen and the classification head is re-initialized with random parameter values, and the new model is trained on a small but clean dataset. The experimental results show that Trap and Replace outperforms SOTA defense strategies in most cases. Strength: 1. Introducing a reconstruction task to force the stem model to learn visual features is a novel idea. It is self-supervised training that does not need extra labeled samples. 2. Training the classification head on a clean dataset can reduce the attack success rate while keeping accuracy at a high level. 3. The paper's summary of related works is well organized and comprehensive. 4. The experiments are well designed and clearly shown in the tables. 5. The ablation study showing that both the Trap and Replace stages are necessary is convincing. Weakness: 1. Why using the fully connected layer as the classification head fails to defend against the backdoor should be discussed. 2. How many layers should be chosen as the classification head in other models that are not included in this paper should be considered. Comments: 1.
In previous works, the classification head is considered to be the fully connected layer. You could explain the definitions of the classification head and reconstruction head in the introduction section. 2. In line 156, there is a repeated word "assumptions". <doc-sep>The authors propose to trap the backdoor in a lightweight classification head on top of a low-level feature extractor and replace it with a clean classifier to remove the backdoor. Extensive experimental results on different datasets against various attacks show the effectiveness of the proposed method. Strengths: a. The idea is interesting, and the motivation is reasonable. b. The authors provide rich ablation experiments to evaluate the proposed method. Weaknesses: a. Most of the previous methods only need access to either the poisoned dataset or a small clean dataset, but the proposed method requires access to both, which limits its practical value. b. According to Table 1, the degradation in clean accuracy is considerably large. This may be because the final classifier is only trained on limited data (the holdout dataset). No. I think the largest limitation is the threat model. As far as I know, this is the first method that requires both a poisoned dataset and a clean holdout set. If so, I suggest the authors discuss the practical scenario of such a threat model. <doc-sep>The authors of this work proposed a defense method against backdoor attacks. The defense method consists of two stages. In the first stage, they trapped the backdoors in a subnetwork. In the second stage, they replace the poisoned subnetwork and retrain the network with clean samples. Consequently, this method outperforms previous state-of-the-art methods. Strengths: 1. The writing of this paper is very clear and easy to follow. 2. The experimental results show that the method outperforms the other six baseline methods against various attack methods except in some cases (Blend, Trojan SQ, Trojan WM). 3. The ablation study clearly shows the significance of the two stages. Weaknesses: 1. In my opinion, the threat model is pretty unrealistic (with some holdout clean samples); however, this assumption follows previous works. 2. The auxiliary image reconstruction task encourages the stem network to keep sufficient low-level visual features that are hard-to-learn but semantically correct, protecting the stem network from overfitting to the easy-to-learn but semantically incorrect backdoor correlations. I cannot figure out the relation between this intuition and how the effect of poisoned data can be trapped. Typos: 1. Line 156 has two “assumptions”. 2. Line 238 “Tojan” --> “Trojan”. Yes. The authors state that there are no potential negative societal impacts of this work.
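To fix ideas, here is a minimal sketch of the two-stage Trap-and-Replace procedure summarized in the reviews above. This is my own reading, not the authors' code: the head architectures, the reconstruction-loss weight, and the exact freezing choices are placeholder assumptions.

```python
# Stage 1 (Trap): jointly train classification and reconstruction heads on poisoned data.
# Stage 2 (Replace): freeze the stem, discard the old head, train a fresh head on clean data.
import torch
import torch.nn.functional as F

def stage1_step(stem, cls_head, rec_head, x_poisoned, y_poisoned, opt, rec_weight=1.0):
    """Trap: the easy-to-learn backdoor correlation is (hopefully) absorbed by cls_head,
    while the reconstruction task keeps the stem's features low-level and semantic."""
    feats = stem(x_poisoned)
    loss = F.cross_entropy(cls_head(feats), y_poisoned) \
         + rec_weight * F.mse_loss(rec_head(feats), x_poisoned)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def stage2_step(stem, new_cls_head, x_clean, y_clean, opt):
    """Replace: train a randomly re-initialized head on the small clean holdout set
    with the stem frozen."""
    with torch.no_grad():
        feats = stem(x_clean)
    loss = F.cross_entropy(new_cls_head(feats), y_clean)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```

A sketch like this also makes the accuracy-degradation concern above visible: the final classifier in stage 2 only ever sees the small clean holdout set.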
The recommendation is based on the reviewers' comments, the area chair's personal evaluation, and the post-rebuttal discussion. This paper proposed a new training method to defend against backdoor attacks. While all reviewers see merits in this paper, some discussions about (1) the practicality of the defense using clean data samples and (2) fair comparisons to existing defenses have been raised and discussed. During the author-reviewer discussion phase, the reviewer had detailed interactions with the authors to clarify different use cases and practical scenarios of the proposed defense and the fairness of the evaluation. So both major concerns are adequately addressed. Another reviewer also champions acceptance in the internal discussion. All in all, I am recommending acceptance. My confidence is lower compared to other submissions simply because this paper has the lowest average rating score of all papers I recommend acceptance.
This paper presents a novel algorithm for identifying "provably robust adversarial examples": large regions in the input space that provably contain only adversarial examples. Each region corresponds to a single adversarial example $\\tilde{x}$ found in the center of the region, along with all the points that can be generated by applying a sequence of transformations to $\\tilde{x}$. The transformations considered in the paper are semantically meaningful changes to the original image. Critically, we can be guaranteed that $\\tilde{x}$ will be misclassified even if _it_ is perturbed. The paper demonstrates that the algorithm can generate regions of non-trivial size on networks of non-trivial size. For example, for a CIFAR10 classifier with 6 layers and ~62k neurons, it finds axis-aligned regions containing a median of $10^{573}$ adversarial examples. In addition, the paper shows that provably robust adversarial examples can be used to create adversarial examples to $L_2$-smoothed classifiers that are more robust to $L_2$ noise as compared to adversarial examples generated directly via PGD attacks. # Strengths - **Originality**: The paper presents a novel algorithm to generating provably robust adversarial examples corresponding to semantically meaningful regions. - **Quality**: - The paper demonstrates that the algorithm can scale to generate provably robust adversarial examples of non-trivial size on networks of non-trivial size. - The experiments in the paper have clearly been carried out with an attention to detail. - **Clarity**: - The paper describes the algorithm in sufficient detail to enable reproducibility. (In particular, the appendix explains important details that would be required to re-implement the approach.) - **Significance**: - The approach presented is modular, using existing certification algorithms as a subroutine. This has two key benefits: - Improvements to existing certification algorithms can be used to improve the search efficiency for provably robust adversarial examples - Certifiers which handle new classes of transformations could be used to generate provably robust adversarial examples for these classes of transformations. - While this paper focuses on adversarial examples, the approach can be used in any setting where we are interested to find large regions of input space with a constant classification (or, more generally, where a linear function of some neuron activations exceeds a threshold). I can imagine this being applied to better understanding/visualizing how the output of a neural network varies as the inputs change. # Areas for Improvement ## Originality - In the introduction, the paper states that "our regions are guaranteed to be adversarial while prior approaches are empirical and offer no such guarantees". Section B.2. mentions Liu et al., which "is also capable of generating provably robust hyperbox regions". Is the statement in the introduction wrong? ## Quality - The baseline used seems to be a straw man, since it is simply "our method" but with uniform rather than adaptive shrinking; I would always expect "our method" to outperform the baseline. I would prefer to see the comparison to Liu et al. (and any other methods that produce provably adversarial regions if they exist) in the main body of the paper instead. - In Table 2, the transforms selected appear quite arbitrary; in particular, they appear like they could have been cherry-picked to flatter the presented approach. 
Some detail on how the transformations were selected would alleviate this concern. ## Clarity - Experimental setup for Section 5.3: - I struggled to understand what experiment was run in this section and what the results in Table 3 show. I understand that the goal of this section is to show that robust adversarial examples are significantly more effective against defenses based on randomized smoothing, but the setup for the experiment is still not clear to me. I'd be happy to discuss this with the authors, but some preliminary questions: - What are the units of 'robustness'? - Is the result for "Ours" normalized to 1? - Table 3: - Are the results for "baseline" and "ours" mean, or some other summary statistic? - What result exactly is shown for "individual attacks"? Were multiple attacks generated for each image, or was the individual attack the attack that was used to determine the value of $R'$? - Reporting # of concrete examples: - In Table 1, the SizeO column reports an _upper bound_ on the number of concrete examples in the polyhedral region. This is not immediately clear from the description; a reasonable reader might expect that this is just the number of concrete examples. I would request that the authors either: - Estimate the actual number of concrete examples - Clearly indicate in the table description that this is an overestimate - Remove the SizeO column. - I have a similar concern with the "Over" column in Table 2; I don't see how an overestimate of the number of concrete examples in the region is relevant. # Additional Comments ## Clarity Here are some issues with clarity that do not constitute major issues, but would still improve the paper significantly if addressed. At the high level, he paper appears to be trying to squeeze too many results in, leading to important details being omitted. ### Missing Details - Section 5.3: "We exclude images whose radius is too big" - what constitutes too big? For these images, what is the robustness to $L_2$ smoothing defenses of adversarial examples generated by your method? - Table 1 / Table 2: Is the time reported here a median or average, and is it only for instances where the methods succeed? - Table 2: The value of #Splits is listed but no guidance is provided to the reader as to how to interpret the result. I would recommend moving this information to the appendix or adding an interpretation. - Definition 2: "whose $L_2$ ball is certified as adversarial" - I didn't find a definition in the paper of what it means for the $L_2$ ball to be adversarial. (I would have assumed that this means that every sample in the ball has a different classification as compared to $x$, and not that every sample has to have the same classification as $\\tilde{x}$, but the rest of the paper seems to suggest the latter definition.) ### Miscellaneous Points - Section 3 (Overview): "... assumes an algorithm $\\alpha$" - the variable $\\alpha$ is already used above to indicate the probability that `CERTIFY` fails. I'd recommend using a different variable here. - Section 3.2 (Computing an Underapproximating Region): "sacrificing a few pixels where the network is not very robust" - did you mean where the _adversarial example_ is not very robust here? If the network is not robust for a certain pixel, it doesn't make sense to me to sacrifice _those_ pixels ... - Section 5.1: "Column #Regions" is referenced, but it is "#Reg" in both tables. 
## Spelling / Grammar - Section 2.2 (Geometric Certification): "creates overapproximation of the region" -> "creates an approximation of the region" - Figure 1: "repred crosses" -> "red crosses"? # Questions This is out of the scope of this paper, but the result in Section 5.4 suggests that it might be possible to find perturbations to empirically robust adversarial examples (empirically verified by an EoT approach) that result in a correctly classified image. Do you have any sense whether it would be possible to consistently find such "dis-adversarial attacks" on empirically robust adversarial examples? Overall, I recommend accepting the paper. The paper presents a novel approach to finding large regions of adversarial examples, with strong experimental evidence that it scales well. The details provided would enable other researchers to reproduce the presented approach. Most importantly, this approach is likely to be something that other researchers can use and build upon. Having said that, the paper has some issues with clarity. Details are provided in the main review, but I'd like to highlight in particular Section 5.3, which I found particularly hard to parse. N.B. My current recommendation for this paper as-is is 6, but I'd be quite happy to upgrade the recommendation to 8 if the bulk of my concerns around clarity are addressed. --- ## After Paper Discussion Period During the paper discussion, the authors addressed the bulk of my concerns around clarity, and I've upgraded my recommendation to 8 as a result. <doc-sep>The manuscript introduces a definition of provably-robust adversarial examples: a set of examples that are verified to be classified with labels different from that of the input of interest. The main idea of the technique is to shrink a box-like region from an overapproximation to a smaller, verifiable sub-region such that a robustness verifier will return robust for all points in that particular sub-region. In the evaluation part, the authors demonstrate the effectiveness of the approach with several experiments, i.e. robustness against intensity transformations and the randomized smoothing defense. ### Strength This paper has clearly stated its objective, the approach and the corresponding evaluations. The proposed technique is designed for a concrete objective, generating provably-robust adversarial examples. The approaches are well-documented in the paper and evaluations are conducted over several datasets and models. The paper has spent a lot of space explaining the technique from a high-level perspective down to its implementation details, which helps the reader to better understand the algorithm. ### Weakness My concerns about the paper mainly focus on the following three aspects: 1. Motivation of the "provable" part of the adversarial examples is missing. This paper relates to the prior work on generating robust adversarial examples [1, 2], which can serve as a good motivation for generating "robust" adversarial examples: these papers discuss several physical distortions encountered when applying adversarial examples to real-world cases for images and audio. In other words, these distortions are real-world adversaries for artificial adversarial examples. However, the motivation for the "provable" part is missing to me. I understand the "provable" part can be related to a counterpart problem, provably-robust networks.
The motivation for provably-robust networks is to build networks whose robustness can be guaranteed for all possible attacks, so that the evaluations are free from the choice of adversaries. To that end, can the authors explain more about the motivation for "provably"-robust adversarial examples? A follow-up question is: does the robust region proposed in this paper actually contain the physical distortions that may be encountered in the real-world cases [1, 2], and how often? It seems that the more important thing we need to prove is that these regions are guaranteed to contain all or part of the distortions one can possibly encounter, so that an adversary does not need to worry that an adversarial example will fail in practice. 2. Important experimental setups and discussions are missing. - Unfortunately, even after spending some amount of time during my review, I could not locate the concrete definitions and actual implementations of the intensity changes and geometric changes as mentioned in Tables 1 and 2. This information would be helpful for understanding the importance of the results. - How is "provable" evaluated? Tables 1 and 2 seem to only evaluate how big the region is, and Table 3 seems to use randomized smoothing as an attack against adversarial examples generated by the proposed approach. However, the motivation of the paper mostly relies on [1, 2], where the "robustness" of an adversarial example is actually not designed against a prediction defense, i.e. randomized smoothing, but against transformations and distortions. - Unless I misunderstand the results, Tables 1 and 2 seem to aggregate over fewer than 100 examples per dataset, and it may take up to ~5000s for one example on CIFAR. I understand that the bottleneck is that the verifier is usually resource-consuming. If that is the case, the authors may need to convince the readers under what circumstances this trade-off between resources and provable robustness is worthwhile compared to the fast empirical approaches. 3. Writing. I find the writing of the method part well-organized and polished, which made reading about the approach enjoyable. However, the experiment part is relatively dense and sometimes even difficult to read when a lot of notations and symbols appear in a paragraph without sufficient explanations to remind the reader what they refer to. Also, it would be best to add explanations of the notations in the captions of figures and tables so the reader does not have to search for what is measured in the table. [1] Athalye, A., Engstrom, L., Ilyas, A., & Kwok, K. (2018, July). Synthesizing robust adversarial examples. In International conference on machine learning (pp. 284-293). PMLR. [2] Qin, Y., Carlini, N., Cottrell, G., Goodfellow, I., & Raffel, C. (2019, May). Imperceptible, robust, and targeted adversarial examples for automatic speech recognition. In International conference on machine learning (pp. 5231-5240). PMLR. Overall I lean toward a weak rejection at this stage of the reviewing process but I am open to any discussions. The reasons that prevent me from giving higher scores are the insufficient descriptions of the motivations and the current way the experiment sections are written, which I have mentioned in my main review. <doc-sep>The main contribution of the paper is a framework that will output a large batch of adversarial examples, assuming access to a few black-box mechanisms. The term "provably robust" appears misleading; there is no theory showing that the examples must be adversarial.
While the authors highlight that a massive number of adversarial examples (say $10^{573}$) is produced by the algorithms, this number seems highly dependent on the particular problem and lacks a theoretical justification. On the novelty of the algorithms, I feel they rely on many black-box components and their properties, which lowers the technical contribution of the work. **Updates after discussion** I agree that the paper brings out interesting ideas and the experimental results are convincing. However, I also feel the authors need to tone down the claimed contributions on the theoretical part because many of the guarantees hinge on black-box components that are leveraged from prior works. See above.
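For readers trying to picture the procedure these reviews describe, here is a minimal sketch of the shrink-until-certified idea; it is my own rendering, not the paper's algorithm. The verifier is treated as exactly the kind of black-box oracle criticized above, and the uniform shrinking factor corresponds to the baseline variant rather than the paper's adaptive per-pixel shrinking.

```python
# Start from an overapproximating box around a candidate robust adversarial example and
# shrink it until a black-box verifier certifies that every point in the box is adversarial.
import numpy as np

def shrink_to_verified_box(lower, upper, is_region_adversarial, max_iters=50, factor=0.9):
    """lower/upper: per-dimension bounds of the initial box.
    is_region_adversarial(lo, hi): black-box certifier returning True if the whole box
    [lo, hi] is guaranteed to receive the adversarial label."""
    center = 0.5 * (lower + upper)
    half = 0.5 * (upper - lower)
    for _ in range(max_iters):
        if is_region_adversarial(center - half, center + half):
            return center - half, center + half      # provably adversarial region found
        half = half * factor   # uniform shrinking (the straw-man baseline); the adaptive
                               # variant would shrink only the dimensions where
                               # certification fails
    return None  # no certifiable box found around this center
```

Note that all guarantees in such a loop come from `is_region_adversarial`, i.e. from the external verifier, which is the point raised about the "provable" framing.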
In this paper, the authors introduce and study provably robust adversarial examples. Reviewers had mixed thoughts on the work. One reviewer mentioned that the "provable" robustness is somewhat overstated in the work: looking at the title and abstract, it sounds like the paper develops a new algorithm that is guaranteed to be robust, but in reality the robustness hinges on the black-box verifiers (which is acknowledged by the authors during discussion). I agree with this. This should be more clearly stated in the work. I strongly suggest that the authors calibrate the exaggerated statements of contributions in the revised draft. Having said this, reviewers liked the experimental study of the paper and found it to be comprehensive and convincing.
1. In the introduction, the authors separately point out the issues of DUQ and DKL. However, these issues are not convincing as no citations or theoretical proofs are provided in this paper. The notations in the intro are also not well-defined. X, x, and x* are used without distinction; they should be clearly defined as vectors or matrices. 2. The technical contribution is very incremental. The proposed vDUQ simply applies the inducing point GP in DUQ to mitigate the uncertainty collapse in DKL. The "inducing point variational approximation of the GP predictive distribution", referred to as the inducing point GP, is not clear to me. What exactly does "inducing point GP" refer to? Why can the so-called inducing point GP speed up inference in the GP model? What does "decouple it from dataset size" mean? All these important points are not clarified in the introduction. 3. The theoretical contributions are also not well-organized. The authors fail to prove that spectral normalization as a regularization scheme can be used to mitigate uncertainty collapse. Moreover, how does spectral normalization guarantee the effectiveness of the inducing point GP in vDUQ? 4. I also have some concerns about the experimental results on causal inference. Why does treatment effect estimation have uncertainty settings? The authors should fully explain the uncertainty settings in causal inference, as most of the causal baselines are not proposed for uncertainty settings. <doc-sep>This paper proposes a single deep deterministic model harnessed with spectral normalization to improve uncertainty in regression and classification tasks. The method relies on deep kernel learning + inducing point approximation for GP, and spectral regularization of the deep model to avoid uncertainty collapse in OOD. The main contribution of this paper is methodological. The paper has extensive simulations, and demonstrates the utility of the proposed approach in a wide range of applications for both regression and classification. Having said that, the proposed approach can be seen as a modification to [Liu et al., 2020], with a different approximation and different regularization. In that sense, the novelty of the paper could be seen as modest, and a comparison with [Liu et al., 2020] highlighting the differences in practice is missing. More comments: * What is the main advantage of this approach w.r.t. [Liu et al., 2020]? How do these compare in terms of uncertainty estimation and computational speed? This should be included. The authors discuss [Liu et al., 2020] in related work, highlighting the important difference that [Liu et al., 2020] is a parametric model, but it is similar in that both formulate it as a GP and regularize for distance-awareness. A comparison with [Liu et al., 2020] would strengthen this paper considerably. * Figure 1 could be improved: it only shows deep ensembles as a baseline; how about the other approaches discussed in the paper? Moreover, it is unclear whether vDUQ provides better in-between uncertainty compared to Deep Ensembles (similar width, but the deep ensembles interpolation is smoother). * A deeper focus on the normalization, i.e., theoretical or empirical properties of spectral normalization and comparison with other normalization schemes, would make the paper more interesting. * The paper has some strong/categorical sentences with which we do not agree: e.g., first intro paragraph: "there is no single method that works on large datasets ..." [Liu et al., 2020] would be such a method, for example.
* Simulation results are convincing in terms of utility, as the authors demonstrate that the proposed approach works on high-dimensional, large datasets and in meaningful applications such as causal inference for healthcare. Yet, the experiments miss the opportunity to elucidate how much spectral normalization contributes compared to other normalization schemes. * Could the authors include an ablation study showing the impact of the number of inducing points in the approximation? The authors mention as a strength that a low number of inducing points is good enough, so showing evidence for that would strengthen the paper.<doc-sep>--- ##### Summary: This paper proposes variational deterministic uncertainty quantification (vDUQ), which adopts the stochastic (sparse) variational deep kernel learning (DKL) method to enable uncertainty estimations for deep models. To avoid uncertainty collapse, the deep neural network in the GP kernel is regularized with spectral normalization, which ensures a bi-Lipschitz constraint. Experiments show that vDUQ is effective in uncertainty quantification tasks. ##### Reasons for score: The idea is clear and the paper is easy to follow. However, my major concern is about the significance of the contribution and the experimental results. See my detailed comments below. --- ##### Pros: The idea of obtaining uncertainty estimations with DKL is interesting. Moreover, by using sparse variational inference in DKL, the entire model is just like a DNN with an extra “layer of inducing points”, requiring only a few extra parameters and little additional computational cost, which is also desired. Overall, the paper is well-written. The figures are instructive and helpful for understanding. --- ##### Concerns: My main concern is about the significance of the contributions. Sparse variational inference methods for DKL were previously proposed in Wilson et al. 2016a. The main contribution of this paper seems to be the idea of introducing the spectral normalization regularization to stabilize the training of DKL and avoid uncertainty collapse. Although this is an interesting idea, I think the authors did not provide clear enough explanations and rigorous analyses. On page 3 the authors mentioned “without the spectral regularization on the deep model, the deep network is free to map data points that are far away from the training distribution to a feature representation that’s similar to in distribution data”. This perhaps explains how uncertainty collapse happens on out-of-distribution data to some extent, but is this the primary cause of undesired uncertainties? Since the parameters of the NN become the parameters of the GP prior (as mentioned in Section 2), optimizing the marginal likelihood or the ELBO w.r.t. the variational parameters and the NN parameters is actually “fitting the prior to data”, which could also cause biased uncertainties [1]. Although the bi-Lipschitz property can intuitively alleviate the biases, it is not clearly explained how it works. It would be better to provide more theoretical analysis. In Section 3.1, the authors raised a question about how informative the “distance preserving explanation” is about the motivation of using bi-Lipschitz regularization. However, a more informative explanation is not provided. Also, the last paragraph of Section 3.1 is misleading. The authors mentioned “A complete, rigorous theory … is not remains an open question.” If this has been addressed and there are theoretical insights into the use of spectral normalization, the authors should add the necessary references and explanations.
[1] Salimbeni, Hugh, and Marc Deisenroth. "Doubly stochastic variational inference for deep Gaussian processes." Advances in Neural Information Processing Systems. 2017. Minors: (1) In Table 1, the results of vDUQ and DUQ are outperformed by the ensemble method in terms of both accuracy and AUROC. This seems to be a different conclusion from the results in Amersfoort et al. 2020. It would be better to provide more discussion about this, as it does not seem to be expected. (2) The authors claim in Section 3 that vDUQ can be trained in an end-to-end fashion. However, the inducing points are initialized with a k-means algorithm that needs to look at the training data, which I think is still a (lightweight) form of pre-training. <doc-sep>Variational deterministic uncertainty quantification Summary: The paper proposes a method for out-of-distribution detection by combining deep kernel learning and Gaussian processes. Using neural networks as a kernel for the GP, as well as an inducing point approximation, alleviates the scalability issues of GPs. The idea itself has merit; however, the presentation and experiments are not convincing. Strengths: The idea of using deep kernels within GPs is a good solution that allows benefiting from both the expressiveness of the kernels and the uncertainty estimates of the GP. Additionally, using the uncertainty estimates for causal inference is a nice application. Weaknesses: Although the approach is interesting, it needs to be further developed and evaluated in multiple setups. I find it limiting that it relies on the residual connection, making it unsuitable for other NN architectures, which means it will apply to only a limited number of tasks. The presentation of the method should be better structured. I appreciate the background on deep kernels and how they help to overcome the limits of GPs; however, there is a lack of presentation of the method itself. A description, algorithmic listing or even an equation for the proposed uncertainty score is missing in the current version of the text. In the introduction vDUQ is presented as favorable w.r.t. DUQ due to its rigorous probabilistic interpretation; however, this was never further analyzed in the text. Also, it seems that the method is concerned only with the epistemic uncertainty in the data? In general, the whole presentation of related work and positioning of this paper in the uncertainty literature is not clear. What source of uncertainty does the method address? There is much to be elaborated on this topic and I believe the discussion on this will significantly improve the paper. The discussion on spectral normalization and the bi-Lipschitz property in 3.1: please clarify it or explain it better; in the current writing it contradicts the proposed method: “A complete, rigorous theory of why the spectral normalization as used in this and previous work is a useful regularization scheme is not remains an open question” Experiments: Toy examples: Figure 1 - on regression, I do not find this example motivating: first, why choose noiseless data? Second, why is the vDUQ uncertainty increasing in regions where there is data (such as the peaks)? Why does it compare only to deep ensembles? Figure 2 - Why choose a toy example where a linear classifier works in the original space? What is the sensitivity to the number of inducing points for the GP? An ablation study at least for the toy datasets could help. Why were standard datasets such as MNIST and Fashion-MNIST not included? The empirical evaluation should be extended with more baselines and datasets.
Minor: The manuscript needs proofreading; language errors become increasingly frequent towards the conclusion. ------------- Update after reading the authors' response ------------- I thank the authors for their detailed responses; they have answered most of my concerns and I raise my score to 5. I am still not convinced that the method covers both the aleatoric and epistemic uncertainties, as there is no theoretical or intuitive justification and no discussion/clarification on that part. If this is indeed the case, then additional experiments should be included, for example the standard UCI datasets [1] for a regression task. [1] Hernández-Lobato, J. M. and Adams, R. P. Probabilistic backpropagation for scalable learning of Bayesian neural networks. In ICML, 2015.
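To make the architecture discussed throughout these reviews concrete, here is a minimal sketch of a spectrally normalized residual feature extractor feeding a sparse variational GP, written against PyTorch/GPyTorch as an assumed implementation stack; it is an illustration, not the authors' code. Note that `torch.nn.utils.spectral_norm` enforces a hard Lipschitz bound of 1 and is only a stand-in for the softer spectral constraint described in the paper, and the single-output GP head glosses over the multi-class likelihood needed for classification.

```python
import torch
import gpytorch
from torch import nn
from torch.nn.utils import spectral_norm

class Residual(nn.Module):
    """Residual block with spectral norm on the residual branch (bi-Lipschitz motivation)."""
    def __init__(self, dim):
        super().__init__()
        self.fc = spectral_norm(nn.Linear(dim, dim))
    def forward(self, x):
        return x + torch.relu(self.fc(x))   # skip connection lower-bounds the distance preservation

class SparseGPHead(gpytorch.models.ApproximateGP):
    """Sparse variational GP over extracted features; a handful of inducing points suffice."""
    def __init__(self, inducing_points):            # (M, feature_dim), e.g. from k-means
        var_dist = gpytorch.variational.CholeskyVariationalDistribution(inducing_points.size(0))
        strategy = gpytorch.variational.VariationalStrategy(
            self, inducing_points, var_dist, learn_inducing_locations=True)
        super().__init__(strategy)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())
    def forward(self, z):
        return gpytorch.distributions.MultivariateNormal(self.mean_module(z), self.covar_module(z))

# Training would jointly optimize the extractor and the variational ELBO, e.g.:
#   elbo = gpytorch.mlls.VariationalELBO(likelihood, gp_head, num_data=len(train_set))
#   loss = -elbo(gp_head(feature_extractor(x_batch)), y_batch)
```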
The reviewers all agreed that the paper represents thorough work but is also closely related to existing literature. (All referees point to other, non-overlapping literature, so the authors have entered a crowded field.) The amount of novelty (needed) can always be discussed, but given the referees' unanimous opinion and knowledgeable input, it is better for this work to be rejected for this conference. Using this input could make this a good paper for submission elsewhere.
The authors proposed a new method for retrosynthesis, which does not require atom-mapping numbers or template extraction from the literature. Basically, the model consists of a graph-based encoder and a sequence-based decoder. The encoder consists of local aggregation from neighbors and global attention using a new positional method. The decoder is a Transformer model with relative positional encoding. The method achieved promising results on several retrosynthesis datasets. 1. From Eqn. (1) to Eqn. (5), you choose to use a complex gating mechanism to aggregate information. Is every component necessary? What if a simple GCN or GAT were used instead? 2. I think the model contains more parameters than conventional retrosynthesis models such as the vanilla Transformer or GLN. Could you please compare the number of parameters of the different methods? 3. The authors should provide some real cases to show how the method outperforms previous baselines, and why the method can obtain good results without templates. Missing References: 1. Dual-view Molecule Pre-training, https://arxiv.org/abs/2106.10234; the authors also work on retrosynthesis using Transformer and GNN models. A comparison is necessary. The results in this paper are good, although the method itself is not quite novel. <doc-sep>This paper proposes a graph-to-sequence architecture called Graph2SMILES for retrosynthesis and reaction outcome prediction. Graph2SMILES uses an attention-augmented D-MPNN encoder to capture the local information and a global attention encoder with graph-aware positional embeddings to capture the global information. Experiments show that Graph2SMILES is competitive with Transformer baselines but does not outperform state-of-the-art methods on the tasks of one-step retrosynthesis and reaction outcome prediction. The main strengths of this paper are as follows. 1. This paper proposes Graph2SMILES, which is a graph-to-sequence architecture that does not use sequence representations of input SMILES. Therefore, Graph2SMILES is permutation invariant to the input and does not need data augmentation. 2. Graph2SMILES has a wide range of potential applications because it can serve as a drop-in replacement for the Transformer in many tasks involving molecule(s)-to-molecule(s) transformation. My major concerns are as follows. 1. This paper states that Graph2SMILES achieves state-of-the-art top-1 accuracy on common benchmarks among methods that do not use reaction templates, atom mapping, pretraining, or data augmentation strategies. The authors claim that integrating the above features or techniques with Graph2SMILES could improve the performance. However, they do not conduct experiments to demonstrate their claim. Besides, as the aforementioned techniques are commonly seen in predictive chemistry tasks, the authors may want to explain why they do not equip Graph2SMILES with these techniques. 2. D-GAT is a variant of D-GCN with attention-based message updates. However, D-GAT does not outperform D-GCN in terms of the top-1 accuracy, which is the basis for comparison throughout the discussion in this paper. According to Table 1, D-GAT has a small advantage over D-GCN only in terms of the top-5 and top-10 accuracies in reaction outcome prediction. 3. Graph2SMILES involves calculating pairwise shortest-path lengths between atoms, which can be computationally prohibitive. The authors may want to compare Graph2SMILES against baselines in terms of computational complexity.
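To make the complexity concern in point 3 concrete, here is a small sketch (my own illustration, not the authors' code) of the all-pairs shortest-path lengths that a graph-aware positional embedding needs; with a BFS from every atom this costs roughly O(N*(N+E)) per molecule:

```python
from collections import deque

def all_pairs_shortest_paths(adjacency):
    """BFS from every atom; adjacency maps node -> list of neighbouring nodes."""
    n = len(adjacency)
    dist = [[-1] * n for _ in range(n)]  # -1 marks unreachable pairs (e.g. atoms in different molecules)
    for src in range(n):
        dist[src][src] = 0
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adjacency[u]:
                if dist[src][v] == -1:
                    dist[src][v] = dist[src][u] + 1
                    queue.append(v)
    return dist

# Toy 5-atom chain: 0-1-2-3-4
adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(all_pairs_shortest_paths(adjacency))
```

For molecules with tens of atoms this is cheap, so reporting actual training/preprocessing time against the baselines would settle whether this step is a bottleneck in practice.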
This paper studies two important problems in the computer-aided organic chemistry and proposes a graph-to-sequence architecture called Graph2SMILES. However, the empirical results do not show a superior performance of Graph2SMILES to existing methods, and the technical contribution is incremental. <doc-sep>This paper proposes a graph-to-SMILES framework, which incorporates several recently developed engineering techniques from the community, for synthesis planning and reaction outcome prediction tasks. The proposed method leverages graph neural networks and Transformer attention model to encode the graph inputs and then utilizes a Transformer decoder to generate the SMILES string as outputs. Experiments on benchmark retrosynthesis and reaction prediction tasks show that the proposed approach outperformed the vanilla SMILES-to-SMILES transformer baseline, but obtained inferior results than some other advanced methods. The paper is interesting, but both the technical novelty and the experimental studies are weak to me. The proposed framework integrates several recently developed engineering techniques and empirically shows its superior performance over vanilla SMILES-to-SMILES transformer baseline. This paper provides another comparison baseline for research on retrosynthesis and reaction prediction. Nevertheless, I have the following concerns regarding the paper. 1. The proposed framework is similar to the NERF approach (Bi et al. ICML 2021) as cited by the authors. NERF formulates the reaction prediction problem as a graph-to-graph translation problem. Also, NERF first leverages graph neural networks to capture the local information in individual molecules and then utilizes a Transformer encoder to further models the intermolecular interactions between nodes from multiple molecules. Furthermore, NERF uses a Transformer decoder to decode the output as graph. These are almost the same as that in the method proposed in the paper. The only different to me is that the NERF uses a Transformer to decode the output into graph directly (in a non-autoregressive fashion), while the proposed method here uses the Transformer to decode the output into SMILES strings (in an autoregressive fashion). In this sense, the novelty of this paper is limited to me. Note: I think NERF can naturally apply to two or more molecules since the Transformer encoder is used by considering all node embeddings from multiple molecules as a node set. 2. Experimentally, the proposed method is not directly compared with NERF. I think such comparison is necessary since as shown in the paper, NERF outperforms the SMILES-to-SMILES transformer baseline, and even the augmented version of it, to which the proposed method here obtained inferior performance. The two methods are similar and closely related. I would expect the paper to include NERF into the main results in Table 1. Also, for the USPTO_STEREO_mixed task, I wonder what the reason was for only comparing with the vanilla Transformer, and why not comparing with the Augmented Transformer or the state-of-the-art method Chemformer? 3. Results in Table 1 show that, the proposed method is inferior to the Transformer baseline with simple augmentation. Also, 1% less than the tested method Chemformer, which makes the paper’s contribution less significant to me. 4. The claim in the Abstract “molecular graph encoders that mitigates the need for input data augmentation” is a strong claim to me. Nevertheless, there is no evidence to support that claim. 
The slightly better performance over the SMILES-to-SMILES transformer baseline is not a convincing evidence to me. Input data augmentation may play a significant role on regularizing the deep neural networks. I think better justification to support the claim is necessary. 5. The statement in the last sentence of the first paragraph on Page2: “[SMILES augmentation]…be interpreted as evidence of the ineffectiveness of the SMILES representation itself.” I think this hypothesis may need better support and analysis. To me, the augmentation of SMILES strings can act as a model regularization method, which helps the trained model to generalize well to unseen data, and may not directly infer the ineffectiveness of the SMILES representation itself. 6. I am not fully understand the claim in the second paragraph of Page2 “… we guarantee the permutation invariance of Graph2SMILES to the input, eliminating the need for input-side augmentation altogether.” I think it would be useful to specify how and why so. 7. The proposed method integrates several performance engineering techniques (such as attention weights and multi-headed attention in the graph encoder, integration of shortest path length in the positional embeddings etc.), so where the improvement is really coming from is not clear to me. In the ablation study in Table4, both the positional embedding and global attention are key to the Transformer’s performance, so the performance degradation is expected when remove them: Transformer expects a positional embedding to work, and without a global attention the encoder will not be able to capture information from multiple molecules (their graphs are disconnected). The proposed method is similar to NERF as proposed by Bi et al., so the technical novelty is limited. Also, the experiment study missed important comparison baselines. Furthermore, a more comprehensive ablation study is needed since several engineering techniques are employed, and it is difficult to tell where the performance improvement is really coming from when compared to the Transformer vanilla model. <doc-sep>The paper proposes a GNN-based extension of transformers, which have been shown to be effective for reaction prediction etc. before. In particular, the GNN-based embedding of molecules in the reaction embeddings overcomes the artificial bias inherent in the often applied sequence embeddings. The experiments show that there are sometimes increases of performance in reaction and retrosynthesis prediction - and the approach could be applied to similar problems. (+) Using a sequence-independent encoding in the easy-to-use transformer makes sense and is a research question which is interesting for the - I think mostly, AI in chemistry - community. (-) - As far as I can see, the technical novelty is limited. The proposal is a combination of rather well-known methods. D-MPNN (where attention is added) and re-parameterized, relative positional encodings (using 0 to represent atoms in different molecules) in transformers - It is unclear if the proposed attention in the GNN is useful: Since we only see "For reaction outcome prediction, there is a small advantage of using D-GAT over D-GCN", and the ablation results are missing. - The results in Table 1 are only convincing for USPTO_STEREO_mixed. - The experimental comparison for retrosynthesis compares to methods which use different forms of pretraining or augmentation, which makes sense. It shows that the augmentation still provides advantages. 
And, as far as I understand, the graph-based molecule embedding entails that Smiles augmentation would not improve your results further? Also, it would make sense to include the ablation results for Graph2Smiles just using the transformer into Table 2 directly. ------------------------------- Other Comments - It is unclear to me what "Our hyperparameters for D-GAT and D-GCN are adapted from GraphRetro" means. - In (8), s_uv should be AttnSum(...) - How exactly is \\mathcal{B}_u,v used in the learnable \\tilde{r}_u,v? - Table 4: Since the paper proposes the attention-based GNN, the ablation should be provided for that model. - Table 4: What is "no global attention encoder"? Just the combination of GNN embeddings w/o transformer? Since transformer is the baseline, I would not consider this as an ablation setting. - Table 4: How do the results look on the other tasks? For retrosynthesis, all open existing systems that yield full retrosynthesis trees which I know use top50 (or similar), so the fact that the base model is better at top10 (already) renders the analysis questionable. - "We include part of our code to reproduce some specific results" - Why not all? Overall, I think the authors' proposed model for reaction prediction makes sense. However, as mentioned above, the paper's writing could be improved, the technical contribution is limited, and the experiments also show only limited improvements. Altogether, I therefore suggest to reject the paper at the current moment. I am happy to adjust my score in case I missed critical parts.
While the reviewers appreciated the method's ability to replace transformer models and SMILES data augmentation, their main concerns were with (a) the experimental section and (b) the technical innovation over prior work, which updated drafts of the paper did not fully resolve. Specifically, for (a), this work performs very similarly to prior work: for reaction outcome prediction the proposed method improves top-1/3/5 for USPTO_STEREO_mixed but is outperformed by prior work for top-1/5/10 for USPTO_460k_mixed; for retrosynthesis the model is outperformed for USPTO_full and only outperforms prior work that does not use templates/atom-mapping/augmentation for top-1 on USPTO_50k. The authors argue that their method should be preferred because it does not require templates, atom-mapping, or data augmentation. The reviewers agree that template-free and atom-mapping-free methods are more widely applicable. However, the benefits of being augmentation-free are not convincingly stated by the authors, who only state that their approach is beneficial by "simplifying data preprocessing and potentially saving training time." The authors should have empirically verified this claim by reporting training time, because it is not obvious that their model, which requires pairwise shortest-path lengths, is actually faster to train. For (b), the reviewers believed that the paper lacked technical novelty given recent work (e.g., NERF). The authors should more clearly distinguish this work from past work (e.g., graphical depictions and a finer categorization of past work may help with this). Given the similar performance to prior work, the lack of evidence to support the training-time claims, and the limited technical novelty, I believe this work should be rejected at this time. Once these things are clarified, this paper will be improved.
This paper proposes to incorporate the proprioceptive and visual information together for quadrupedal locomotion. The authors introduce a new model architecture named LocoTransformer that consists of separate modality encoders for proprioceptive and visual inputs, the output of which is fed through a shared Transformer encoder to predict actions and values. Through experiments, the authors demonstrate that the robot, with the help of both the proprioceptive and visual inputs, can walk through different sizes of obstacles and even moving obstacles. They have also transferred the learned policy from simulation to a real robot by running it indoors and in the wild with unseen obstacles and terrain. [Strength] This paper tackles an important question of how to incorporate visual information in learning policies for quadrupedal locomotion, where most existing learning-based control of quadruped robots in the published works only considered proprioceptive information, and the robots are essentially "blind." The use of visual information can allow the robots to be less conservative and plan their actions for a longer time horizon, as has been evident from the authors' comparison with a state-only baseline that only considers proprioceptive inputs. This paper has extensive experiments and in-depth analysis in simulation, which provides a good reference for the readers to understand the benefits and limitations of different design choices. The real-world demo from sim-to-real transfer also provides concrete empirical evidence on the practical use of the proposed method. [Weakness] While I like the direction this paper is going, I'm not entirely convinced that the real-world experiments in the paper fully demonstrate the necessity of visual information. For example, in [1, 2], the authors have shown working demos on terrains seemingly much more challenging than this paper. [1, 2] also showed examples of stair climbing, a task where vision is supposed to be extremely helpful: a blind robot may have to make a few failed trials before it knows the height of a stair. You could also imagine the benefit of vision in cases that require more precise footstep planning (e.g., https://youtu.be/k7s1sr4JdlI?t=176). The paper will be much stronger by including some more concrete comparisons with the current state-of-the-art learning approaches on what can be made possible via vision while previous blinds robots struggle. While I agree that vision is important for robots to make long-term plans and autonomously traverse around obstacles, I'm not sure whether this paper's approach is better than more classic robotic pipelines. For example, instead of treating the depth image as a 2D grid and processing it using CNN, one could use the depth camera to build a 3D map of the surrounding environment and blend in the explicit notion of what's traversable and what's not. You can then plan the trajectory based on the perception results. This seems to be how Boston Dynamics' Spot uses the visual information (https://www.youtube.com/watch?v=Ve9kWX_KXus) and has shown great generalization ability in real-world scenarios — showing examples of how this paper's way of using visual information is better than classic pipelines may be essential to claim improvements. 
Continuing my previous point, it would be better if the authors could include more discussion on the current state of quadrupedal locomotion in both academia and industry, where Boston Dynamics' Spot is seemingly better in terms of generalization and robustness than any of the reinforcement-learning-based approaches. The authors may also shed light on the scenarios in which we should choose RL-trained robots over Spot. How does the method work in the real world if there are moving obstacles, e.g., humans and other animals? How well does the method work compared with the built-in controller of the robot? The paper may need a few passes of proofreading, as the current manuscript includes a lot of typos, just to name a few: Section 1, contribution bullet points: We the propose LocoTransformer, ... Section 3: a MDP --> an MDP Section 6: The visual inputs also inputs the locomotion ... [1] Joonho Lee, Jemin Hwangbo, Lorenz Wellhausen, Vladlen Koltun, Marco Hutter, "Learning Quadrupedal Locomotion over Challenging Terrain" [2] Ashish Kumar, Zipeng Fu, Deepak Pathak, Jitendra Malik, "RMA: Rapid Motor Adaptation for Legged Robots" ===================== [Post Rebuttal] I thank the authors for the detailed feedback and additional experiments, which addressed most of my concerns. Great job! I have also read the reviews from other reviewers and decided to raise my score to 8: accept, good paper. I like the direction this paper is going: combining visual and proprioceptive information to train RL agents for quadrupedal locomotion. I also love that the authors include real-world demos of the learned policy on a physical robot. However, my main concern is that, although there are extensive evaluations in simulation, the current set of real-world examples may not be sufficient to show the benefit of visual inputs. Examples like climbing stairs or scenarios that require more precise footstep planning would make the paper much stronger. I'm generally excited about the progress in this direction, thus I'm currently leaning towards the acceptance side, but I hope the authors can address the issues mentioned above. <doc-sep>In this paper, the authors proposed a transformer-based architecture that combines both visual (depth) and proprioceptive inputs (i.e., IMU and joint angles) to solve visual locomotion tasks. The authors demonstrated that their approach can solve challenging visual navigation tasks and locomotion tasks on uneven terrain. The proposed method outperforms proprioceptive-only, visual-only, and HRL baselines. The sim-trained policy has been demonstrated on the real A1 hardware. The main strengths of the paper: (1) Proposed a novel transformer-based architecture that can train visual-locomotion policies end-to-end, and demonstrated good navigation/obstacle-avoidance/uneven-terrain walking results in simulation. (2) Zero-shot real-world transfer to an A1 robot, demonstrating walking + navigation behavior in various environments. The main weakness of the paper: Not enough baselines to compare with. As the authors cited, there are many approaches to tackle the visual locomotion + navigation problem besides end-to-end training. For example, in the hierarchical approach one can combine learned/optimization-based navigation with pre-trained or hand-tuned walking (i.e., MPC) motions. So in total even the hierarchical approach can have four different combinations to compare with. Yet I saw none of them here.
I would say the authors should include at least one or two such baselines to compare with, and document the performances and cons and pros. The authors proposed to use transformer architecture to solve visual locomotion + navigation tasks. The proposed approach is compared with a few end-to-end trained baselines including HRL and has demonstrated advantages. The authors also deployed the trained policy successfully to the real robot. <doc-sep>This work proposes a novel architecture for quadrupedal locomotion that fuses proprioceptive and visual information with a transformer-based model to enable an agent to proactively maneuver environments with obstacles and uneven terrain by anticipating changes in the environment many steps ahead. The method is extensively evaluated in simulation and on a sim to real transfer tasks. The method is shown to both achieve higher reward but also better capacity to generalise in the context of sim to real. Overall, the paper is well written and the provided evaluation is conducted fairly and well. ## Strengths and Weaknesses ### Things I liked about this paper - **a powerful framework**: fusing visual and proprioceptive data for quadrupedal locomotion using transformer architectures is an interesting and also valuable approach that works well and sets an excellent opportunity for future work. - **a well written paper**: overall the paper is well written and all sections are broadly very clearly described. - **useful insight**: I like the provided key insight that proprioceptive states offer contact measurements for immediate reaction while visual sensory observations can help with longer-term planning. ### Things that can be improved - **number of seeds is not great**: The adopted model-free approach is known to have very unstable learning process of the dynamics function which ideally requires 10 or more seeds to provide a solid results. Using only 5 seeds is not great. More details below. - **prose is not perfect**: There are some minor details and clarifications that may help further improve clarity. Using 5 random seeds for a model free approach is rather small as a number. Ideally, the evaluation should be done on 10 or more seeds. In fact, I suspect that some of the results such as the moving obstacles from Table 3, would change if the approach was evaluated on more runs. Nevertheless, the provided training curves seem to have fairly small variance as illustrated in Figure 4 which makes me more inclined to agree that 5 seeds are sufficient to report on accurate results. In addition, I would expect that the variation in the learnt dynamics to primarily affect the performance of the learnt agent on the physical quadruped system. However, this does not seem to be the case in the reported results, which is great as long as all 5 seeds were used to extract those results. It is great that the paper considers 15 runs per seed but I wonder if the results were acquired through cherry picking best n seeds. This is a detail that is not currently mentioned in the paper but would certainly improve clarity if it did. There are a few additional minor comments. Currently, the distance measurement reported in meters is mentioned only in the text and not in the tables. Stating this there too would make it much clearer. Similarly, what exactly does the collision happened represent. Are these total number of collisions over 1000 steps? 
'the number of time steps where collision happens between the robot and obstacles over the course of an episode' states the explanation seems a bit overly complicated. Why not just 'the number of collisions with obstacles per 1k step long episode' or something along those lines? There is a typo in the contribution 'we the propose' should be 'we propose'. Another typo is '... whereas it for our method either plateaus' should be 'whereas for our method it either plateaus' Overall this paper is written well and has a sound idea that is supported well by an extensive evaluation. There are some minor details that may further improve the quality of the paper but I see this as a strong submission which I can recommend for acceptance. <doc-sep>This paper proposes an approach to legged locomotion which leverages a Transformer-based model and is trained via end-to-end reinforcement learning. It provides extensive experimental evaluation of the approach in terms of performance and safety metrics, both in simulation and using real-world experiments. The code is expected to be open-sourced. This paper is very clear in its exposition, providing a detailed diagram of the method, clearly labeled inputs and outputs, and relevant implementation details. The experiments appear sound, with a number of baselines and ablations provided. I want to emphasize the value of real-world experiments in this space, as the 'sim-to-real' gap can be significant and invalidate otherwise good looking results. Scientifically, the paper proposes an architecture that is novel and can serve as a broader proof point that vision-based locomotion can be competitive and robust when trained with a sufficiently expressive model. Strengths: + The paper tackles an important research problem: vision-guided locomotion and has made significant progress along this direction. + Novel network architectures (such as Transformers) are under-explored in the legged robots community. This paper demonstrates that incorporating such architecture indeed makes a difference in performance. + The proposed method is validated on a real robot. The evaluations are comprehensive and the conclusions are convincing. Weakness: - The technical novelty is lean. Neither of the key components of the paper are novel: RL for locomotion and Transformers. Although this can be considered as a weakness, it is not a deal breaker given that the combination of these two and the application to legged robots are novel and potentially influential. - The results are mostly obstacle avoidance on the flat ground, where legs are not essential. Most of the experiments can be done with a wheeled robot. To show the true value of this paper, more challenging terrain needs to be considered and tested on, such as stairs, stepstones, tall grasses, rocks, etc. In these terrains, both vision and legs are critical. From the accompanying video, it is a bit disappointing that the robot only learns steering for obstacle avoidance, but does not learn foot clearance or foothold location on different types of terrains. The robot always drags the foot, even on pebbles (5:25s in the video) where higher foot clearance is clearly a preferred choice. Since this paper trains end-to-end, I would expect that these behaviors would emerge automatically if trained in relevant environments. Showing these behaviors (foot clearance, foothold location, change of gait pattern) in addition to obstacle avoidance, would significantly strengthen the paper. 
Additional questions: 1) In addition to domain randomization, does the paper apply other techniques for sim-to-real transfer? For example, I would imagine that there will be a large sim-to-real gap in vision. The depth images from Intel realsense can be noisy and with holes, especially in outdoor environments. Do these sim-to-real gaps in vision cause any problems when deploying the policy on the robot? 2) How much tuning is needed to learn natural (deployable) locomotion gaits? In the video, the learned locomotion gait is quite reasonable and deployable on the robot. The paper explicitly mentioned that it did not use trajectory generators. Does it purely rely on reward shaping? If so, how much tuning is needed? And what are the most important terms in the reward function that encourage the emergence of natural gaits? Good paper on relevance and experimental evidence, on a topic that is very much of interest to the robot learning community today. Novelty limited due to combination of known techniques. EDIT: bumped confidence to a 5 based on comments and rebuttal. This is a solid contribution.
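For readers who want a concrete picture of the two-stream fusion these reviews describe, here is a minimal sketch in PyTorch (module names, sizes, and token counts are my own hypothetical choices, not the authors' implementation):

```python
import torch
import torch.nn as nn

class LocoFusion(nn.Module):
    """Separate modality encoders followed by a shared Transformer encoder (sketch only)."""
    def __init__(self, proprio_dim=48, d_model=128):
        super().__init__()
        self.proprio_enc = nn.Sequential(nn.Linear(proprio_dim, d_model), nn.ReLU(),
                                         nn.Linear(d_model, d_model))
        # Depth image -> a small grid of visual tokens.
        self.visual_enc = nn.Sequential(nn.Conv2d(1, 32, 4, stride=2), nn.ReLU(),
                                        nn.Conv2d(32, d_model, 4, stride=2), nn.ReLU(),
                                        nn.AdaptiveAvgPool2d(4))   # 4x4 = 16 visual tokens
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=2)
        self.policy_head = nn.Linear(d_model, 12)  # e.g. 12 joint targets (hypothetical)
        self.value_head = nn.Linear(d_model, 1)

    def forward(self, proprio, depth):
        p_tok = self.proprio_enc(proprio).unsqueeze(1)          # (B, 1, d)
        v = self.visual_enc(depth)                              # (B, d, 4, 4)
        v_tok = v.flatten(2).transpose(1, 2)                    # (B, 16, d)
        fused = self.fusion(torch.cat([p_tok, v_tok], dim=1))   # shared attention over both modalities
        summary = fused.mean(dim=1)
        return self.policy_head(summary), self.value_head(summary)

model = LocoFusion()
actions, value = model(torch.randn(2, 48), torch.randn(2, 1, 64, 64))
print(actions.shape, value.shape)
```

The design point the reviews emphasize is that the shared attention lets the immediate proprioceptive token and the longer-horizon visual tokens interact, rather than concatenating a single flattened feature vector.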
The paper addresses vision-based and proprioception-based policies for learning quadrupedal locomotion, using simulation and real-robot experiments with the A1 robot dog. The reviewers agree on the significance of the algorithmic, simulation, and real-world results. Given that there are also real-robot evaluations, and an interesting sim-to-real transfer, the paper appears to be an important acceptance to ICLR.
Summary: This paper proposes an unsupervised graph-level representation learning method considering global-local disentanglement. Specifically, the authors propose a GL-Disen model based on graph VAE architecture to jointly learn global and local representations for a graph. The global information is shared across the whole graph while the local information varies from patch to patch, corresponding to common and local factors, respectively. Empirical experimental results show that the learned representation achieves superior performance in the downstream graph classification task, and analyses demonstrate the learned representations exhibit some disentangle property. Pros: 1. Unsupervised graph representation learning considering global and local disentanglement seems to be a novel problem. 2. The proposed method generalizes disentangled VAE into graph data to disentangle common factors from the local ones. The formulations and model descriptions are clear. 3. Experiments, including both qualitative analysis and quantitative results, demonstrate the effectiveness of the learned global factors in downstream tasks. Cons and questions: My major concern lies in the insufficiency of experiments. Specifically: 1. The disentanglement part is modified from Beta-VAE. Since normal VAE is adopted in graphs (e.g., Variational Graph Auto-Encoders by Kipf and Welling), the authors need to compare these methods to demonstrate the improvement is actually from the disentanglement part rather than the VAE structure. 2. Although the authors demonstrate the effectiveness of disentanglement in downstream tasks (i.e., graph classification), it is unclear whether these global factors have intuitive explanations on some of the datasets, e.g., the showcases of molecular graphs in Duvenaud et al., 2015, or the authors may adopt some synthetic datasets. 3. Since both the global and local node representations are disentangled, I am curious whether the local node representations can also be validated in some downstream node-level tasks. 4. Figure 2 in Section 4.2.1 is not entirely convincing since there is no reference line of how much correlation a non-disentangled method will have (e.g., in Ma et al., 2019, the authors compare the disentangled method with GCN). Other questions: 5. How the proposed method can handle the mode collapse problem, i.e., only a few latent factors learn useful information? 6. As shown in Table 1, though the proposed method outperforms other GNNs, it does not always compare favorably to kernel-based methods such as GCKN. The authors may want to further elaborate on the pros and cons of using GNN vs. kernel-based methods. 7. There lacks a discussion on the complexity of the proposed method. 8. The technical contribution is somewhat limited since both Beta-VAE and graph VAE are known in the literature. It would be more interesting if the authors can integrate local-global disentanglement with local neighborhood disentanglement in Ma et al. 2019 to derive a more novel architecture. I will be happy to improve my scores if authors can address the above questions. ========= I have updated my score considering the paper has improved its quality after the revision (adding more experiments/baselines, comparison with the literature, etc.). ========= New updates: following the new comments of Reviewer 4, I also briefly check the code in the supplementary material and find it indeed seems to have the consistency problem (i.e., not reconstructing graph edges as mentioned in the paper). 
Thus, I am also wondering how the authors implemented Graph-VAE in the rebuttal phase and whether the improvement of their proposed method over Graph-VAE is really from disentanglement or from the differences in the autoencoder. Based on this potentially serious problem, I reinstate my original score and think the paper should be clarified before acceptance.<doc-sep>The authors propose a VAE-type generative model approach to characterize the hidden factors, with a divided focus on the global and local reconstructions. The claim is that the learnt hidden representations are disentangled (which is not defined clearly) using two reconstruction terms. The setting of the problem adopts the graph VAE setting in [1,2] (which I think the authors should mention in the related work), and the ELBO & local aggregation (convolution) approaches used in this paper are relatively standard in the generative modelling and graph representation learning domain. Apart from the limited novelty, which would not have affected my evaluation if it solved the problem as claimed, I have several major concerns about this paper: 1. The notion of disentanglement is not well-defined in the first place. In the VAE setting where the hidden factors are stochastic, does disentanglement refer to independence? Or are they orthogonal under a specific measure induced by the graph itself? The claims made by the authors can never be examined rigorously (the visual results do not constitute supportive evidence, as I shall discuss later). 2. There is no guarantee that the so-called global and local factors are not confounded. Both the global and local reconstruction terms involve the two types of factors. Given the high expressivity of deep learning models, the local factors can easily manage both tasks, or the global factors are merely enhancing the signals of the local factors. There is no mechanism to prevent the cross-terms during the optimization, so the learning processes of the global and local factors become confounded as a result of how the authors design the objective function. 3. Unclear interpretation of the visual results. It seems that the visual results showcase a similar pattern among the local and global factors, apart from the difference that the signal is stronger for the local factors (which is evident as they play a more critical role in the objective). In the absence of a clear definition of disentanglement, more persuasive numerical results and interpretations are needed. [1] Kipf, T. N., and Welling, M. Variational graph auto-encoders. arXiv preprint arXiv:1611.07308, 2016. [2] Xu, Da, et al. "Generative graph convolutional network for growing graphs." ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019. <doc-sep>I think the idea of the paper is interesting. The writing is good and easy to read. However, it does not meet the conditions for acceptance from my point of view. I have some concerns with its characterization of the literature. - Some important related work is missing. It seems the authors ignore some of the literature on unsupervised graph representation learning, such as [1], [2], etc. Also, they do not make a performance comparison with the methods above in the experiments. [1] Contrastive Multi-View Representation Learning on Graphs. ICML 2020 [2] Self-supervised Training of Graph Convolutional Networks. Arxiv 2020 - Disentangling the global and local generative factors for graph representation learning is important.
However, the authors didn't explain the definition of "Global" and "Local" factors clearly. It would also be better if they could show an example of global/local factors when generating a graph. - The experiments are missing. I have some concerns as follows. What is the best number of generative factors, which is important for this method? Can this method suffer from mode collapse, and how can it be detected or prevented? How can this method prove that each factor is necessary for the generative process? What is the real meaning of each factor? How about the time/space complexity of this method? More experiments or discussions should be conducted to answer these questions. Based on the above reasons, this paper has much room for improvement. <doc-sep>In this paper, the authors proposed to disentangle the global-level information from the local-level one to reduce the effect of irrelevant information. The proposed method outperforms several state-of-the-art methods on multiple datasets for graph classification. Overall, I like the idea of applying unsupervised disentangled learning to graph-level representation learning. Some concerns are on the experimental study and missing references. Strong Points: 1. Disentanglement learning is a cutting-edge field and has gained much attention in recent years. It is true that global and local features often entangle together when we learn graph representations. The problem is real and important. 2. The architecture of the model is easy to understand and reasonable. 3. The experimental study is comprehensive, including both qualitative analysis and quantitative analysis. The experimental setup instructions and pseudo-code are very clear, making the algorithm easy to reproduce. Weak Points: 1. Performing experiments only on graph classification tasks weakens the significance of the paper. It is common for graph representation learning methods to be tested on other tasks, such as graph similarity/distance computation and graph-level clustering, in order to draw a general and convincing conclusion. 2. Some important references are missing. The authors should discuss and compare with them. On graph-level representation learning: - Bai et al. Unsupervised Inductive Graph-Level Representation Learning via Graph-Graph Proximity. IJCAI 2019. On disentangled representation learning: - Yang et al. Factorizable Graph Convolutional Networks. NIPS 2020. - Guo et al. Interpretable Deep Graph Generation with Node-edge Co-disentanglement. KDD 2020. 3. The paper mentioned that the global and local latent generative factors are sampled from their respective posterior distributions. More details are expected. <doc-sep>In this paper, the authors proposed a disentanglement-learning-based approach for unsupervised graph-level representation learning. They assume that disentangled representations which capture these global and local generative factors in independent latent units can be highly beneficial for graph-level tasks. The extensive experiments and analysis show that the method achieves state-of-the-art performance on the task of unsupervised graph representation learning. =========== Strengths: 1. The paper is well written, and the disentangled factors can benefit unsupervised graph representation learning. 2. The performance of this work is good compared with the state-of-the-art baselines. The source code is also available. 3. The related work is sufficient to understand the motivation of this work. ===== Weakness: 1. The idea is not very novel.
For example, two important assumptions: 1) a global and a local factor for graph analysis, and 2) local latent factors are independent. These two assumptions have actually been explored in unsupervised learning tasks. For example, the following paper [1] disentangles exactly the local and global information into two separate sets of latent variables within the VAE framework. It seems that migrating this idea to graphs is straightforward. The paper is more like a mixture of [1], (Higgins et al., 2017), and GCN. [1] Charakorn, Rujikorn, et al. "An Explicit Local and Global Representation Disentanglement Framework with Applications in Deep Clustering and Unsupervised Object Detection." arXiv preprint arXiv:2001.08957 (2020). 2. In Figure 4, it seems that GL-Disen (global) has very good accuracy. The GL-Disen global-local combination only outperforms GL-Disen (global) within a very small range of \\lambda, and with large fluctuation. Does that mean the local factor contributes little to the overall performance? In conclusion, the authors propose a VAE-based learning algorithm to disentangle the global graph-level information. The overall presentation is good. Similar ideas have been explored in unsupervised learning. The novelty of this work is thus not very impressive.
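To make the global/local discussion in these reviews concrete, here is a toy sketch of a VAE-style objective with one graph-level latent shared across the graph and per-node local latents (my own illustration under a beta-VAE-style penalty; the mean-pooling choice, the encoder shapes, and the absence of edge reconstruction are assumptions, not the authors' model):

```python
import torch
import torch.nn as nn

class GlobalLocalEncoder(nn.Module):
    """Toy encoder: per-node local latents plus one graph-level global latent (mean-pooled)."""
    def __init__(self, in_dim=8, z_dim=4):
        super().__init__()
        self.node_mlp = nn.Linear(in_dim, 2 * z_dim)    # local mu, logvar per node
        self.graph_mlp = nn.Linear(in_dim, 2 * z_dim)   # global mu, logvar from pooled node features

    def forward(self, x):                               # x: (num_nodes, in_dim), a single graph
        mu_l, lv_l = self.node_mlp(x).chunk(2, dim=-1)
        mu_g, lv_g = self.graph_mlp(x.mean(0, keepdim=True)).chunk(2, dim=-1)
        return (mu_l, lv_l), (mu_g, lv_g)

def kl(mu, logvar):
    return -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())

def elbo(recon_loss, local_stats, global_stats, beta=4.0):
    # beta-VAE style objective: reconstruction + beta-weighted KL on both latent groups.
    return recon_loss + beta * (kl(*local_stats) + kl(*global_stats))

x = torch.randn(10, 8)                                  # 10 nodes
local_s, global_s = GlobalLocalEncoder()(x)
print(elbo(torch.tensor(1.0), local_s, global_s))
```

Written out this way, the question raised above becomes visible: nothing in the objective itself prevents the local latents from absorbing the graph-level information, which is exactly the confounding concern.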
In this paper, the authors designed a disentanglement mechanism for the global and local information of graphs and proposed a graph representation method based on it. I agree with the authors that 1) considering the global and local information of graphs jointly is reasonable and helpful (as shown in the experiments) and 2) disentanglement is different from independence. However, the concerns of the reviewers are reasonable --- Eq. (2) and the paragraph before it indeed show that the authors treat the global and the local information independently. Moreover, the disentanglement of the global information (the whole graph) and the local information (the patch/sub-graph) is not well-defined. In my opinion, for the MNIST digits, the angle and the thickness (or something else) of strokes can be disentangled (not independent) factors that influence different properties of the data. In this work, if my understanding is correct, the global and the local factors just provide different views to analyze the same graphs, and the proposed method actually designs a new way to leverage multi-view information. It is not clear whether the views are disentangled and whether the improvements come from "disentanglement". If the authors can provide an example that explains their "disentanglement" as simply as the MNIST case does, this work will be more convincing. Otherwise, this work suffers from the risk of overclaiming.
The paper proposes a joint approach to learning embeddings for KG entities from both descriptions and KG structure, for the task of open-world knowledge graph completion. The main idea is separate modules for obtaining structural embeddings (of entities and relations in a KG) and description embeddings (of descriptions of possibly new entities), and a description projection module to project the description to a space where the structure and description embeddings are aligned (KG embeddings). (+) The approach was evaluated on three different datasets, showcasing the robustness of the approach. (+) Comprehensive analysis of the proposed approach compared with several prior works on the topic. (+) The differences from the considered baselines are well explained. (-) Some design choices of the approach need justification (see Q1). Questions: 1. What is the disadvantage of using a transformer-based model to represent an entity description when the description is short (ten or fewer words)? Why is there a need to use two different encoders depending on the description length? 2. In Section 6.2, four tasks were introduced, but the last one was presented without any elaboration, unlike the other three. Why?<doc-sep>The authors propose a new method for computing Knowledge Graph Embeddings on KGs where textual descriptions of entities are available. They evaluate the effectiveness of their method on a number of downstream tasks, including open-world KG completion, closed-world KG completion and entity classification. They show that their method (FOlK) outperforms baselines across all tasks in key metrics. The authors also perform several ablations to show the effectiveness of different parts of their model. They also highlight areas in which their model is better than baselines such as OWE, and provide intuition for it. However, there are several shortcomings in the way the paper is written and organized. Some of these are: 1. 'Cross domain alignment' (Sec 4.3) is hard to understand. The authors say they 'treat an entity’s immediate graph neighbourhood as its context and use skipgram objective to maximize the probability of the entity given the neighbourhood entities'. However, this is nowhere reflected in the loss functions used. The loss function $\\mathscr{L}$, which is used in Eqns. 2 and 3, has not been defined in the main text either, which makes things even more confusing. $\\mathscr{L_{proj}}$ is not described anywhere. The training algorithm should be part of the main text, since it is a vital differentiator from OWE. 2. The DSA score is hard to understand; the given description is not enough to be able to reproduce it. 3. 'FOlK is the first framework to use a transformer-based encoder to embed open-world entities': KG-BERT (2019) uses BERT for encoding closed-world entities. Extending it to open-world entities is a straightforward solution and not a novelty. 4. 'the score for any triple must be of the order of the embedding dimensions' (Section 1, point 3, 'Efficient Ranking'). What does this mean? 5. What is the difference between FOlK(s) and FOlK(l)? It is not specified. 6. A lot of the content that is being referred to in the main text lies in the supplementary material. The authors should move at least the most relevant material (such as the training algorithm and loss functions) to the main text. It seems that the paper has been written in a hurry and is lacking a sufficient description of the method, given that the main contribution is the proposed new method FOlK. A better method section and reorganizing are needed.
<doc-sep>This paper presents an approach to learning representations of (open-world) entities in a KG with given textual descriptions of open-world entities. This paper describes a technique that jointly learns embeddings for KG entities from descriptions and KG structure for open-world knowledge graph completion. The technique is experimentally validated on the YAGO3-10-Open and WN18RR-Open datasets, and it beats previous open-world entity representation learning methods. I think this paper is a decent, focused contribution. However, I think it can be significantly improved in its presentation quality: 1. The drawbacks of existing approaches are not clearly described, and thus it is hard to understand the contributions of this work. 2. The method description is not complete. For example, L_proj is not described. It is hard to understand the approach. 3. The description of the DSA score is also fuzzy. It is not clear how I would implement this with just this description. The method is taken from BiSkip, but it is hard to understand how the application is different here. Statistical significance numbers would help in Table 4. I also have the following questions: 1. Why do we need a two-phase learning approach? How does it compare to a one-phase learning model? 2. How do you find the optimal value of alpha?
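For concreteness, here is a toy sketch of the description-projection-plus-scoring pattern as I understand it from these reviews (all matrices are random placeholders, and the TransE-style scorer is my assumption rather than necessarily FOlK's actual score function); it also illustrates why ranking stays of the order of the embedding dimension per candidate:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 50

# Pretend structural embeddings (learned from the KG) and a description embedding for a new entity.
entity_structural = rng.normal(size=(1000, dim))   # known, closed-world entities
relation_emb = rng.normal(size=dim)
description_emb = rng.normal(size=300)             # e.g. pooled word vectors of the description

# Description-projection module: a learned linear map from description space to structural space.
W_proj = rng.normal(size=(300, dim)) * 0.01
projected = description_emb @ W_proj               # open-world entity now lives in the KG embedding space

# Translational-style scoring of (new_head, relation, tail) against all candidate tails: O(N * dim).
scores = -np.linalg.norm(projected + relation_emb - entity_structural, axis=1)
top10 = np.argsort(-scores)[:10]
print(top10)
```

Spelling out the actual projection loss (L_proj) and score function at roughly this level of detail in the main text would resolve my second and third points above.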
This paper presents a new approach called FOIK to learn entity embeddings that exploits not just the KG structure but also the textual entity descriptions, evaluated on the task of open-world knowledge graph completion. Extensive experiments show the value of the approach. The authors were able to address most reviewer feedback in the revised version that was uploaded to the system.
The present paper analyzes the learning dynamics of Node Perturbation (NP), a biologically plausible learning algorithm used to train feedforward networks. Overall, the paper states a negative result about the unstability of NP due to weights diverging through learning, grounded in analytically tractable results obtained on linear models in the student-teacher setting carefully checked against numerical experiments. This analysis leads the authors to prescribe weight normalization to prevent this phenomenon. The predicted behavior in the linear regime is empirically observed in non-linear models on two training tasks, which validates the soundness of the aforementioned analysis. More precisely: - Section 3 introduces the NP algorithm, recalls that the resulting weight update provides an unbiased estimate of the SGD weight update and shows that the cosine similarity between these two weight updates for a given output layer scales as the inverse of the square-root of the size of this output layer (Eq. 8), suggesting that NP and SGD updates become nearly orthogonal for wide networks. Also, by comparing the covariance matrix of the weight updates for NP and SGD (Eq. 10), they want to highlight that NP is much noisier than SGD. - Beginning of Section 4 recalls the minimum number of training steps needed to reach an error level for NP in the linear regression setting, and introduces a “deep linear model”, consisting of two linear layers, namely $y=W_2 \\cdot W_1 \\cdot x$. The rationale behind this choice is to analyze separately the impact of the number of perturbed units and that of the size of the output layer. This deep linear model is studied in the student-teacher setting, whereby the target is given by a teacher network consisting of a linear transformation with some additive Gaussian noise (if this noise is non-zero then the authors say there is a “mismatch” between the student and teacher networks). - First paragraph of Section 4 shows the analytical NP learning dynamics in terms of the error $\\epsilon$ (defined as the squared distance between student and teacher model) and the input weight norm $a=||W_1||$ through time (where time unit is a batch iteration), obtained in the large hidden layer, noiseless (i.e. no mismatch) limit. All the calculations leading to these results are provided in the Appendix. This analysis unveils two main results, summarized by Eq. 14. First, the weight norm grows monotonically increases through time. Second, there are two working regimes for NP, depending on the value of the learning rate used. If the learning rate is smaller than a critical threshold, the error converges to zero, if not the error decreases until the input weight norm a reaches a threshold (corresponding to the sign inversion of $\\dot{\\epsilon}$) wherefrom the error rises again. Hence the “instability”, which is caused by the input weight norm crossing a threshold. This theory is successfully checked against numerical simulations (Fig 1). Also, the minimal number of training steps required to reach an error level is analytically computed (Eq. 15) and numerically verified (Fig. 2), and highlights its weak dependency with the hidden layer size. - Second paragraph extends the previous study to the case of mismatch (e.g. the labels given by the teacher network are noisy, $\\sigma_t \\neq 0$). It shows that the instability previously always happens, regardless of the learning rate value, the frontier at which it occurs is analytically derived (Eq. 
16), and these results are checked numerically (Fig. 3A). There is still a “critical” learning rate threshold delimiting different scalings of the number of updates required to reach minimal error as a function of the learning rate (Fig 3B), a behavior which is in stark contrast with that of SGD (Fig 3C). Fig 4D shows that by normalizing the NP weight update, NP learning is stabilized, at the cost of a bias in the weight update which grows with the hidden layer size. - In Section 5, the learning dynamics of NP are numerically analyzed on non-linear models on the MNIST and SARCOS training tasks. Again, the instability is observed and vanishes when normalizing the NP weight updates, suggesting that the previous analysis also holds in the non-linear setting. It is also shown that NP is up to 1000 times slower than SGD, which is partially explained by the fact that NP relies on a scalar rather than vector-valued error signal. Strengths: - The structure and the writing of the paper are clear. The figures are neat. - The mean-field model derived in the linear student-teacher setting matches very well the experiments. - The theoretical analysis seems to lead to a simple trick to unlock NP scalability. Weaknesses: - The derivations in the Appendix are not easy to follow, especially because multiple approximations are sometimes used without detailed steps (I will come back to this in the questions). - The title raises big expectations that are not met: the term “scalability” calls for complex tasks beyond MNIST. This is a major limitation of the paper. My main concern is that this work, titled "stability and **scalability** of NP learning", does not go beyond the MNIST task. I think the most important contribution of this work is theoretical: there is an excellent match between theory and experiments as per the Figures shown in the main on linear models, which extend to non-linear models and gives a very simple insight as to how to use NP properly -- that is: by normalizing the NP weight updates. I think this simple result would have been even more compelling if for instance demonstrated on a 4-5 layers convolutional architecture trained by NP on CIFAR-10, the ideal situation being: without weight update normalization the model doesn't train, with weight update normalization it trains (even very slowly, but it does). Then, it could be concluded that the theoretical model proposed totally accounts for the unscalability of NP when normalization is not applied. As it stands, with MNIST being the most difficult task tested, it's harder to conclude the same. In the bioplausible deep learning literature, the MNIST task alone as a benchmark to claim scalability is unsufficient to my eyes. However, it might be that the sole theoretical contribution of this work and MNIST benchmark abide by the standards of another community (e.g. statistical physics x ML) I may not be aware of, so my judgement might be biased by the fact I come from the bioplausible deep learning literature. Also, I think that the theoretical contributions should be better highlighted in the main. I would recommend accept if: 1 - The authors ran a CIFAR-10 experiment with a 5 layers-deep convolutional architecture, observed the same kind of behavior as on MNIST and SARCOS. 2 - Computations in the appendix are more detailed than they are now -- see the questions section above. <doc-sep>The paper considers the dynamics of training deep networks with node perturbation. 
The authors provide a detailed theoretical analysis of learning dynamics of node perturbation vs. SGD in linear networks with one hidden layer in the limit of infinite width. The analysis reveals that the input weight norm determines whether the loss increases or decreases during training with node perturbation, with large weight norm corresponding to unstable training. This motivates using weight normalization for node perturbation. Empircally, on linear and nonlinear networks, weight normalization stabilizes training with node perturbation. **Originality** The work is original. The analysis on deep linear networks is novel and the insights from the experimental and theoretical results are not previously explored. To my knowledge, this paper is the first to provide a theoretical explanation for why weight normalization is useful for node perturbation. **Quality** The contributions are high quality. The theoretical analysis appears sound and the experiments are comprehensive. One potential drawback is that the experiments mainly consider simple datasets (SARCOS and MNIST) and mostly consider only one hidden layer networks (although a few experiments consider multilayer nonlinear networks). Investigating the performance of node perturbation in more challenging settings and more complex architectures could help justify the usefulness of the linear network analysis. However, this is not strictly necessary given that many of the main contributions of the paper are theoretical. **Clarity** The paper is well written. The mathematical notation is clear and the figures are well illustrated. However, many of the key and interesting theoretical results of the paper are in the supplementary material (particularly the mean-field dynamics of NP). It may help to provide a brief sketch of these results in the main paper. **Significance** The paper may have significance to researchers specifically studying node perturbation as a learning rule. However, its applicability to the field of biologically plausible learning appears more limited especially given that node perturbation does not appear to be empirically as effective as other biologically-plausible alternative learning rules. Moreover, the experimental results are limited to simple datasets and architectures. The significance of the paper could be significantly enhanced by exploring more complex settings and showing, for example, significantly improved performance of node perturbation when it is combined with weight normalization. The authors address the limitations of their work in the discussion section. As they note, the utility of node perturbation is limited in the supervised setting, although its utility is more clear in a reinforcement learning setting. The authors may also want to comment on the potential applicability of their linear analysis to other settings; as they empirically find, the qualitative observations of the linear network extend to certain nonlinear networks. The authors may wish to specify in which settings these observations may not apply. <doc-sep>The authors present a neat analysis of the scalability and stability of the node perturbation algorithm, which is one of the popular bio-plausible credit assignment algorithms, for deep networks. Based on their analysis and inferences, they introduce a weight normalization trick that seems to alleviate the issues with the algorithm, albeit at the cost of adding a bias to the gradient estimates. 
In the first part of the paper, the authors use analytical tools to dissociate the effect of the number of output nodes of the network from the number of perturbed nodes of the network, to claim that node perturbation is scalable for deep linear networks. However, they demonstrate that the dynamics entail an instability in the weight norm. They validate these analytical results empirically in deep nonlinear networks and therefore establish a key result in training networks using node perturbation. They also show that the instability is worse when there is noise in the labels (i.e. teacher noise). Finally, they demonstrate that the weight normalization trick can help stabilize the algorithm, albeit hurting performance by introducing a bias in the algorithm -- as evidenced by lower performance on the SARCOS and MNIST tasks. Strengths: 1. The paper is well motivated and tackles an important problem in the field. The analytical methods used in the paper are an important contribution towards understanding how node perturbation algorithms can be used to develop bio-plausible learning rules in deep networks. 2. The paper is theoretically strong and demonstrates via simulations how the analytical results hold in practice, as well as when nonlinearity is introduced in the network. 3. The weight normalization solution is a smart solution and a good integration of the inferences from the analytical framework. Furthermore, it also fits well with the homeostasis viewpoint wherein the weights of a neuron are thought to conserve some quantity over time. Weaknesses: 1. The presentation of the paper can be improved, particularly how the analytical results are presented. I felt a lot of the details of how the results were derived were buried in the appendix with limited reference to these considerations/assumptions in the main text. The authors could consider rewriting certain sections of the paper to better reflect the derivations of their theorems and, in doing so, allow the reader to appreciate the contribution that this paper makes. 2. The weight normalization strategy could be better introduced, particularly how adding a weight normalization step introduces bias in the weight update. It would be great if the authors could elaborate on this bias and on how a bigger hidden layer size contributes to a higher bias. 3. The metrics used in the paper make sense while analyzing the instability of the node perturbation algorithm. However, it would be nice to report the accuracy/performance metrics on SARCOS and MNIST for SGD and weight-normalized NP. I feel this would be more complete and lay the foundation for further researchers (who could use weight-normalized NP as a baseline method). 4. The discussion about representation similarity is a bit sudden and lacks a compact presentation. Given that the merits of the paper lie in the stability analysis of the NP algorithm and proposing a possible workaround to that, I would suggest either cutting down on analyzing learned representations (although it is very important and an interesting direction) or fleshing it out more to convey the inferences better. Overall, I feel that the paper presents a very thorough analysis of the NP algorithm and demonstrates a key feature in the dynamics. If the authors could improve the writing a bit, this work would appeal to a larger audience and could make a significant impact in the field. I think the authors do a commendable job in stating the limitations of their proposal, but I feel it could be elaborated in the discussion.
Specifically, how does the weight normalization introduce bias, and what is its effect on performance? <doc-sep>The authors study a stochastic gradient-free learning rule for deep feedforward neural networks, node perturbation. The learning rule, in brief, is to perturb the activity of each neuron by a Gaussian and then update the corresponding weights in proportion to the sampled difference in the loss function. The authors develop a mean field theory for a two-layered linear network whose target function is given by a linear transformation with additive Gaussian noise. This allows them to solve for a critical learning rate at which the learning process is as fast as possible. They find that even with the critical learning rate, node perturbation is slower than stochastic gradient descent. In particular, it is asymptotically slower by approximately a factor of the output size. In addition, their theoretical results reveal an instability in the error dynamics once a certain threshold is crossed, and the authors introduce weight regularization to remedy this. Finally, the authors numerically test the performance of the regularized node perturbation algorithm on the supervised learning tasks MNIST and SARCOS, and show that the algorithm is indeed capable of learning these tasks. The authors provide original theoretical and numerical results about a particular learning algorithm which are clear and straightforward to understand. From a machine learning perspective, the main drawback of the paper is that they are mostly showing weaknesses of an existing algorithm instead of introducing something new and performant. That said, the weight regularization does successfully stabilize the learning process, and it is somewhat interesting that the algorithm is able to learn MNIST and SARCOS with the regularization in place. One problem I have with the SARCOS figures is that there is no benchmark shown; everyone knows that an accuracy of 98% on MNIST is acceptable, but since I'm not intimately familiar with the SARCOS task I would like to see either a SoTA MSE taken from the literature, or at least the MSE of the SGD-trained models alongside the NP-trained models for comparison. It is explained in the text that the target error was 5.0, but it is still not clear to me whether or not this is a good value in the grand scheme of algorithms which solve this task. Another way to make the error values easier to interpret would be to report $R^2$ instead of MSE, but either way some kind of benchmark should be shown. The central justification for the lacking performance of the algorithm is its biological plausibility. This makes sense, but I did not get a very clear message from the authors as to whether or not they really believe this algorithm is used somewhere in the brain. I agree with the authors' statement "an algorithm is biologically implausible if it takes an inordinately long time to reach good performance." However, I did not understand the citation of Figures 4E and 5B when they said "its [NP's] performance deficit is smaller when compared to a reinforcement learning rule using the error backpropagation." From what I understood, these figures show that it takes longer to train NP than reinforcement learning or SGD, especially as depth increases. If I understood correctly, it seems this would be in contradiction to what is said in the discussion, and then it is difficult to rationalize the use of NP in the brain. There should be some clarification either in the figures or in the discussion.
I also think it's worth noting that inverse problems (including SARCOS) are often solved innately by the brain. When a calf (or any quadruped) is born, it does not spend epochs learning what torques are required to move its hooves from one place to another; walking occurs immediately, suggesting that the capability was genetically encoded into the architecture, not learned. The authors could, of course, defend the claim that NP is used for learning more complex motor tasks, such as those performed by humans, but then it would also have to be clear whether or not the algorithm is really fast enough for this. I understand the motivation of using SARCOS to test the capability of the algorithm to solve a motor task. However, since the paper's story has biology at the center, I think it would be much more convincing if the authors also took one of the tasks from one of their biological citations and replicated it with their algorithms. Even including the purely model-based task from Fiete et al., 2007 would make the argument that NP aids in songbird learning more than a passing remark. Overall, I think a clearer story would be greatly beneficial to the paper and better determine its significance. If a central argument is that the brain *probably* does use regularized NP for motor learning, I would be skeptical and would be interested in further discussion of the biological literature; the citations Kornfeld et al., 2020 and possibly Bouvier et al., 2018 might support this line of argument if the results were better reflected upon in the paper. On the other hand, the story could be that the slowness of the algorithm, even in an idealized setting, demonstrates that the brain cannot be using it except as an auxiliary tool to a more powerful learning method. Societal impact is not very relevant; the results of the paper are far too theoretical for negative societal impact to be speculated upon. Performance limitations were adequately reflected upon.
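For readers who want the learning rule being analyzed in front of them, below is a minimal sketch of a node-perturbation update with a normalized weight update, in a two-layer linear student-teacher setup. This is only one plausible reading of the rule and of the "normalized NP update" discussed in these reviews, not the authors' exact implementation; all sizes, names and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-layer linear "student"; a fixed linear "teacher" provides targets.
n_in, n_hidden, n_out = 20, 100, 10
W1 = rng.standard_normal((n_hidden, n_in)) / np.sqrt(n_in)
W2 = rng.standard_normal((n_out, n_hidden)) / np.sqrt(n_hidden)
W_teacher = rng.standard_normal((n_out, n_in)) / np.sqrt(n_in)

sigma, lr = 0.1, 1e-3          # perturbation scale and learning rate (arbitrary choices)

def loss(y, t):
    return 0.5 * np.sum((y - t) ** 2)

for step in range(1000):
    x = rng.standard_normal(n_in)
    t = W_teacher @ x

    # Clean forward pass.
    h = W1 @ x
    y = W2 @ h
    L_clean = loss(y, t)

    # Perturb every node's activity with Gaussian noise and re-evaluate the loss.
    xi_h = sigma * rng.standard_normal(n_hidden)
    xi_y = sigma * rng.standard_normal(n_out)
    L_pert = loss(W2 @ (h + xi_h) + xi_y, t)

    # Node perturbation: the only error signal is the scalar loss difference, which
    # multiplies the noise at the post-synaptic node and the pre-synaptic activity.
    delta = (L_pert - L_clean) / sigma ** 2
    dW1 = -delta * np.outer(xi_h, x)
    dW2 = -delta * np.outer(xi_y, h)

    # One possible form of the normalization discussed above: rescale each update
    # to unit Frobenius norm before applying the learning rate.
    W1 += lr * dW1 / (np.linalg.norm(dW1) + 1e-12)
    W2 += lr * dW2 / (np.linalg.norm(dW2) + 1e-12)
```

The scalar error signal (`delta`) is shared across all weights, which gives some intuition for why NP is so much slower than SGD's vector-valued error feedback, as noted in the reviews.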
The authors theoretically analyze and numerically verify statistical properties of node perturbation, which is one of the more biologically plausible but slower learning rules. The authors show both the benefits and limitations of naive node perturbation in terms of the learning trajectory. In particular, node perturbation is unstable in practical regimes. They propose a biologically plausible weight normalization scheme which overcomes some of the limitations of the naive version (but introduces some bias). This work advances the theoretical understanding with a significant contribution to the neuroscience of learning. The expert reviewers agree that the work is original, clear, and of high quality. I suggest revising the title to indicate both the negative and positive sides of the analysis; perhaps "On the stability and scalability of Node Perturbation Learning" would be better.
SUMMARY: The paper presents a graph neural network (GNN) architecture with learnable low-rank filters that unifies various recently-proposed GNN-based methods. The local filters substitute the graph shift operator (GSO) by a learnable set of parameters that capture the local connectivity of each node in the graph. Moreover, a regularization penalty is proposed to increase the robustness of the model and prevent these local structures from overfitting. The paper provides proofs to justify the generality of the approach and how different methods can be seen as a particularization of the proposed scheme. Two theorems are also proved to claim the stability of the GNN architecture against dilation perturbations in the input signal. Several numerical experiments are conducted to empirically test the usefulness of the model. STRONG POINTS: The paper introduces a new GNN-based approach with greater discriminative power. The proposed approach generalizes various previously existing architectures. This is proved in the appendices. A regularization technique is proposed to avoid overfitting to local particularities of the data. Two theorems are introduced to prove the stability of the network against dilation in the input signal. The numerical experiments are extensive and convincing. This is one of the strongest points of the paper. The paper is well-structured and the style is appropriate. It is easy to follow and the points are clearly stated. WEAK POINTS: Replacing the GSO with a set of learnable parameters increases the discriminative power of the network, at the cost of sacrificing various properties of common GNN architectures. For example, this technique is no longer permutation equivariant, and transferability to larger networks will no longer be an option, since the learnt parameters are graph-dependent. Scalability problems appear when the network grows in size. This, and some possible ways of tackling it, are discussed in the conclusion. Although the theorems offer insights on the robustness of the network against perturbations on the input signal, they are restricted to dilation perturbations (which are proportional to the input signal). This is not commonly the case; perturbations often follow distributions that have nothing to do with the input signals. OVERALL ASSESSMENT AND RECOMMENDATION: The paper introduces a new architecture with larger discriminative power that generalizes various state-of-the-art methods. Although the theoretical results are not particularly strong, they are undoubtedly insightful. Then, the empirical performance of this technique is exhaustively validated through several experiments. Thus, in my opinion, this paper should be accepted. RECOMMENDATIONS TO THE AUTHORS: Using matrix notation would be helpful to clarify various equations and capture the attention of a broader range of researchers. Equally important, it will also contribute to establishing links between the schemes proposed in the paper and well-established techniques in the field of graph signal processing (GSP). Describing the connection between the operator $B_k$ and classical graph operators (adjacency, Laplacian, normalized Laplacian...) would be clarifying. Consider adding a couple of lines pointing out this relation. <doc-sep>This paper proposes L3Net, a new graph convolution that decomposes the learnable local filters into low-rank factors. It contains both spatial and spectral graph convolutions (including ChebNet, GAT, EdgeNet and so on) as special cases. It is also robust to graph noise.
Experiments are conducted on mesh data, facial recognition and action recognition, indicating improved performance over the baselines. Its robustness to graph noise is also tested. In general, the motivation, novelty and validation are good. However, I have the following concerns: Although the authors demonstrate and explain the advantages and disadvantages of L3Net, the authors do not explain the applicability of down-sampling and up-sampling in L3Net. For example, following max-pooling and up-sampling in DCNN, ChebNet can be integrated into multi-scale architectures to enhance performance (see https://arxiv.org/pdf/1807.10267.pdf ); how about formulating L3Net in a multi-scale fashion? In the experiments, I saw that three convolutional layers are used in Section 4.1 (Sections 4.2, 4.3 and 4.4 do not seem to report the depth of L3Net). I feel that three layers is a really shallow network. Why are deeper networks not used? Can the authors kindly comment on or validate this, please? In Proposition 1, it seems that controlling “K” and “L” can offer different options between ChebNet, GAT and EdgeNet. In practical training, are these settings of “K” and “L” easy to train and converge, consistently performant, and computationally light? Is any validation offered to prove this proposition and to compare the different options offered by L3Net with ChebNet, GAT and EdgeNet? If not, can the authors give reasonable comments on this, please? It seems the regularization in Section 2.2 is a very important contribution of this paper; however, no ablation study is offered (comparison with and without regularization). Can the authors kindly validate this, please? In Figure 1, the variables \\mu, c, k, M and K are not defined, which makes the figure harder to understand. Could the authors kindly add these definitions, please? In Figure 1, the authors use “left” and “right” to distinguish the four figures, which is confusing. Could the authors use a, b, c and d for the figure labels, please? The same holds for Figure 2, where the authors use “plots” and “table” to distinguish different subplots. I think this can also be improved in a similar way, as can the other figures. In Section 2.1, |V|=n. I feel that “N” would be more appropriate here, as it represents the number of nodes. <doc-sep>#################### Pros: $\\bullet$ The proposed graph convolution method is tested on different problems (object recognition on the spherical mesh, facial expression recognition, action recognition on the face and body landmarks), and real-world datasets. $\\bullet$ The proposed method performs well both under missing nodes and features and under graph noise. In particular, the effect of the local graph Laplacian regularization is notable. $\\bullet$ The relationship between the proposed L3Net and previous spatial and spectral graph convolutions is theoretically explained. $\\bullet$ The complexity of L3Net is significantly lower than that of a locally connected GNN, and it is more suitable for small-size graphs such as the face or body keypoints. #################### Cons: $\\bullet$ In the face experiments, pixel values around each node were taken as node features; however, in a similar setting in action recognition, they were not used. Considering the temporal data as in CK+, applying a similar setting by replacing the ST-GCN blocks with L3Net would make it comparable. #################### Minor: $-$ In many parts, there is a typo in the MNIST dataset's name ("MNSIT"). $-$ 2.3 first paragraph, graph "convolutoins"-> convolutions. $-$ 1.1.
second paragraph: "Chebyshev polynomial in Chenbet"-> "ChebNet or ChebConv" <doc-sep>The authors present a new definition of graph convolution that is also shown to generalize well-known existing ones. Pros: - interesting new definition of graph convolution - good theoretical contribution Cons: - experiments could have used more widely adopted benchmarks The novel contribution of the paper is sound and the theoretical explanation allows one to understand the connections with existing graph convolution definitions. The experiments are well conducted and I appreciate the indication of the standard deviation in the results. They also show significant gains with respect to the other techniques. While the currently reported experiments are adequate, it would have been more interesting to test the method on emerging benchmarking frameworks such as [1] to get better insights on the performance in a standardized setting. [1] Benchmarking Graph Neural Networks, Vijay Prakash Dwivedi, Chaitanya K. Joshi, Thomas Laurent, Yoshua Bengio, Xavier Bresson, https://arxiv.org/abs/2003.00982
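To make the contrast between a fixed graph shift operator and the learnable low-rank local filters discussed in these reviews concrete, here is a rough numpy sketch. It illustrates only the general idea (a bank of learnable, locally supported, rank-one operators in place of powers of a fixed GSO); it is not the paper's exact L3Net parameterization, and all shapes and names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def fixed_polynomial_filter(S, X, thetas):
    """Baseline: ChebNet-style filter driven by powers of a FIXED graph shift operator S."""
    out, Sk = 0.0, np.eye(S.shape[0])
    for theta in thetas:                      # thetas: list of (C_in, C_out) matrices
        out = out + Sk @ X @ theta
        Sk = Sk @ S
    return out

def learnable_lowrank_local_filter(mask, a, b, X, thetas):
    """Sketch of the learnable-filter idea: each basis operator is a rank-one matrix
    restricted to the node neighborhoods given by `mask`, with its own channel mixing."""
    out = 0.0
    for l, theta in enumerate(thetas):
        B_l = np.outer(a[l], b[l]) * mask     # learnable, locally supported operator
        out = out + B_l @ X @ theta
    return out

# Tiny example: 6-node ring graph, 3 input channels, 2 output channels, L = 4 basis filters.
n, c_in, c_out, L = 6, 3, 2, 4
A = np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1)   # ring adjacency
mask = (A + np.eye(n)) > 0                    # 1-hop neighborhood plus self-loops
X = rng.standard_normal((n, c_in))
thetas = [0.1 * rng.standard_normal((c_in, c_out)) for _ in range(L)]
a, b = rng.standard_normal((L, n)), rng.standard_normal((L, n))

Y_fixed = fixed_polynomial_filter(A / 2.0, X, thetas)               # fixed-GSO baseline
Y_learn = learnable_lowrank_local_filter(mask, a, b, X, thetas)     # a, b, thetas trainable
```

In a trained model, `a`, `b` and `thetas` would be learned parameters; the reviewer's point about lost permutation equivariance is visible here, since `a` and `b` are tied to specific node indices.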
All reviewers expressed consistent enthusiasm about this submission during the review process. No reviewer expressed concerns or objections to accepting this submission during the discussion. It is quite clear that this is a strong submission and deserves acceptance.
Authors correctly point out the tradeoff between adaptation costs and performance on novel datasets. Motivated by this tradeoff, authors introduce a novel adaptation layer, based on squeeze-and-excitation, that performs task-based feature modulation and is meta-learned in combination with a per-task learned linear head. The proposed method performs well on a variety of large-scale and difficult few-shot adaptation benchmarks at a reasonable computation cost. STRENGTHS: A well-motivated, well-reasoned and widely applicable approach. Results are convincing and the model clearly accomplishes what it sets out to accomplish, in the way that it claims to accomplish it. Paper is clearly written, if not entirely well-organized or -focused (see below). WEAKNESSES: The paper suffers from a misplaced focus in its presentation. Many of the named concepts presented as novel are not, and the truly novel contribution is somewhat limited and discussed only briefly. 1. CaSE is introduced as a novel layer but is identical to SE but for the context pooling (discussed below) and the final activation layer (an implementation detail discussed in supplementary). 2. Related to above, the adaptive mode / inference mode (described as “fundamental” on pg4 line 142) of CaSE is identical to TADAM (TADAM: Task Dependent Adaptive Metric for Improved Few-Shot Learning, NeurIPS2018), and also just a common-sense approach to handling a support vs query set. 3. I hesitate to call the UpperCaSE meta-training scheme particularly novel or coordinate-descent-based. While optimization does switch back and forth between CaSE and head parameters, the head parameters reset with every new batch. In actuality the CaSE parameters are being meta-learned, while head parameters are being set in the inner loop, and UpperCaSE is just a straightforward and common-sense approach to meta-training. This procedure (train linear head to convergence, propagate gradient into meta-learned layers) is also the exact same procedure already introduced by MetaOptNet (Meta-Learning with Differentiable Convex Optimization, CVPR2019). In my eyes the true novelty of CaSE is in taking the FiLM/TADAM approach to task-conditioning, and replacing the redundant task encoder layers with the appropriate intermediate network activations. This crucial difference is discussed only briefly in related work (pg6 line 237-238). I also consider this a somewhat limited conceptual contribution, empirical results aside. In my eyes, this calls for a fairly substantial text revision, where the contribution is mainly a novel _approach_ to efficient adaptation rather than a “new adaptive block” (lines 7-8), and the conceptual linkages to TADAM/FiLM are explored and discussed rather than the similarities/differences relative to SE, which are much less relevant in this context. I recognize this could be a pretty idiosyncratic and overly specific take though – I’ll be curious to see what other reviewers think on the novelty/contribution. Less importantly, CaSE is conceptually quite similar to TADAM, and above proposed revisions aside, TADAM is at least worth a mention in related work, as a FiLM-derivative approach to few-shot learning. Limitations are discussed clearly and fairly. Societal impacts are addressed very briefly, though since the impacts match those of few-shot learning more broadly, this is sensible. <doc-sep>The work proposes a novel method for few-shot image classification. 
The method is named UpperCaSE and is based on an adaptation of the Squeeze-and-Excitation block to learn task-contextual parameters. Their approach is a hybrid that combines meta-learning with fine-tuning of the network. The proposed hybrid approach aims to bridge the gap between fine-tuning approaches, which are more accurate, and meta-learning approaches, which have a lower adaptation cost. The optimization protocol is based on coordinate descent between the CaSE blocks in the network body (cross-task parameters) and the task-specific parameters of the head (the last linear layer). The approach achieves new SOTA results for meta-learners on VTAB+MD and ORBIT. **Strengths** - The paper is written in a clear and easy-to-follow manner. - The method requires only a single forward pass over the context. - The method is simple and novel, yet achieves very good results compared to other meta-learners. - The experimental study is comprehensive and also shows the downside on structured tasks. **Weaknesses** - I would have liked to see some ablation study and discussion on the CaSE block architecture choice. I believe that the authors adequately addressed the limitations and potential negative societal impact of their work. I appreciate the authors being honest about the lower results on structured datasets and the hypothesis that this case might require fine-tuning of the network's body. <doc-sep>This paper mainly focuses on few-shot learning. The authors advocate the importance of an efficient finetuning algorithm for FSL. To this end, they propose a new module based on the SE block, which generates scaling parameters using task-specific information. Furthermore, they propose to leverage coordinate descent in meta-training to solve the problems of instability, vanishing gradients and high memory consumption. 1. The authors conduct extensive experiments to show the effectiveness of their method. 2. The proposed coordinate descent for meta-learning is interesting and can be a potential plug-in solver for other meta-learning methods. Detailed comments are in the questions. In summary, I find the proposed module similar to an existing method in FSL, which is the main problem of the method.
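For intuition, here is a rough PyTorch sketch of the kind of context-conditioned channel scaling the reviews describe (pool over the support set in an "adaptive" pass, cache per-channel scales, reuse them on queries). It is a reconstruction of the general mechanism only, not the authors' exact CaSE block; the reduction ratio, activations and pooling choices are assumptions.

```python
import torch
import torch.nn as nn

class ContextChannelScaling(nn.Module):
    """SE-style block whose channel scales come from a task's context (support) set."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )
        self.register_buffer("gamma", torch.ones(channels))   # cached task-specific scales

    def adapt(self, context_feats: torch.Tensor) -> None:
        # "Adaptive mode": squeeze over the whole context set (batch and spatial dims).
        pooled = context_feats.mean(dim=(0, 2, 3))             # (C,)
        # Detaching is fine at test time; during meta-training the scales would stay in
        # the autograd graph so the query loss can update the MLP parameters.
        self.gamma = self.mlp(pooled).detach()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # "Inference mode": queries are modulated by the cached, task-conditioned scales.
        return x * self.gamma.view(1, -1, 1, 1)

# Usage sketch: adapt once per task on support features from a frozen backbone,
# then modulate query features; a task-specific linear head would sit on top.
block = ContextChannelScaling(channels=64)
support = torch.randn(25, 64, 8, 8)
query = torch.randn(5, 64, 8, 8)
block.adapt(support)
out = block(query)          # (5, 64, 8, 8)
```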
The reviewers consider the work technically solid, but were concerned about the contextualization of this work in the literature. Post-rebuttal, some of the reviewers' concerns were resolved.
The paper proposes Fast Lifelong Adaptive IRL (FLAIR), a learning-from-demonstration framework that aims to maintain a collection of learned strategies that may be mixed in order to model subsequent demonstrations. The method is similar to Multi-Strategy Reward Distillation (MSRD) but does not assume access to a strategy label. Overall, the paper is well-written and provides a thorough experimental evaluation. Strengths: - The paper proposes a new method for constructing a policy mixture for a new demonstration, as well as a new objective, Between-Class Discrimination (BCD), that seems to be significantly more effective than existing work at adapting to a sequence of demonstrations. - The paper conducts a thorough experimental evaluation, which includes ablations on FLAIR and even a real robot experiment with a table tennis-playing robot. The paper also includes a very thorough appendix. Weaknesses: - It seems like the effectiveness of the method may depend on the range of demonstrations available--if they are not diverse, then too few strategies are modeled, and if they are too diverse, the method may learn too many strategies. - I'm not sure about the importance of the problem setting, and I think the assumptions made in the problem setting could be motivated better. It's not clear to me how many real-world scenarios are actually modeled by this problem statement, where individual demonstrations must arrive in sequence. For example, a robot learning to play table tennis will likely have access to a library of demonstrations to start, instead of requiring each demonstration to arrive in sequence. Questions: - As a baseline, what is the performance when not given demos in a sequential manner, e.g. given demos all at once, with AIRL or GAIL? - It would be helpful to add titles to the plots in Figures 2 and 3. <doc-sep>This paper proposes an LfD framework to allow adapting to different user preferences over how a task is carried out. It uses an initial set of demonstrations to build an initial set of policies that correspond to different strategies. Then, users specify how they would like the task to be carried out by adding a demonstration. This demonstration is used to infer a policy mixture of the base set of strategies, or to learn a new strategy if the demonstration is sufficiently different from the current set of strategies. In this way, the proposed method can learn continually. The proposed method is demonstrated on 3 simulation environments and a real-world table tennis environment. Strengths The proposed method is an interesting idea that, in principle, allows for more strategies to be learned and modeled as more demonstrations are added into the system. Results on the real-world table tennis domain are impressive (especially those shown in the supplementary video). Weaknesses One of the main weaknesses is the experimental evaluation in simulation. The most important aspects of this work, with respect to prior work, appear to be the ability to continually adapt to new strategies (specified by new demonstrations). However, the evaluation of the adherence to strategies is mostly qualitative. It would be very useful to show additional experiments in simulation where there is a collection of ground-truth strategies that are specified by an initial set of humans (similar to the real-world table tennis experiment).
Then, quantitative evaluations on how well learned policies adhere to the desired strategy would be possible, instead of the current experiments that rely on unsupervised strategy discovery (through DIAYN, reference 37). From this perspective, it could also be valuable to show results on other domains in simulation where ground-truth strategies can more easily be specified (for example, robotic manipulation, with different grasps possible for objects, or different speeds for trajectories that are executed on the arm). Related to this, it would be nice to have more quantitative metrics for evaluating adherence to desired strategy in Sec 5.1 (e.g. returns under the ground-truth reward function for the strategy, not just the general task reward function). More comments follow: - What if it takes more than one demonstration to show a user-specific strategy? Would the method be able to handle this case? - Sec 4.2 - how many trajectories are needed to estimate the objective and find the policy mixture weights? Do you then need to collect additional trajectories to estimate the KL divergence for line 5 of algorithm 1, once the mixture weights have been found? - Sec 6 - how is RL run on the real robot in a sample efficient way? Does the human need to do manual resets? More details on this process would be helpful. - It's a pretty strong claim to put "crowdsourced demonstrations" in the title without having extensive evaluation with several humans. - There are a few missing references for crowdsourced demonstrations in the paper (https://arxiv.org/pdf/1811.02790.pdf, https://arxiv.org/abs/1911.04052, https://arxiv.org/abs/2202.02005). <doc-sep>This paper presents FLAIR, a new algorithm for lifelong “personalized” inverse RL that can rapidly adapt to new heterogeneous demonstrations. They maintain a set of learned policies corresponding to unique skills in demonstrations encountered so far; new demonstrations are modeled as a mixture of existing policies (if the behavior is captured sufficiently well) or a new policy (if the behavior is not). In three simulated continuous control environments, FLAIR outperforms baselines in adaptability, efficiency, and scalability. A real robot experiment is also performed to evaluate the utility of FLAIR's policy mixture. This is a strong paper that tackles practical problems in learning from demonstration: lifelong deployment and heterogeneous demonstrations due to varying human preferences. The proposed approach is novel and technically sound, with an intuitive procedure (Algorithm 1) and a novel “Between-Class Discrimination” loss function. Experimental results are organized and presented well, demonstrating a win for FLAIR in adaptability, efficiency, scalability, and policy performance over baseline approaches Adversarial IRL (AIRL) and Multi-Strategy Reward Distillation (MSRD). Experiments suggest FLAIR is a new state-of-the-art approach for lifelong IRL from heterogeneous demonstrations. They also evaluate the quality of FLAIR’s policy mixture in a real robot experiment, and the video supplement gives good visual intuition for the experiment. While strong, the paper has some weaknesses. Firstly, the authors claim that the code and data will be open-sourced, but it currently is not and is not available in the supplement; it would be ideal to have these available in order to reproduce simulation results, for example. 
Secondly, the clarity of the writing and presentation can be improved; in particular, Figure 1a is very difficult to parse and the text could benefit from proofreading. Thirdly, the results seem to be weaker for Lunar Lander and Bipedal Walker than the (very simple) Inverted Pendulum environment, as the respective figures are relocated to the appendix (Figure 2 in the appendix shows weak correlation compared to Figure 2 in the main text) or absent from both the text and appendix (e.g., the counterpart for Figure 4). Fourthly, it is unclear why the real robot experiment is a different experiment than the simulations (i.e., the FLAIR vs. AIRL vs. MSRD comparison); some clarification from the authors here would be appreciated. More minor notes / requests for clarification: - I am not sure why “crowdsourced” is emphasized in the title and “democratize access to robotics” is emphasized in the Introduction, as there is no large-scale data collection from crowdsourced humans in this paper. - Have the authors considered a “smoothing” as opposed to “filtering” approach in Algorithm 1, which perhaps could recompute mixture weights for old demos with newer policies that were not available at the time? - In Section 5.2 under Q6, why does FLAIR recover more than 5 strategies if there are only 5 ground truth policies? Is it approximation error? - Which environment is the data in Figure 3 from? - It would be nice to include visualizations of the different heterogeneous demonstrations in simulation for better intuition in Figure 4 and other parts of the paper.
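For concreteness, the "mixture of existing strategies or new strategy" decision that several reviews discuss (Algorithm 1) can be caricatured as below. This is a deliberately simplified, self-contained stand-in: strategies are reduced to densities over demonstration features and the paper's KL test is replaced by a log-likelihood threshold, so every function, name and threshold here is ours rather than the authors'.

```python
import numpy as np
from scipy.stats import multivariate_normal
from scipy.optimize import minimize

def fit_mixture_weights(demo_feats, strategy_densities):
    """Fit simplex weights over existing strategies to explain a new demonstration."""
    K = len(strategy_densities)
    logp = np.stack([dens.logpdf(demo_feats) for dens in strategy_densities], axis=1)  # (N, K)

    def neg_loglik(z):                        # softmax keeps the weights on the simplex
        w = np.exp(z - z.max()); w /= w.sum()
        return -np.sum(np.log(np.exp(logp) @ w + 1e-12))

    res = minimize(neg_loglik, np.zeros(K), method="Nelder-Mead")
    w = np.exp(res.x - res.x.max()); w /= w.sum()
    return w, -res.fun / len(demo_feats)      # weights, average log-likelihood of the demo

def process_demo(demo_feats, strategies, loglik_threshold=-5.0):
    """Either explain the demo as a mixture of known strategies or spawn a new one."""
    if strategies:
        w, avg_ll = fit_mixture_weights(demo_feats, strategies)
        if avg_ll > loglik_threshold:         # mixture explains the demo well enough
            return ("mixture", w)
    mu = demo_feats.mean(axis=0)              # new strategy: here just a Gaussian fit
    cov = np.cov(demo_feats.T) + 1e-3 * np.eye(demo_feats.shape[1])
    strategies.append(multivariate_normal(mean=mu, cov=cov))
    return ("new_strategy", len(strategies) - 1)
```

In the actual method the "densities" would be AIRL-learned policies and rewards and the acceptance test a KL-divergence check, but the control flow (sequential demos, mixture first, new strategy only when needed) is the part the reviews are probing.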
Phase 1: Strengths: The submission provides a new and technically relevant method. It is well structured and intuitive in its argument. It provides a thorough experimental evaluation, including real-world robot experiments in a challenging domain. Weaknesses: Some weaknesses are found in the evaluation, and multiple reviewers ask for clarifications. In particular, these include the different choices for comparisons and baselines between simulation and real experiments, weaker results on some of the toy domains, as well as quantitative metrics for adapting to different strategies. Phase 2: The feedback was originally borderline, with slightly more positive than negative reviews. Many points were addressed during the review process, and the final reviews are generally positive (3 weak accepts). The reviews point out that the work is well structured and intuitive in its argument. It provides a thorough experimental evaluation, including real-world robot experiments in a challenging domain, but the reviews also noted required clarifications around the evaluations and confusion about some of the baselines. I agree with the reviewers and recommend acceptance. Please take the remaining points from the review process seriously and follow up with improvements on open points and promised changes.
This paper shows that even when the posterior is as private as targeted in the beginning, sampling from the posterior with SGLD might not be as private as targeted. The authors prove the theorem for a Bayesian linear regression problem. They prove that for n big enough, sampling from the posterior is (\\epsilon, \\delta)-differentially private (DP), but there is a step in which releasing a sample will not be (\\epsilon^\\prime, \\delta)-DP for \\epsilon^\\prime=\\omega(n \\epsilon). This work is quite interesting and important in the sense that SGLD is used in many works in the literature, and it was previously proved that SGLD with specific parameter choices provides (\\epsilon, \\delta)-DP. This paper finds a counterexample to the previous finding, with a correct analysis. However, the structure of the paper can be improved. In the title and introduction, it is claimed that SGLD might not provide (\\epsilon, \\delta)-DP for deep learning. But the analysis is made for Bayesian linear regression. It is not clear to me whether it is generalizable to (Bayesian) deep neural networks or not. One weakness of this paper is the literature review. There are papers that use SGLD for differentially private deep learning; it would be very useful to cite these works to understand whether these methods eventually provide (\\epsilon, \\delta)-DP or not. It is confusing what is proposed in Section 5. It is mentioned that the bounds scale poorly with dimension, but can still be useful for Bayesian sampling in low-dimensional problems. Is the method still proposed as an alternative for (\\epsilon, \\delta)-DP for deep networks? It is important to show that SGLD might not always give an (\\epsilon, \\delta) differential privacy guarantee. But the text should be improved to clarify the points that I mentioned above. Maybe the title and the introduction should be revised, or some analysis could be added to show this is also applicable to deep learning. <doc-sep>This paper provides one concrete example, showing that revealing one posterior sample generated by SGLD has the risk of a privacy breach when the number of SGLD sampling iterations is moderate, while exact posterior sampling has little risk of a privacy breach. 1. The counterexample constructed is fairly restrictive. It is for a particular model, for a particular data set, for a particular stochastic scheme (i.e., cyclic-SGLD) and for a particular learning rate. Does the same result hold if we use the common sample in each step of SGLD? So I doubt that this example provides general insights. 2. According to the proofs in the appendix, k in Lemma 4.5 is fairly small. In other words, the privacy breach can occur when the SGLD has only scanned the full data set a very small number of times (less than 10 epochs, as shown in Figure 1). Thus, even for this particular counterexample, I don't think the result is practically meaningful. 3. Section 5 is really an incomplete analysis. Overall, I consider the contribution of the paper to be quite limited. By definition, it is sufficient to find a pair of neighboring data sets to counter-prove the loss of privacy. But the results also depend on the specific setup of the SGLD algorithm, which I believe is not very proper. In common privacy-preserving algorithms, one typically injects Laplace or Gaussian noise. To show that such a mechanism works, we always need some lower bound on the noise variance. Similarly, if SGLD preserves privacy, there are potentially some requirements on the algorithm implementation.
To counter-prove that, I suppose one needs to show that no matter how one tunes the SGLD algorithm, the privacy breach is inevitable. <doc-sep>This paper studies the privacy guarantee of Bayesian learning using Stochastic Gradient Langevin Dynamics (SGLD). Since the SGLD updates are stochastic, it is often thought that the solution can be suitable for preserving the privacy of the data that is used to train the algorithm. Using a counter-example, this paper shows that it is not necessarily correct to assume so. Overall, this paper presents a rigorous analysis of the differential privacy of Bayesian learning using SGLD. It uses Bayesian linear regression as a simple example to demonstrate that while differential privacy holds at the beginning of the SGLD updates and similarly at convergence, it may not hold during the intermediate steps of the SGLD updates. Both the theoretical analysis and the empirical graph in Figure 1 back their claim. The paper is mainly a theoretical paper and seems to appropriately analyse the differential privacy of SGLD. The claims seem accurate, although I could not verify the details of all the proofs, as they are fairly long. Having said that, the paper can be significantly improved in its writing. In many places, it assumes a lot of background from the reader and uses terms without providing the required explanations. For example, on page 2, when discussing Ma et al. (2019), it mentions an \\epsilon-mixing time bound without providing any clear context or explanation. Also, as per my understanding, there are a couple of statements which seem incorrect: On page 2, when starting to discuss differential privacy, it says “…, a differentially private algorithm promises the data owners that their *utility* will not change, with high probability, by adding their data to the algorithm’s database.” I do not think differential privacy makes any claim about utility. Theorem 1, which is a key result of this paper, uses three notations: \\epsilon, \\epsilon’, \\epsilon’’. The role of \\epsilon and \\epsilon’ does not seem clear. It appears there is an error. Are \\epsilon and \\epsilon’ the same? On page 4, the parameters n; c; xl; xh; gamma_1 etc. are not explained properly. Figure 1 is referred to as Figure 4 – this should be corrected. On page 7, the sentence “It then estimates the average slope and throws away the outliers that deviate too much from the average slope.” It is not clear what the authors mean by “slope” here. Spelling errors: -On page 4: “known” should be “know” -On page 4: “a well known results” should be “well known results”! -On page 6: “peeked” should be “peaked” -In many places in the text, “i’th”, “j’th” etc. use the math symbols without LaTeX mode. -In a lot of places, the full stop is missing (both in the text and in Lemma statements). This paper analyses the differential privacy of the SGLD algorithm. It uses Bayesian linear regression as an example to demonstrate that while differential privacy holds at the beginning of the SGLD updates and similarly at convergence, it may not hold during the intermediate steps of the SGLD updates. The results seem convincing. <doc-sep>The paper studies the differential privacy of stochastic gradient Langevin dynamics (SGLD) as an MCMC method. The paper shows, via Bayesian linear regression, that approximate sampling using SGLD may result in an unbounded privacy loss in the middle regime. Strengths: 1, It is interesting to know that SGLD can result in unbounded privacy loss during the middle of the sampling procedure.
2, Figure 1 clearly illustrates the main idea of the claim. Weaknesses: 1, The theoretical results in this paper are based on a simple Bayesian linear regression problem, as shown in Eq. (4) on page 3. However, the paper mentions Bayesian neural networks in several misleading places. For example: in the abstract, "This interim region is essential, especially for Bayesian neural networks, as it is hard to guarantee convergence to the posterior", or in the introduction, "Neither of these cases is suitable for deep learning and many other problems, as one would limit the model’s accuracy and the other is unattainable in a reasonable time" on page 1. The authors should be clear about their contributions, so readers can position this paper appropriately. 2, Subsections 4.1 - 4.2 are proof sketches to show that approximate sampling of the posterior with Bayesian linear regression by SGLD is not differentially private in some steps (Theorem 1). It would be better to put them into a new section and explicitly state their relation to Theorem 1. 3, Section 4.3 tries to remove the unknown c in Eq. (5). However, $p(\\theta | W)$ is not the original posterior $p(\\theta | D)$ based on dataset $D$ anymore. What is the relationship between $p(\\theta | W)$ and $p(\\theta | D)$? 4, The manuscript is not ready and needs to be further proofread. For example, commas and periods are missing in many places: "In this section, we will consider the differential privacy guarantees provided by taking one sample from the posterior for the Bayesian linear regression problem on domain D". Similarly for Theorem 1 and Lemmas 4.1, 4.4 and 4.5. It also should be "steps" in "but there will be some step in which SGLD will result in unbounded loss of privacy." The paper explores the privacy-preserving performance of SGLD and shows that its privacy loss can be unbounded in the middle regime of sampling. The finding is interesting and useful. However, the authors should state their contributions clearly, and some sentences are misleading. The paper should also be re-organized and proofread before it can be accepted.
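For readers who want the algorithm under discussion written out, here is a generic SGLD sampler for Bayesian linear regression with a Gaussian prior. It is a sketch of the standard update only; the step size, prior scale and uniform minibatching below are illustrative and are not the paper's specific construction (in particular, not its cyclic minibatch schedule or the learning rate used in the counterexample).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data.
N, d = 1000, 5
X = rng.standard_normal((N, d))
y = X @ rng.standard_normal(d) + 0.5 * rng.standard_normal(N)

tau2, sigma2 = 1.0, 0.25          # prior variance and observation-noise variance
eta, batch = 1e-4, 32             # step size and minibatch size (arbitrary choices)
theta = np.zeros(d)

def grad_log_posterior(theta, Xb, yb):
    # Gradient of the log prior plus a rescaled minibatch estimate of the log-likelihood gradient.
    g_prior = -theta / tau2
    g_lik = (N / len(yb)) * Xb.T @ (yb - Xb @ theta) / sigma2
    return g_prior + g_lik

samples = []
for t in range(5000):
    idx = rng.choice(N, size=batch, replace=False)
    g = grad_log_posterior(theta, X[idx], y[idx])
    # SGLD: half-step along the stochastic gradient plus injected Gaussian noise.
    theta = theta + 0.5 * eta * g + np.sqrt(eta) * rng.standard_normal(d)
    samples.append(theta.copy())
# Releasing an early iterate, an intermediate iterate, or a (near-)converged sample from
# `samples` corresponds to the three privacy regimes the reviews contrast.
```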
This paper shows that SGLD can be non-private (in the sense of differential privacy) even when a single step satisfies DP and also when sampling from the true posterior distribution is DP. I believe that it is useful to understand the behavior of SGLD in the intermediate regime. At the same time, the primary question is whether SGLD is DP when the parameters are chosen so as to achieve some meaningful approximation guarantees after some fixed number of steps T, and the algorithm achieves them while satisfying DP (but at the same time does not satisfy DP for some number of steps T' > T). Otherwise, the setting is somewhat artificial and I find the result to be less interesting and surprising. So while I think the overall direction of this work is interesting, I believe it needs to be strengthened to be sufficiently compelling.
This paper proposes the use of an evolutionary algorithm to construct decision-based black-box adversarial examples with L0 or sparsity constraints against image classifiers such as CNNs and Image Transformers. The algorithm uses an L2 distance constraint to check the fitness of a solution, and employs several tricks such as differential recombination and mutation to improve the quality of the solution. The experimental results demonstrate that the attack is more effective than the current SOTA sparse attacks, and is almost as effective as white-box attacks given enough queries. ***Strengths*** - The paper is largely clear and well-written. - The experimental results are solid, and experiments are carried out on vision datasets and models of interest. - The attack is both sparse and effective. ***Weaknesses*** - The main issue with this paper is the lack of an intuitive explanation as to why this attack is better at finding sparse and effective adversarial examples than previous work. I would have liked to see a more detailed algorithmic comparison with previous work. Overall this is a solid paper that makes a reasonable contribution to a problem of some interest to the community. +++++++++++++++++++++++++++++++++ Having read the rebuttal, I retain my score. <doc-sep>This work proposes a novel sparse attack method called SparseEvo. Based on an evolutionary algorithm, SparseEvo searches for a sparse adversarial perturbation within a limited query budget. It can significantly reduce the number of queries compared with the SOTA method, i.e., the Pointwise attack. The paper also conducts the first vulnerability evaluation of a ViT on ImageNet in a decision-based and $l_0$-norm-constrained setting. - The proposed methods are well-motivated and novel. The paper is easy to follow for an adequately prepared reader. Prior work is sufficiently discussed. - The experiments are convincing and the experimental results show the effectiveness of the proposed attack. - The amount of detail is good; it seems sufficient to reproduce the results. Overall, I think this paper is a good one. <doc-sep>This paper proposes a black-box decision-based sparse attack based on an evolutionary algorithm (called SparseEvo). The authors test their method on two types of classification models and two popular vision datasets: ResNet (CIFAR10 and ImageNet) and Vision Transformer (ImageNet). Through the comparison with the Pointwise attack (for efficiency and sparsity) and PGD0 (for success rate), SparseEvo achieves good performance in both success rate and efficiency. Pros: 1. The experimental performance is really good for a decision-based sparse attack. 2. Using the L2/L1 distance as the fitness function and using an evolutionary algorithm instead of estimated gradients to generate adversarial examples is novel. Cons: 1. The comparison with the Pointwise attack in the targeted attack experiments is somewhat unfair. SparseEvo relies on a random target image to generate the adversarial example, while the Pointwise attack doesn't. It would be better to find a way to let the Pointwise attack leverage the target image, or to adapt another black-box attack to the sparse setting. 2. I am wondering what image size is used in the ImageNet experiments. Since you only reduce the search space by a factor of the number of channels (typically 3), I am wondering how well SparseEvo scales to large images. This paper proposes a novel black-box decision-based sparse adversarial attack method based on an evolutionary algorithm.
The basic idea is to use the L2/L1 distance to the original image as the fitness function to adjust the current images towards the target images. The experimental results are good. I am only a little concerned about the comparison in the targeted attack, since it is somewhat unfair (see the Main Review). <doc-sep>The paper proposes an evolution-based algorithm to conduct a sparse attack against convolutional deep neural networks and vision transformers. The evaluation results show that the proposed method requires fewer model queries than the state-of-the-art sparse attack Pointwise for both untargeted and targeted attacks. Strengths: The paper shows promising experimental results for the method. As a paper proposing a black-box method, it also shows a comparison with a white-box attack to showcase its superiority (my concerns are described in the weaknesses part below). Weaknesses: 1. The experiment section could be more comprehensive. The submission only compares with one paper on decision-based sparse attacks, and that work only shows experiments on the MNIST dataset, not ImageNet. (a) The comparison on ImageNet shown in this submission is not fair (the PointWise method's sparsity is always 1, which means it basically fails to create any sparsity): if the comparison were to be made, the submission could instead make some minimal adjustments to the baseline method to make it not completely useless. (b) There are many other decision-based attacks on ImageNet. Although most of them show results under the L-2 metric (e.g., BA, HSJA, QEBA, NLBA, PSBA, SignOPT, etc.) and some of them under the L-\\infty metric (e.g., RayS), many of them can, in my experience, be easily adapted to the L-0 case with projections. The submission could try to compare with these stronger baselines on ImageNet to showcase its method's performance. 2. The paper could discuss its relationship to and differences from the existing literature more clearly. For example, using evolutionary methods for decision-based attacks is not an invention of this submission: the paper “Efficient Decision-based Black-box Adversarial Attacks on Face Recognition” proposed one in 2019. Also, as mentioned above, though many existing decision-based attack papers did not show results on L-0 metrics, they can be adapted easily and are thus closely related to this paper. The paper should consider a more detailed discussion of its related work and justify the novelty of the proposed method. Questions: On the second plot in Figure 6, for the two solid curves, the red curve (SpaEvo (ViT)) even goes lower than the black curve (PGDl0 (ViT)) both at the beginning and the end. Is there an explanation for this observation? Are you using different images for different curves, so that the white-box attack PGD is not the upper bound of the attack performance in this plot? Or is PGD not optimized properly? The “Untargeted-ImageNet” plot in Figure 5(b) is also weird in a similar sense. The paper shows good experimental results, but there are some concerns about the experimental part (whether it is fair and valid). Also, the novelty of the method and the relationship with the literature are not discussed in detail.
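To make the kind of decision-based, sparsity-constrained evolutionary search under review more tangible, here is a toy sketch. The hard-label oracle, fitness, mutation and recombination choices are a simplified caricature written by us, not the actual SparseEvo algorithm, and `predict_label` is a placeholder classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_label(img):                       # placeholder hard-label black-box oracle
    return int(img.mean() > 0.5)

def fitness(mask):                            # fewer borrowed pixels = sparser perturbation
    return int(mask.sum())

def evolve_sparse_attack(x_orig, x_target, y_orig, pop=20, iters=200, flip_frac=0.02):
    n_pix = x_orig.size
    # Start from the all-ones mask: the candidate equals x_target, which is misclassified.
    population = [np.ones(n_pix, dtype=bool) for _ in range(pop)]
    for _ in range(iters):
        # Recombination: each bit of the child comes from one of two random parents.
        i, j = rng.choice(pop, size=2, replace=False)
        child = np.where(rng.random(n_pix) < 0.5, population[i], population[j])
        # Mutation: try to revert a few pixels to their original values (more sparsity).
        on = np.flatnonzero(child)
        if on.size:
            flips = rng.choice(on, size=max(1, int(flip_frac * on.size)), replace=False)
            child[flips] = False
        adv = np.where(child, x_target.ravel(), x_orig.ravel()).reshape(x_orig.shape)
        # Decision-based constraint: keep the child only if it stays adversarial
        # and improves on the worst member of the population.
        if predict_label(adv) != y_orig:
            worst = max(range(pop), key=lambda k: fitness(population[k]))
            if fitness(child) < fitness(population[worst]):
                population[worst] = child
    return min(population, key=fitness)

x_orig = rng.random((8, 8)) * 0.4             # toy "image" classified as 0
x_target = rng.random((8, 8)) * 0.4 + 0.6     # toy starting point classified as 1
best_mask = evolve_sparse_attack(x_orig, x_target, y_orig=predict_label(x_orig))
```

Each oracle call here is one model query, which is why the query budget, rather than gradient access, is the binding constraint in this setting.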
This paper introduces a technique to generate L0 adversarial examples in a black-box manner. The reviews are largely positive, with the reviewers especially commenting on the paper being well written and clearly explaining the method. The main drawback raised by the reviewers is that the method is not clearly compared to some prior work, but in the rebuttal the authors provide many of these numbers. On the whole, this is a useful and interesting attack that would be worth accepting.
Gradient stochasticity is used to analyse the learning dynamics of SGD. It consists of two aspects: norm stochasticity and directional stochasticity. Although the norm stochasticity is easy to compute, it vanishes when the batch size increases. Therefore, it can be hard to use it to measure the learning dynamics of SGD. The paper is motivated by measuring the learning dynamics via directional stochasticity. Directly measuring the directional stochasticity with the angle distribution is hard, so the paper uses the vMF distribution to approximate the uniformity measurement. The paper theoretically studies the proposed directional uniformity measurement. In addition, the experiments empirically show that the directional uniformity measurement is more coherent with the gradient stochasticity. 1. As I’m not a theory person, I’m not very familiar with the related work in this line. But the analysis of the directional uniformity is interesting and original. So is the vMF approximation. 2. The theoretical analysis looks comprehensive and intuitive. And the authors did a reasonably good job on the experiments. 3. This paper provides some insights that warn people to pay attention to the directions of SGD. But the paper didn’t provide an answer as to how this study can inform improvements to SGD. It’s true that the directional uniformity increases over training and that it is correlated with the gradient. But what this could bring us remains unstudied. 4. Can the authors provide any theoretical or empirical analysis of why the directional uniformity didn’t increase in deep models like CNNs, and why it increases when BN and residual connections (Res) are applied? <doc-sep> Quality and clarity: good. Originality and significance: This paper studies the stochasticity of the norms and directions of the mini-batch gradients, to understand SGD dynamics. The contributions of this paper can be summarized as follows: a) This paper defines gradient norm stochasticity as the ratio of the variance of the stochastic norm to the expectation of the stochastic norm. It theoretically and empirically shows that this value is reduced as the batch size increases. b) This paper empirically finds that the distribution of angles between a mini-batch gradient and a given uniformly sampled unit vector converges to an asymptotic distribution with mean 90 degrees, which implies a uniform distribution of the mini-batch gradients. c) This paper uses the von Mises-Fisher (vMF) distribution to approximate the distribution of the mini-batch gradients. By theoretically and empirically observing that the estimated parameter \\hat \\kappa decreases during training, they claim that the directional uniformity of mini-batch gradients increases over SGD training. The idea of measuring the uniformity of mini-batch gradients through the vMF distribution seems interesting. But it is unclear how the study of this stochasticity dynamics of SGD can be related to the convergence behavior of SGD for non-convex problems and/or the generalization performance of SGD. There are additional concerns/questions regarding both the theoretical and the empirical parts: [1] Section 3.3: The assumption that p_i(w_0^0) = p_i(w_1^0) = p_i is not reasonable when theoretically comparing \\hat \\kappa(w_1^0) and \\hat \\kappa(w_0^0). The concentration parameter \\hat \\kappa(w) should be estimated from the sum of the normalized mini-batch gradients "\\hat g_i(w)/||\\hat g_i(w)||". Instead of using the mini-batch gradients, this paper uses the sum of "p_i-w" by assuming that "p_i(w_0^0) -w" is parallel to "\\hat g_i(w)", which is ok.
However, when comparing \\hat \\kappa(w_0^0) and \\hat \\kappa(w_1^0), we say \\hat \\kappa(w_0^0) = h(\\sum_i (p_i(w_0^0) - w_0^0)) and \\hat \\kappa(w_1^0) = h(\\sum_i (p_i(w_1^0) - w_1^0)). It is not reasonable to use the same p_i for p_i(w_0^0) and p_i(w_1^0), because p_i(w_0^0) - w_1^0 is definitely not parallel to \\hat g_i(w_1^0). [2] Section 3.3: The assumption \\hat g_i(w_t^{i-1}) \\approx \\hat g_i(w_t^0) is not convincing. With this assumption, the paper writes w_1^0 = w_0^0 - \\eta\\sum_i \\hat g_i(w_0^{i-1}) = w_0^0 - \\eta\\sum_i \\hat g_i(w_0^0) = w_0^0 - \\eta \\sum_i (p_i - w_0^0). These equalities are not persuasive, because \\sum_i \\hat g_i(w_0^0) is the full gradient g(w_0^0) at w_0^0. In other words, these equalities imply that from w_0^0 to w_1^0 (one epoch), SGD is doing full gradient descent: w_1^0 = w_0^0 - \\eta g(w_0^0), which is not the case in reality. [3] Experiment: The batch size should be consistent with the given assumption in the theoretical part. In the theoretical part, \\hat \\kappa(w_1^0) < \\hat \\kappa(w_0^0) is based on the assumption that |\\hat g_i(w_t^{i-1})| \\approx \\tau for all i, with a *large mini-batch size*. But in the experiments, they demonstrate \\hat \\kappa(w_1^0) < \\hat \\kappa(w_0^0) using a small batch size, namely 64. The authors should either provide experiments with a large batch size or try to avoid the large-batch-size assumption in the theoretical part. [4] The CNN experiment: it would be better to add a discussion of why \\kappa increases in the early phase of training. [5] The experimental results show that, by the end of training, all models (FNN, DENN and CNN) have a very large value of \\kappa, around 10^4. This value implies that the mini-batch gradient distribution is quite concentrated, which contradicts the statement in the introduction that "SGD converges or terminates when either the norm of the minibatch gradient vanishes to zeros, or when the angles of the mini-batch gradients are uniformly distributed and their non-zero norms are close to each other". It also contradicts the experiment in Section 3.2, which implies that the mini-batch gradients are uniformly distributed after training. [6] The notation in this paper can be improved; some notations use "i" for the batch index, while others use "i" for a single data sample. Some notations in Sections 3.3 and 3.1 could be moved to Section 2 (Preliminaries). It would be clearer to define all the notation in one place. Typos: -Section 3.1: first paragraph, E\\hat g(w) -> E[\\hat g(w)]; - Paragraph before Lemma 2: \\hat \\kappa increases -> \\hat \\kappa decreases; - Paragraph after Theorem 2: duplicated "the directions" in "If SGD iterations indeed drive the directions the directions of minibatch gradients to be uniform".<doc-sep>Summary: This work provides an analysis of the directional distribution of stochastic gradients in SGD. The basic claim is that the distribution, when modeled as a von Mises-Fisher distribution, becomes more uniform as training progresses. There is experimental verification of this claim, and some results suggesting that the SNR is more correlated with their measure of uniformity than with the norm of the gradients. Quality: The proofs appear correct to me. Clarity: The paper is generally easy to read. Originality & Significance: I don't know of this specific analysis existing in the literature, so in that sense it may be original. Nonetheless, I think there are serious issues with the significance.
The idea that there are two phases of optimization is not particularly new (see for example Bertsekas 2015), and the paper's claim that uniformity of direction increases as SGD converges is easy to see in a simple example. Consider quadratics f_i(x) = |x - b_i|^2 with different centers. Clearly the minimum will be the centroid. Outside of a ball of a certain radius around the centroid, all of the gradients grad f_i point in the same direction; closer to the minimum they will point towards their respective centers. It is pretty clear, then, that uniformity goes up as convergence proceeds, depending on the arrangement of the centers. The analysis in the paper is clearly more general and meaningful than the toy example, but I am not seeing what the take-home is other than the insight generated by the toy example. The paper would be improved by clarifying how this analysis provides additional insight, and by providing more analysis on the norm SNR vs. uniformity experiment at the end. Pros: - SGD is a central algorithm and further analysis laying out its properties is important - Thorough experiments. Cons: - It is not entirely clear what the contribution is. Specific comments: - The comment at the top of page 4 about the convergence of the minibatch gradients is a bit strange. This could also be seen as the reason that analyses of the convergence of SGD rely on annealed step sizes. Without annealing step sizes, it's fairly clear that SGD will converge to a kind of stochastic process. - The paper would be stronger if the authors tried to turn this insight into something actionable, either by providing a theoretical result that gives guidance or some practical algorithmic suggestions that exploit it. Dimitri P. Bertsekas. Incremental Gradient, Subgradient, and Proximal Methods for Convex Optimization: A Survey. ArXiv 2015.
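To make the toy-example intuition above concrete, here is a minimal numerical sketch (my own construction, not taken from the paper; the dimensions, the number of quadratics, and the use of the mean resultant length as a rough stand-in for the vMF concentration are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 10, 200
centers = rng.normal(size=(n, d))   # b_i in f_i(x) = |x - b_i|^2
centroid = centers.mean(axis=0)     # minimizer of the average objective

def direction_concentration(x):
    """Mean resultant length of the per-example gradient directions at x.
    Close to 1 when the directions agree, close to 0 when they are spread out;
    this is the quantity the vMF concentration parameter kappa roughly tracks."""
    grads = 2.0 * (x - centers)                     # grad f_i(x) = 2 (x - b_i)
    dirs = grads / np.linalg.norm(grads, axis=1, keepdims=True)
    return np.linalg.norm(dirs.mean(axis=0))

far_point = centroid + 20.0 * rng.normal(size=d)    # far from the minimum
near_point = centroid + 0.05 * rng.normal(size=d)   # close to the minimum

print("far from the centroid :", direction_concentration(far_point))   # near 1
print("near the centroid     :", direction_concentration(near_point))  # much smaller
```

Under these assumptions the per-example gradient directions are nearly aligned far from the centroid and close to uniformly spread near it, which is exactly the reviewer's point that the observed trend follows from a very simple mechanism.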
The paper presents a careful analysis of SGD by characterizing the stochastic gradient via von Mises-Fisher distributions. While the paper has good quality and clarity, and the authors' detailed response has further clarified several raised issues, some important concerns remain: Reviewer 1 would like to see a careful discussion of related observations made in other work in the literature, such as low-rank Hessians in the over-parameterized regime; Reviewer 2 is concerned about the significance of the presented analysis and observations; and Reviewers 2 and 4 both would like to see how the presented theoretical analysis could be used to design improved algorithms. In the AC's opinion, while a solid theoretical analysis of SGD is definitely valuable, it is highly desirable to demonstrate its practical value (considering that it does not clearly provide new insights about the learning dynamics of SGD).
The authors combined existing ophthalmology datasets and introduced additional biomarkers / labels. The authors identified ML tasks that are relevant to patient care, and benchmarked classification performance on image data, biomarkers, and multimodal inputs. - The authors identified good ML tasks relevant to ophthalmology patient care and showed how they can be incorporated into patient care - The authors showed good summaries of the labels / biomarkers created for the data. - It would be nice if the authors could elaborate more on the clinical significance of the biomarkers, why these markers were chosen, and what the value ranges of the biomarkers and their implications are. - The graders' qualifications are unclear, as is how the biomarkers are acquired (is there only one grader for each scan or are there multiple graders for each scan?) - There are standard deviations for the balanced accuracy. Why aren't they included for the other metrics? - It is unclear if there is any domain shift between the data collected from the two different studies. They study different conditions, which are also the labels of the classification task. Any domain shift between the datasets would compromise the classification result. It is also unclear why these two datasets are selected, and what the clinical impact of such a combined dataset is. - Table 1 is confusing; it is unclear if there is any relationship between the merged rows in the image modalities and label modalities. - Time series data in the dataset correspond to visits, which have an average of 16 data points, and may have different frequencies and intervals. It is unclear how these data contribute to the classification of patients, and there are no benchmarks included in the paper based on the time series data. <doc-sep>The authors presented a longitudinal multi-modal dataset comprising 2D fundus images, 3D OCT scans, clinical labels and biomarkers collected from patients undergoing treatment for Diabetic Retinopathy or Diabetic Macular Edema. The authors have also presented baseline results for tasks such as DR/DME detection, disease classification, biomarker detection and clinical outcome prediction. - Longitudinal data covering multiple modalities - Facilitates research in several different directions: disease detection and classification, treatment progression, as well as understanding the relationship between multiple modalities. - The authors explicitly state this: the data is collected from one geographical location and could be biased. - Sample size in terms of the number of patients. <doc-sep>This paper presents OLIVES, an OCT and near-IR fundus dataset that includes clinical labels, biomarker labels, disease labels, and time-series patient treatment information from associated clinical trials. The dataset contains information on 96 eyes followed over a period of at least two years, with each eye treated for an average of 66 weeks and 7 injections. Benchmark experiments, benchmark models, and baseline results are presented. This dataset contains 96 eyes, an average of 16 visits per patient, and 1268 fundus eye images. Figure 1 clearly illustrates the clinical practice described in section 1. This dataset is collected over a long period of time. The long time span of the data allows future researchers to perform experiments on predictive models. Good level of detail on data collection and the hyperparameters used. The authors have discussed related work from different aspects. All mentioned research was properly referenced.
Compared with existing datasets, OLIVES contains a comprehensive set of modalities and is large enough in volume to be leveraged by ML algorithms. According to the paper, this is currently the largest and most diverse dataset of its kind. The entire paper is hard to follow for reviewers who are not experts in biology because of the extensive use of abbreviations of biological terminology. I understand that this paper is targeted toward an audience in the domain of biology/medicine. Still, to facilitate interdisciplinary research, it would be great if the authors could include in their appendix the corresponding full names of the abbreviations used in the paper. I would suggest the authors reorganize section 4.1. Table 2 presents experiments with increasing balanced accuracy, but section 4.1 presents the different tasks in a different order, which makes it hard for readers to follow. It would be great if the authors could indicate which ML model is used for which task in the tables. Table 3 is unclear at first glance. It would be clearer if the authors could discuss the first three models in detail in the corresponding section. Also, it would be better to mention that Table 6 is in the appendix. At first glance, I thought the authors forgot to present Table 6 in the paper. Also, in the last section of section 4, there is no "figure c.3" in appendix "c.3". The overall paper needs more careful proofreading. In the discussion section, the authors should consider elaborating more upon the ethical implications of this study. <doc-sep>The paper provides medical data from different modalities with a potentially positive impact on medical research and treatments. The authors train different ML models to analyze the ability of the presented data to detect the relevant diseases (DR/DME), as well as to predict the effects of the successive treatment and the final ocular state. They explain the technical details of their experiments and their outcomes. The paper & the presented dataset seem good, grounded and valuable to me, but it is hard for me, as someone without any medical background, to evaluate the medical analysis and justifications made in this paper. It is unclear to me why they trained the vision models used to test the abilities of the dataset with a ResNet-18 backbone, which is pretty small and old compared to 2022 ML standards. <doc-sep>The authors provided an ophthalmic dataset with OCT and near-IR fundus images, including clinical labels, biomarker labels, disease labels, and time-series patient treatment information from associated clinical trials. The authors introduced the OLIVES dataset to bridge the gap between existing ophthalmic datasets and the clinical diagnosis and treatment process. The paper is well written and correctly addresses the problem statement. The paper has introduced a dataset with three modalities and shown its scope in the field of ML. 1. The size of the dataset introduced is small. 2. The data is collected from two trials, PRIME and TREX. The authors have not mentioned the differences, which may/may not affect the model evaluation with the collected samples. 3.
The significance of clinical features such as BCVA should have been better explained to support the comparison across the modalities. 4. Clinical labels and biomarkers are associated with each eye. How is the relation across the two modalities established for the datasets? Per my understanding, there should be a 'Patient id' with 'left' and 'right' eyes and corresponding clinical labels and biomarkers associated with each sample (eye). 5. Do the mentioned three modalities correspond to the same patient? That is, are there three samples, one per modality, for each patient? 6. Results in terms of sensitivity and specificity are missing, which are important for evaluating an ML model for disease diagnosis (a brief sketch of how these are computed is given after this review). 7. In Table 3, many inputs used to train the model have shown chance-level accuracy for binary classification. This proves the insignificance of these features and contradicts the authors' claims. Similar results are found in Table 7 in the supplementary material. 8. Data collected from a single centre might encourage data bias.
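Point 6 above asks for sensitivity and specificity. For reference, a minimal sketch of how these would be computed from binary predictions (the labels below are toy placeholders, not the OLIVES data):

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return tp / (tp + fn), tn / (tn + fp)

# toy placeholder labels and predictions
sens, spec = sensitivity_specificity([1, 0, 1, 1, 0], [1, 0, 0, 1, 1])
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```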
The reviewers struggled to find a consensus for this paper. Concerns were raised about the applicability of the dataset due to domain shift, issues with the data collection, the size of the dataset, and the clarity of the paper. At the same time, I believe that, despite its size, longitudinal data for diagnostics is extremely valuable to the community. The authors have also made efforts to improve the readability; therefore, I recommend acceptance.
This paper extends the Wasserstein Autoencoder (WAE) work by splitting the divergence on the variational marginal into 2 terms, akin to what was done in TC-VAE. This enables directly controlling the explicit contribution of the total correlation term, which is likely to contribute to disentanglement more directly. They explore 2 variations of their model, based on different estimators of the TC term (TCWAE-MWS, using minibatch-weighted sampling; TCWAE-GAN, using a density ratio trick). Overall, I found this work to be a nicely complete exploration of a simple extension of an existing framework. They mostly rederive existing methods in the WAE framework, but results are promising and the paper addresses several datasets, compares well to baselines and seems well executed. It seems to lack a comparison with and discussion of a paper which seems directly related [Xiao et al 2019]. But I feel this is still a worthy piece of research to showcase at ICLR. Questions/comments: 1. A cursory search indicated the following paper which also addresses disentanglement with the Wasserstein Total Correlation: [Xiao et al 2019]. They use another estimator of the TC, instead opting for the Kantorovich-Rubinstein formulation. 1. Can you comment on how their work relates to this current paper? 2. A direct comparison would be rather interesting, but might be out of scope for a rebuttal. 2. Reconstructions for the TCWAE-MWS appear rather bad (Figures 13-19 in the Appendix), but Figure 1c doesn't seem to reflect that, which is slightly surprising. 1. Could you comment on this discrepancy? 3. Relatedly, the TCWAE-GAN disentanglement doesn't seem particularly exciting (metric-wise); would you still recommend using it instead of TCWAE-MWS? 1. It is still a clear improvement over vanilla WAE, so there's value to the work in this current state; but I'd wonder when one would prefer choosing this versus TCVAE? 4. It might be appropriate to discuss 2-Stage VAE [Dai et al 2019] and the associated family of models, which currently obtain really good results on more complex datasets. References: * [Xiao et al 2019] https://arxiv.org/abs/1912.12818 * [Dai et al 2019] https://arxiv.org/abs/1903.05789 <doc-sep>This submission proposes to add a KL term to the Wasserstein auto-encoder objective in order to improve its disentanglement capabilities. This combines the idea of Hoffman & Johnson (2016) of using a marginal KL term with the Wasserstein auto-encoder framework. Challenges regarding the estimation of the KL term are also addressed using techniques from two previous works. This results in an objective with two regularization parameters, whose superiority to existing approaches (using a single parameter) is not clear. Strengths: WAE with a disentanglement term was, as far as I know, not attempted before; the authors offer two well-justified techniques to do it. Weaknesses: (1) The work is very incremental: existing approaches are only combined. (2) Superiority to WAE without this term is not surprising, and I failed to see a clear superiority to competing unsupervised disentanglement approaches. (3) Given the emphasis on the Wasserstein distance of the original approach, it is also a bit disappointing to resort to a KL term for disentanglement. (4) Most importantly, comparison to simpler alternative KL (non-marginal) losses is absent as far as I can tell. That was for me the most interesting appeal of the paper.
Overall, I tend to think the paper would require a more exhaustive investigation of disentanglement approaches, contextualized to the Wasserstein distance and the issues raised regarding marginal versus non-marginal divergences. I recommend rejection. On this last point: It remains unclear to me whether the original hypothesis of the paper (page 3), that the index-code MI term of the KL divergence may be detrimental to disentanglement, is supported by the current study, and thus whether the extra technicalities required to eliminate it are worth the effort. Perhaps the authors could elaborate on that with an alternative objective close to the classical KL term, and thus easier to optimize? <doc-sep>This paper addresses disentanglement in the latent space of autoencoders. To this end, it combines ideas from four existing papers, namely the reconstruction loss of the Wasserstein autoencoder, the regularization term decomposition from the total correlation autoencoder, and entropy estimation using minibatch-weighted sampling or the density-ratio trick. This combination certainly makes sense, as it brings together methods that have previously been shown to work well in isolation. The main part of the paper is devoted to an empirical evaluation of the new autoencoder training procedure. The new method is compared against various baselines in terms of L2 reconstruction error and three disentanglement scores on four toy datasets. In addition, latent space traversals on 3D Chairs and CelebA are shown to qualitatively demonstrate the disentanglement capabilities of the proposed methods. Unfortunately, the description of the experiments is not very precise. * The role of the hyperparameter gamma remains unclear. In the ablation study, the authors simply set gamma=beta without further explanation, and in the comparison, they just state "we first tune gamma" and "for gamma >1, better disentanglement is obtained", again without further explanation. * In the comparison experiment, they report results for the values of beta that achieve "an overall best ranking on the four different metrics" without explaining what an "overall best ranking" is. Choices like this must not be taken lightly, as the analysis in "Why rankings of biomedical image analysis competitions should be interpreted with care" (Nature Communications 9: 5217, 2018) impressively demonstrates. * The experiment in figure 2 seems to have three degrees of freedom (the data instance x, the latent index i, and the size of the modification in direction z_i). However, only two degrees of freedom are shown, and it remains unclear from the caption and associated main text which ones. Moreover, I cannot deduce justification for the statement "all methods .. learn to disentangle, capturing four different factors" from the figure -- I do not see any obvious disentanglement. The bigger problem with the paper, however, is the question: What have we learned from these experiments? The rankings in table 1 are pretty inconsistent between different metrics, and the corresponding figure 3 appears to be cherry-picked, as Scream-dSprites is the dataset where the proposed methods perform best. I also do not agree with the claim that "TCWAEs achieve good disentanglement" on real-world datasets. Figure 4 shows severe entanglement between unrelated factors. For example, the size feature for the chairs also changes the type of chair. All features in the CelebA examples have a tendency also to change the background appearance.
The gender feature dramatically influences person identity in the MWS results, whereas it does not change the gender at all in the GAN variant. Substantial variations in person identity are also visible in most other examples. In summary, while the paper provides numbers, it lacks new insight. In light of mathematical proofs indicating that the true generative factors are generally unidentifiable in non-linear unsupervised settings (cf. the work of Aapo Hyvärinen and others), I am skeptical that heuristic trial-and-error investigations of disentanglement like the present one will yield interesting results. In a sense, this is also acknowledged by the authors, who merely state in the conclusion that "our methods achieve competitive disentanglement on toy data sets" -- that's not much, given the effort that went into the experiments.<doc-sep>Summary: The paper is motivated by the need for a better trade-off between the reconstruction and disentanglement performance of an autoencoder. The proposed solution is to use the KL as a latent regularizer in the framework of Wasserstein autoencoders, which allows for a natural interpretation of total correlation. The paper reads well; all related work and relevant background concepts are nicely integrated throughout the text. The experiments are exhaustive and the results show competitive performance w.r.t. disentanglement while improving reconstruction/modeling of the AEs. If a dataset is of a dynamical nature, how difficult would it be to extend the current version of TCWAE to dynamical systems? Do the authors have any intuition/hint on what should change to make their method applicable to dynamical setups? Would significantly changing the probabilistic model or modifying only the encoder/decoder architecture suffice? Minor: - Consider changing the naming of the baselines either in tables or figures to make them consistent: Chen et al. (2018) -> TCVAE; Kim & Mnih (2018) -> FactorVAE.
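For readers following the marginal-KL and index-code MI discussion in the reviews above, the split being referred to is, in my notation, the standard identity below (a sketch assuming a factorized prior p(z) = prod_j p(z_j) and writing q(z) for the aggregate posterior; it is not a formula copied from the submission):

```latex
D_{\mathrm{KL}}\big(q(z)\,\|\,p(z)\big)
  = \underbrace{D_{\mathrm{KL}}\Big(q(z)\,\Big\|\,\prod\nolimits_j q(z_j)\Big)}_{\text{total correlation}}
  + \underbrace{\sum\nolimits_j D_{\mathrm{KL}}\big(q(z_j)\,\|\,p(z_j)\big)}_{\text{dimension-wise KL}},
\qquad q(z) = \mathbb{E}_{p(x)}\big[q(z\mid x)\big].
```

When the KL is instead taken per sample, as in the standard ELBO, an additional index-code mutual information term appears; whether removing that term actually helps disentanglement is precisely the question the second review raises about marginal versus non-marginal divergences.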
There were both positive and negative assessments of this paper by the reviewers: it was deemed a well-written paper that cleanly rederives the TC-VAE in the Wasserstein autoencoder framework and that includes experiments comparing to competing approaches. However, there are two strong concerns with this paper: First, novelty appears to be strongly limited, as it appears to be a rederivation using known approaches. Second, two reviewers were not convinced by the experimental results and do not agree with the claim that the proposed approach is better than competing methods in providing disentangled representations. I agree with this concern, in particular as assessing unsupervised disentanglement models is known to be very hard and easily leads to non-informative results (see e.g. the paper cited by the authors from Locatello et al., 2019). Overall, I recommend rejecting this paper.
This paper proves a theoretical limitation of narrow-and-deep neural networks. It shows that, for any function that can be approximated by such networks, its level sets (or decision boundary for binary classification) must be unbounded. The conclusion means that if some problem's decision boundary is a bounded set, then it cannot be represented by such narrow networks. The intuition is relatively simple. Under the assumptions of the paper, the neural network can always be approximated by a one-to-one mapping followed by a linear projection. The image of the one-to-one mapping is homeomorphic to R^n, so that it must be an open topological ball. The intersection of this open ball with a linear hyperplane must include the boundary of the ball; thus it extends to infinity in the original input space. The critical assumptions here, which guarantee the one-to-one property of the network, are: 1) the network is narrow, and 2) the activation function can be approximated by a one-to-one function. The authors claim that 2) captures a large family of activation functions. However, it does exclude some popular activation families, such as polynomial activations, which have proven effective in multiple areas. As a concrete example, the simple function f(x1,x2) = x_1^2 + x_2^2 has bounded level sets, but it can be represented by a narrow 2-layer neural network with the quadratic activation. Overall, I feel that the result is interesting but it depends on a strong assumption and doesn't capture all interesting cases. It is also not clear how this theoretical result can shed insight on the empirical study of neural networks. <doc-sep>This is a very nice paper contributing to what I consider a relatively underexplored but potentially very promising research direction. The title of the paper in my opinion undersells the result, which is not only that "deep skinny neural networks" are not universal approximators, but that the class of functions which cannot be approximated includes a set of practically relevant classifiers, as illustrated by the figure on page 8. The presentation is extremely clear, with helpful illustrations and toy but insightful experiments. My current rating of this paper is based on assuming that the following concerns will be addressed. I will adjust the score accordingly after the authors' reply. Main: - A very similar result can be found in Theorem 7 of Beise et al.'s "On decision regions of narrow deep neural networks" from July 2018 ( https://arxiv.org/abs/1807.01194 ) Some differences: - The other paper considers connected whereas this paper considers path-connected components (the former is more general). - The other paper only considers multi-label classification, while this paper is relevant to all classification and regression problems (the latter is more general). - The other paper requires that the activation function is "strictly monotonic or ReLU" whereas this paper allows "uniformly approximable with one-to-one functions" activations (the latter is more general). The result in this paper seems slightly more general but largely similar. Can you please comment on the differences/relation to the other paper? - Proof of Lemma 4: "Thus the composition \\hat{f} is also one-to-one, and therefore a homeomorphism from R^n onto its image I_{\\hat{f}}". Is it not necessary that \\hat{f} has a continuous inverse in order to be a homeomorphism? I do not immediately see whether the class of activation functions considered in this paper implies that this condition is satisfied. Please clarify.
Minor: - Proof of Lemma 5: It seems g is assumed to be continuous in several places (e.g. "... level sets of g are closed as subsets of R^n ..." seems to assume that the pre-image of a closed set under g is closed, or later "This implies g(F) is a compact subset of R ..."). Perhaps you are assuming that M is a set of continuous functions and using the fact that the uniform limit of continuous functions is continuous? Please clarify. - On p.4: "This is fairly immediate from the assumptions on \\varphi and the fact that singular transition matrices can be approximated by non-singular ones." Is the second part of the sentence using the assumption that the input space is compact? Please clarify. - Second line in Section 5: i < k should probably be i < \\kappa.<doc-sep>This paper shows that deep "narrow" neural networks (i.e. all hidden layers have width at most the input dimension) with a variety of activation functions, including ReLU and sigmoid, can only learn functions with unbounded level-set components, and thus cannot be universal approximators. This complements previous work, such as Nguyen et al. (2018), which studies connectivity of decision regions, and Lu et al. (2017) on ReLU networks, in different ways. Overall the paper is clearly written and technically sound. The result itself may not be super novel, as noted in the related work, but it is still a strict improvement over previous results, which are often constrained to the ReLU activation function. Compared to other work on the approximation capability of neural networks, it tells us in a more intuitive and explicit way which classes of functions/problems cannot be learned by neural networks if none of their layers have more neurons than the input dimension, which might be helpful in practice. Given the fact that there is not much previous work that takes a similar approach in this direction, I'm happy to vote for accepting this paper. Minor comments: The proof of Lemma 3 should be given for completeness. I guess this can be done more easily by setting delta=epsilon, A_0=A and A_{i+1} = the epsilon-neighborhood of f_i(A_i)? page 7: the square brackets in "...g(x'')=[y-epsilon,y+epsilon]..." should be open brackets. page 7: "By Lemma 4, every function in N_n has bounded level components..." -> "...unbounded..."
The paper shows limitations on the types of functions that can be represented by deep skinny networks for certain classes of activation functions, independently of the number of layers. With many other works discussing capabilities but not limitations, the paper contributes to a relatively underexplored topic. The setting captures a large family of activation functions, but excludes others, such as polynomial activations, for which the considered type of obstruction would not apply. A concern is also raised that it is not clear how this theoretical result can shed insight on the empirical study of neural networks. The authors have responded to some of the comments of the reviewers, but not to all comments, in particular the comments of reviewer 1, whose positive review is conditional on the authors addressing some points. The reviewers are all confident and are moderately positive, positive, or very positive about this paper.
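To make the polynomial-activation caveat concrete, here is the counterexample raised in the first review above written out explicitly (my own formulation; the weights and the choice of sigma(t) = t^2 as the quadratic activation are just the obvious instantiation):

```latex
f(x_1, x_2) \;=\; x_1^2 + x_2^2 \;=\; \sigma\big(w_1^\top x\big) + \sigma\big(w_2^\top x\big),
\qquad \sigma(t) = t^2,\quad w_1 = (1,0)^\top,\quad w_2 = (0,1)^\top .
```

Here f is computed by a network whose hidden layer is no wider than the input, yet every level set {x : f(x) = c} with c > 0 is a circle and hence bounded. This does not contradict the paper's theorem, because sigma(t) = t^2 is not monotone and so, on any interval around the origin, it appears to fall outside the paper's assumption of activations that are uniformly approximable by one-to-one functions.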
This paper describes an approach to fine-tuning large language models which can improve zero-shot accuracy on unseen tasks. Overall well-written with compelling results, this paper describes a new language model (FLAN) and shows how it improves upon the zero-shot task performance of previous language models such as GPT-3. While the paper is lacking some additional analysis, I am hesitant to recommend extremely compute-intensive ablations due to the large size of the model (137B parameters). Strengths: - Considers a reasonably wide set of 62 datasets; although the inherent arbitrariness in dataset clustering was listed as a limitation, the clusters look quite reasonable to me, and the removal of overlapping datasets (e.g., "Reading Comprehension w/ Commonsense") seems appropriate. - Results are better than a strong Base LM baseline, as well as existing state-of-the-art models (GPT-3) - Overall the approach is intuitive and conceptually compelling - Highly relevant to ongoing work on language modeling, prompt tuning, and zero-shot learning Weaknesses: - From these experiments, it is unclear whether models are actually "learning to follow instructions" or just learning a very large space of tasks from the fine-tuning procedure. In other words, even though prompt variance is reported at inference time, the models could potentially perform just as well with nonsense or missing prompts during fine-tuning. As far as I can tell, no experiments that rule out this possibility exist. - Although qualitatively useful, the analysis in 4.1 does not conclusively show that the number of instruction tuning clusters aids performance, or that this trend is likely to continue with more clusters. Most of the gain could be acquired by the tasks which are most difficult, or most similar to the held-out task, and this analysis cannot disprove such an interpretation. A proper analysis would consider more held-out tasks and permutations of training data, but presumably this is prohibitively expensive. - The paper is missing important details about hardware usage and training time - Some possible issues which might be resolved by the additional questions below Additional Questions: - "For each dataset, we manually compose ten unique templates that use natural language instructions to describe the task for that dataset." Do you have unique prompts for each dataset or only for each dataset cluster? Based on a cursory look at the supplementary material, I would assume the latter. - I didn't fully understand the justification for the OPTIONS token. Are the fine-tuned models successfully putting (almost) all of their probability mass on the corresponding options? How is the Base LM evaluated (if it's not fine-tuned, presumably it doesn't learn how to handle these options)? - Figure 6A: why does the untuned model see worse performance with more parameters? Nits: - Figure 1 (Bottom) is possibly misleading, since AFAICT zero-shot FLAN underperforms few-shot GPT-3 on the majority of tasks - Not clear what "turning the task around" means for some tasks, or why this is a useful type of prompt diversity I give this paper a strong recommendation, in spite of some missing ablations. <doc-sep>The paper explores a simple and effective method to improve the zero-shot performance of pretrained language models. The authors take a 137B-parameter pretrained model and finetune it on multiple tasks verbalized via natural language instruction templates. As a result, the instruction-tuned model performs well on unseen tasks in the zero-shot setting.
Pros: 1. The problem addressed has high practical value: it tries to make large pre-trained language models more accessible for a range of NLP tasks. The "instruction tuning" idea will significantly reduce the cost of task-specific fine-tuning, labeled data, and prompt engineering compared to other approaches. 2. The method is simple and easy to implement. The authors carefully design the experiments to minimize the leakage between the fine-tuning and inference data. Given that, it still shows superior performance on different types of NLP tasks. The result on a specific task can be further improved by adapting with "prompt tuning" on labeled data, which shows that the instruction-tuning process does not drop much task-specific knowledge from the original pretrained model. 3. The analysis presented in the main paper and the appendix is thorough enough. The authors also discussed the limitation of the model when downstream tasks are more similar to language modeling tasks. Cons: There are still a few questions that could be addressed to make the analysis more comprehensive. 1. Have the authors tried to use the FLAN prompts on GPT-3 or Base LM, and how does the performance look? 2. Since instruction tuning adjusts all the parameters of the original pre-trained language model, there is a question about the potential impact of this tuning process: will it drop any knowledge of any task, which would be a disadvantage when the task's labeled data is available? In Analysis C in the appendix, it would be good to have results for tasks other than classification, such as summarization or question answering, and also to have a baseline where the Base LM model is fine-tuned directly with the task's labeled data (without prompt/soft-prompt). Overall, the paper proposed an interesting idea and showed strong empirical results, hence I vote for accepting. <doc-sep>The paper creates a dataset of over 60 NLP tasks described via instructions (using templates for each task) and finds this boosts zero-shot performance on unseen tasks. ---- Detailed comments: ---- - "For each dataset, we manually compose ten unique templates": Why not have templates per task cluster instead of per dataset? It is likely a relatively minor effect given the results from Appendix B, but it seems like it could slightly prevent overfitting - The ablation in 4.1 was great (number of clusters). Nit: I would have tried to move the (datasets per cluster/templates per dataset) ablation to the main body as well and shortened Section 3 - The 4.2 (scaling laws) ablation is perhaps the most interesting of all. - In figure 6A, why was performance not increasing for untuned models w.r.t. model size? This seems to contradict findings from Brown et al where larger models did better on essentially all tasks. Were there perhaps some poor datasets that happened to be in the held-in split (since the held-out tasks don't seem to have the same trend)? ---- Appendix: ---- I liked the section B ablations (as implied above). That more templates per dataset didn't help is particularly interesting and suggests some questions. You hypothesize that more templates don't help because "models at such scale do not easily overfit to a finetuning single task" - but my intuition is for an opposite explanation -- that the models at such scale easily memorize a small number of templates! One may even wonder if the instruction nature of the templates is helping at all.
From what I can tell, Appendix C on prompt tuning (which is very interesting) is maybe the primary evidence that the instructions are important. I think more could be done here; some ideas (there are probably better ways to test this): - Have templates that leave out "instructions": I would guess it wouldn't affect held-in task performance much, but would affect held-out tasks. - Consider HellaSwag/PiQA/etc., where FLAN underperformed few-shot and even zero-shot. One might hypothesize that if using a (suboptimal) template that is less natural for language modeling, zero-shot performance would suffer, but FLAN performance wouldn't - One might hypothesize that the "turn the task around" templates help more than the other, more straightforward templates that don't swap information between the prompt and response. - Easy but probably not great thing to try: held-out tasks with wrong/useless templates. A final thought: It's not obvious that using as many training examples per dataset as possible is optimal, given that the model could overfit to dataset-specific spurious correlations. This could be another area to investigate ---- Misc: ---- - UnifiedQA seems potentially worth citing as prior work Overall, the paper's idea is powerful (but of somewhat limited novelty) and the results are good (but not great). Its greatest strength IMO was the ablations. My biggest complaint is that it's not completely clear the instructions themselves are important at all - I suggest a few more experiments, though they don't seem crucial. <doc-sep>The paper proposes a simple method, "instruction tuning", to improve the zero-shot learning capability of large language models, which 1) annotates prompts for a wide range of tasks and then 2) fine-tunes the model to "answer/respond to" those prompts. The empirical results are impressive: after instruction tuning, the 0-shot performance is better than GPT-3 (0-shot, sometimes few-shot) on a wide range of datasets; nevertheless, on datasets with formats already similar to language modeling, the performance gain is negligible or even negative. The paper also made a few other observations: 1) performance benefits from the number of task clusters; 2) instruction tuning is only beneficial when the model size is large enough; and 3) few-shot learning still helps. While the method is a simple and straightforward scaling up of concepts and ideas from prior works (e.g. Zhong et al., Adapting ...; Mishra et al., Cross-task generalization ...), the empirical results are thorough and impressive (outperforming GPT-3 with a slightly smaller model). The analyses also help us understand when this method would work and inform future research directions. Below are my concrete questions and comments: **Additional Tasks Results (3.4)** In Appendix A.1, the paper mainly draws conclusions based on comparisons between GPT-3 and FLAN, which I do not think are fair: GPT-3 and FLAN differ in model size and pre-training data distribution. Instead, I think Base LM vs. FLAN might be a better comparison between an "off-the-shelf LM" and an "instruction-tuned" model (though it won't change the conclusion). It is also worth pointing out in the main paper that for most of the additional tasks, even though instruction tuning does not lead to higher accuracy, the performance of FLAN is still at least comparable (e.g. <1% worse, and the difference is generally negligible) to Base LM 0-shot.
The only outlier seems to be ReCoRD, where the performance drops significantly after instruction tuning, and this probably deserves some discussion. Also, I might have missed it - for the Base LM 137B zero-shot result, is it on the average template, or the best template? **Number of Task Clusters (section 4.1)** For Figure 5, can you add the untuned model to the curve at x-axis=0 (0 task clusters)? This can help us understand how much even 1 cluster (e.g. summarization) may help. **Explanation for scaling (section 4.2)** It is an insightful empirical result that instruction tuning only works when the model size reaches 68B. However, I am not entirely sure about the potential explanation of "model capacity". There might be two potential explanations for this phenomenon: 1) "model capacity": as the paper has mentioned, smaller pre-trained models do not have enough model capacity and underfit the instruction tuning data, and 2) "better OOD generalization": better-quality pre-trained models have higher OOD generalization ability (and OOD accuracy) and are less likely to "overfit" to in-distribution data. I personally find the second explanation more convincing. For example, Sanh et al. (https://arxiv.org/abs/2110.08207) find that even models with only 11B parameters can generalize to unseen tasks, using T5 (MLM) and a larger set of prompts. The use of MLM objectives (might) improve the pre-training quality, while more prompts reduce the "overfitting to in-domain data" issue. I appreciate the fact that the authors explicitly state the model capacity hypothesis as a conjecture rather than a solid explanation. It would be great if the authors could support the explanation further with more empirical evidence. On the other hand, however, since the results from Sanh et al. came out only 2 weeks ago, I would not change the score based on the response to this question. **In-context Few-shot vs. Fine-tuned Few-shot (Section 4.3)** Can the authors compare "fine-tuning/prefix-tuning an instruction-tuned model with 16 examples" (Appendix C but with only 16 examples) with "in-context prompting" (in 4.3 of the main paper), similar to Chen et al. (https://arxiv.org/abs/2110.07814)? This would further inform us how we should use the few-shot learning examples for larger language models: put them in context, or fine-tune? Again, since the comparison of Chen et al. came out only 2 weeks ago and the paper limit is 9 pages, I would not change the score based on the response to this question. **Others** The results of Appendix C are interesting and potentially impactful - this might imply that instruction-tuned models will become the new "base" models for the pretraining-finetuning paradigm. Is it possible to briefly mention it in the main paper as well (and redirect the readers to the appendix for the full results)? It might be too late to change the name, but "Finetuned LAnguage Net" (FLAN) is uninformative, since it does not capture any unique aspect of this method. What does "LAnguage" mean here, "natural language instruction" or "language model"? If it is the former, then directly including the word "instruction" might be better; and hopefully it's not the latter, since even fine-tuned BERT on SST-2 counts as a fine-tuned language model ... **Typo** Intro: Instruction tuning is "a" simple method that, as …. Conclusion: Moreover, our work supercedes recent work such "as" While the method is not new, the empirical results are strong and comprehensive.
Though I disagree with the interpretation of some empirical results, overall the additional analyses bring us further insight into what methods work for very large language models (i.e., >100B dense models). I highly recommend the paper be accepted to ICLR 2022.
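For concreteness, the kind of instruction template the reviews above keep referring to looks roughly like the following (an invented NLI-style example with an OPTIONS suffix; it is illustrative only and not copied from the paper's appendix):

```python
# Hypothetical instruction template in the spirit of FLAN (invented example).
template = (
    "Premise: {premise}\n"
    "Hypothesis: {hypothesis}\n"
    "Does the premise entail the hypothesis?\n"
    "OPTIONS:\n- yes\n- no"
)

example = {
    "premise": "The dog is sleeping on the couch.",
    "hypothesis": "An animal is resting.",
}
prompt = template.format(**example)  # the model is fine-tuned to answer "yes" or "no"
print(prompt)
```

During instruction tuning, many such templates (phrased in different ways, some "turning the task around") would be instantiated per training example; at inference time an unseen task is described with a template of the same flavor.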
This paper examines the extent to which a large language model (LM) can generalize to unseen tasks via "instruction tuning", a process that fine-tunes the LM on a large number of tasks with natural language instructions. At test time, the model is evaluated zero-shot on held-out tasks. The empirical results are good, and the 137B FLAN model generally outperforms the 175B untuned GPT-3 model. All reviewers voted to accept with uniformly high scores, despite two commenting on the relative lack of novelty. The discussion period focused on questions raised by two reviewers regarding the usefulness of fine-tuning with instructions vs. multi-task fine-tuning without instructions. The authors responded with an ablation study demonstrating that providing instructions during tuning led to large gains. Overall, the paper's approach and detailed experiments will be useful for other researchers working in this fast-moving area of NLP.
This paper focuses on an attack that tries to recover text data from gradients; this type of attack is a particular threat for federated learning, where the central server may recover private client data through the gradients. Based on previous work that optimises the input to minimise the distance between gradients, this paper further proposes to alternate continuous optimization and discrete optimization that is guided by a language model prior. The discrete optimization is claimed to help obtain text data that is more like fluent language. The resulting approach, LAMP, greatly outperforms previous approaches on three binary-classification benchmarks. ### Strengths: 1. The proposed approach is novel and technically sound. The motivation and contribution are clear – from the examples in Table 2 and the quantitative improvements, it does seem like previous approaches fail to yield grammatical text, while the proposed discrete optimization step seems to help a lot. 2. The empirical results are strong, especially with batch size > 1. The ablation study is appreciated as well. 3. The paper is well-written. ### Weaknesses: 1. *(minor)* The proposed approach adds several additional hyperparameters to tune, for example, $\\alpha_{lm}, \\alpha_{reg}, n_c, n_d, n_{init}$. This surely adds complexity to the approach and may make the proposed attack difficult to apply in practice. While I appreciate the detailed hyperparameter paragraph in Lines 268-274, I think it would be better to report the hyperparameter selection ranges as well, so that readers could have an idea of how much effort is required to tune these hyperparameters. 2. *(major)* I am a bit worried that the model comparison is only conducted on a randomly selected set of 100 sentences. 100 sentences sounds too few to me, and I am not sure how robust the model ranking is when based on only 100 random examples. I feel this point should be justified properly, either by constructing a larger test set, or by using different random seeds to generate different sets of 100 test examples and testing on each of them. 3. *(minor)* While Line 239 mentions illustrating the generality of the proposed approach with respect to model size, I don't think this paper really meets that goal by only using the tiny and base sizes of BERT – at least BERT-large should be included to have relatively complete coverage. I understand that the authors may be limited by resources to use larger models, which is fine. I just wanted to point out that the current experiments are not sufficient to indicate generality with respect to model size (this is rather a minor point anyway). 4. *(major)* Lines 257-259 mention that the baselines use 2500 iterations while the proposed approach uses 2000. Is *it* in Algorithm 1 equal to 2000 (and what are the values of *n_c* and *n_d*)? How did you choose these numbers? Do the baselines fully converge? I would like to see more justification to show that the comparison is fair, because LAMP employs a nested for loop in its optimization (while the baselines do not?), and I feel the number of total optimization steps (and the cost/time) in LAMP is actually larger than for the baselines, right? 5. *(minor)* Line 272 mentions that LAMP additionally adopts a two-step initialization procedure, which seems important to me. I would like to see ablation results on this two-step init in Table 3 to know how much of the improvement over the baselines comes from this initialization.
``` After author response: The author response addressed most of my concerns, and I would like to increase my score to 7 given that the authors will update the paper accordingly as promised. ``` Yes <doc-sep>The authors propose a model for recovering user data from gradient updates in a federated learning system for text classification. They achieve this by alternating continuous gradient-based optimization with discrete heuristic-based token reshuffling. The authors show that the proposed model outperforms methods that use only gradient-based updates to the tokens. The primary contribution of the paper is the alternation between gradient-based updates and token reshuffling for reconstructing user text. I am not aware of such an approach having been attempted for adversarial attacks on text. Hence, it can be assumed to be novel. The paper is well-written and easy to follow. The ablation studies confirm the importance of token reshuffling for learning a good attack. The ablation studies also show the importance of the L1+L2 loss (although this was proposed in a different paper). From a novelty perspective, this alternation between token reshuffling and continuous updates for attacking text classifiers appears to be novel. However, I am not an expert in this field. Some parts of the paper are unclear. For instance, the authors mention that they use Adam for learning the embeddings for each input during the continuous optimization phase. However, they do not mention how they compute the gradient of the loss with respect to the input embeddings x*. The gradient computation requires second-order derivatives, which are not discussed in the paper at all. It is unclear how token reshuffling combined with continuous gradient-based updates influences the descent direction. Specifically, I am wondering how the performance would be affected if token reshuffling were only performed at the end of all gradient-based updates. The limitations haven't been discussed.
My understanding is that the random transformation and the use of GPT models make the output more natural. However, it is really hard to tell from the current structure of the paper. 2. the method can only be applied to classification where the label is known, which seems pretty limited. It might be more natural to have unlabeled data in a federated learning setting, as one might not be able to annotate the user text. The authors did not address limitations and potential negative impact. There are several points worth mentioning: 1. Work on adversarial attacks could be exploited by hackers. 2. The method is limited to classification data.
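To make the alternating scheme discussed in these reviews easier to picture, here is a toy, self-contained sketch of the loop structure. Everything here (the linear "victim" model, the crude fluency score standing in for GPT-2 perplexity, and all hyperparameters) is my own stand-in; it illustrates the continuous/discrete alternation, not the actual LAMP algorithm, and this toy setup will generally not recover the target text:

```python
import torch

torch.manual_seed(0)
V, D, L = 50, 16, 6                         # toy vocab size, embedding dim, sequence length
vocab_emb = torch.randn(V, D)
W = torch.randn(D, 2, requires_grad=True)   # toy linear "victim" classifier weights
target_tokens = torch.randint(0, V, (L,))
label = torch.tensor([1])

def victim_grad(x):
    """Gradient of a toy classification loss w.r.t. W, i.e. what the server observes."""
    logits = x.mean(dim=0, keepdim=True) @ W
    loss = torch.nn.functional.cross_entropy(logits, label)
    return torch.autograd.grad(loss, W, create_graph=True)[0]

observed = victim_grad(vocab_emb[target_tokens]).detach()

def fluency(tokens):
    """Stand-in 'language model' score (lower is better); a real attack would use LM perplexity."""
    return (tokens[1:] - tokens[:-1]).abs().float().mean()

x = torch.randn(L, D, requires_grad=True)   # continuous relaxation: one embedding per position
opt = torch.optim.Adam([x], lr=0.05)
for _ in range(30):
    # (1) continuous phase: make the gradient induced by x match the observed gradient (L1 + L2)
    for _ in range(20):
        opt.zero_grad()
        diff = victim_grad(x) - observed
        (diff.pow(2).sum() + diff.abs().sum()).backward()
        opt.step()
    # (2) discrete phase: project to the nearest tokens, then try fluency-improving swaps
    tokens = torch.cdist(x.detach(), vocab_emb).argmin(dim=1)
    for _ in range(10):
        cand = tokens.clone()
        i, j = torch.randint(0, L, (2,))
        cand[i], cand[j] = cand[j].item(), cand[i].item()
        if fluency(cand) < fluency(tokens):
            tokens = cand
    x = vocab_emb[tokens].clone().requires_grad_(True)  # restart the continuous phase from tokens
    opt = torch.optim.Adam([x], lr=0.05)

print("reconstructed:", tokens.tolist(), " target:", target_tokens.tolist())
```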
This paper describes a novel method to recover the input text based on the computed gradient. This is important in the context of federated learning, which promises to enable learning through gradient sharing while keeping the input text secret. The findings of the paper demonstrate that gradients are sufficient to recover significant parts of the input text, questioning the federated learning premise, at least in the context of large language models. The approach is novel and technically sound. The empirical results are convincing. The paper is well-written and clear. Given the current trend toward growing model size, it would be great if the paper could further scale the experimental results to larger models.
This paper investigates a semi-supervised continual learning (SSCL) setting and proposes a new method called DistillMatch for this setting. The major contributions are: (1) The authors carefully design a realistic SSCL setting where object-object correlations between the labeled and unlabeled sets are maintained through a label super-class structure. They then develop the DistillMatch method, combining knowledge distillation, pseudo-labels, out-of-distribution detection, and consistency regularization. (2) They show that DistillMatch outperforms other existing methods on the CIFAR-100 dataset, and ablation study results are also shown. However, there are some downsides that should be considered before publication. (1) In the abstract the authors claim that they can significantly reduce the memory budget (of labeled training data) by leveraging unlabeled data (perhaps with large volume). This motivation seems contradictory. (2) From a methodological viewpoint, the proposed DistillMatch method is just a combination of existing methods (listed above). So where is the novelty of this "new" method? (3) In the experiments, the chosen baseline algorithm is very weak. There are some strong baseline methods such as GEM, A-GEM, and ER. So I wonder about the real improvement over state-of-the-art methods for continual learning. (4) The label super-class structure that exists in CIFAR-100 has been used in their experiments. But this is not very common for other, more realistic datasets such as miniImageNet. If there is no super-class structure, we don't know how to apply the proposed DistillMatch method. In summary, I think this semi-supervised continual learning setting is interesting, but the proposed DistillMatch method cannot persuade me that it is a novel, significant contribution to this problem. So at the present time I believe there is much room for the authors to improve their method before publication. <doc-sep>- Summary: This paper proposes class-incremental learning with unlabeled data correlated to labeled data, and a method to tackle it. The task can be considered as a variant of [Lee et al.], which has no assumption on the unlabeled dataset, while this paper explicitly assumes a correlation between the labeled and unlabeled datasets. The proposed method is inspired by state-of-the-art class-incremental learning, semi-supervised learning, and out-of-distribution (OoD) detection methods: local distillation [Li and Hoiem], OoD detection [Hsu et al.], consistency regularization and pseudo labeling (or hard distillation) [Sohn et al.], and loss balancing based on class statistics [Lee et al.]. Experimental results support that the proposed method outperforms prior works on the proposed task. - Reasons for score: 1. Extending continual learning to the semi-supervised setting is natural, given that the extension to self-taught learning has already been considered in [Lee et al.]. However, I cannot agree that semi-supervised learning is more realistic than self-taught learning, which is emphasized throughout the paper 18 times. In an early work, [Raina et al.] proposed self-taught learning to make the scenario of learning with unlabeled data "widely applicable to many practical learning problems." [Oliver et al.]
also argued that "(unlabeled data from out-of-distribution) violates the strict definition of semi-supervised learning, but it nevertheless represents a common use-case for semi-supervised learning (for example, augmenting a face recognition dataset with unlabeled images of people not in the labeled set)." I am not saying that semi-supervised learning is unrealistic, but the argument in this paper sounds overclaimed. I believe both semi-supervised and self-taught learning are realistic in some cases. I also recommend providing real-world scenarios in which the proposed task (a correlation between labeled and unlabeled data exists and no memory for a coreset is available) is useful in practice. 2. The proposed method is not novel; it is essentially a combination of state-of-the-art methods from relevant tasks. But I do not discount this much, because this work would be valuable as the proposed task is interesting but not investigated before. However, the name of the task might need to be changed, because a similar name, "semi-supervised incremental learning", is already taken by a kind of semi-supervised learning which incrementally incorporates unlabeled data into training. 3. Though the improvement over prior class-incremental learning methods is impressive, the overall performance is still too low. In fact, the scale of the experimental setting is too small, so I doubt it is scalable. All experiments are limited to CIFAR-100, and even then only 20% of the training data are used as labeled data. Frankly, in this small-scale setting (in terms of both the number of data points and the image resolution), keeping all data is just fine, as the coreset size is negligible compared to the model size. I recommend experimenting in large-scale settings, e.g., on ImageNet. Also, I recommend comparing against the oracle setting as well, which keeps all previous training data. 4. In addition to the small-scale experimental setting, the architecture is larger than in the prior work [Lee et al.]: WRN-28-2 vs. WRN-16-2. In the worst-case scenario, it is possible that the best performance of the proposed method simply comes from the complexity of their learning objective, i.e., all methods overfit to the training data, but the proposed method did not have enough updates to overfit to them. 5. In Figure 3, why do GD and DM not have a coreset? I think there is no reason to give them an unfair constraint. I recommend drawing curves with respect to an increasing coreset size for those methods as well. 6. Could you provide results on the self-taught learning setting like [Lee et al.]? It would also be interesting to see the performance of the proposed method in that setting. 7. The hyperparameter sweep results provided in Table 4 are either the minimum or maximum of the range, so you could improve the performance by enlarging the range. - Minor Comments: 8. Subscripts of theta are often dropped. Is theta equal to $\\theta_{n,1:n}$? 9. "the parameters of no more than three models" -> I believe it is four, because you need to temporarily store gradients during training. 10. $\\hat{q}$ is not a probability vector, which makes eq. (2) mathematically ill-defined. 11. Citation format issue: you can use \\citet for textual citations and \\citep for parenthetical ones. 12. Typo on page 5: statoe -> state 13. Table 4: what is TPR here? The threshold for consistency regularization? [Raina et al.] Self-taught Learning: Transfer Learning from Unlabeled Data. In ICML, 2007. [Li and Hoiem] Learning without Forgetting. In TPAMI, 2017. [Oliver et al.] Realistic Evaluation of Deep Semi-Supervised Learning Algorithms.
[Lee et al.] Overcoming catastrophic forgetting with unlabeled data in the wild. In ICCV, 2019. [Hsu et al.] Generalized ODIN: Detecting out-of-distribution image without learning from out-of-distribution data. In CVPR, 2020. [Sohn et al.] FixMatch: Simplifying semi-supervised learning with consistency and confidence. In NeurIPS, 2020. **After rebuttal** I'd like to thank the authors for their efforts to address my concerns. They have addressed most of them, so I increased my score from 5 to 6. However, there are two concerns that couldn't be resolved during the rebuttal period: (1) I am still not sure if the proposed task is practical. At a glance it looks realistic, but I couldn't find a detailed scenario that can only be solved by the proposed task. Any real-world scenario I can think of is closer to [Lee et al.], which is prior work to this paper. The authors provided an exploring-robot example in the thread of responses, but I think [Lee et al.] fits the provided example better. I recommend the authors find a concrete use-case in real-world applications that can only be solved by the proposed setting (or at least one where [Lee et al.] is not applicable; in the revised intro, you may emphasize that there are some real-world problems for which [Lee et al.] is not applicable but yours is). R1 and R4 seem to have a similar concern. (2) The scale of the experiments is too small. As CIFAR-10/100 have a limited amount of data for your purpose, you could borrow some data from Tiny Images (FYI, CIFAR-10/100 are a subset of the 80M Tiny Images) or focus on ImageNet. I am okay with the lack of novelty of the proposed method. For a newly proposed task, I think proposing a simple and effective baseline is good enough. However, because of the two concerns above, I cannot strongly agree with its acceptance. <doc-sep>The paper presents a novel semi-supervised continual learning (SSCL) setting, where labeled data is scarce and unlabeled data is plentiful. The proposed framework is built on pseudo-labeling, consistency regularization, Out-of-Distribution (OoD) detection, and knowledge distillation in order to reduce catastrophic forgetting in the proposed setting. The paper is in general clear and well-written. The contributions are clearly highlighted and the proposed approach is conveniently compared with other state-of-the-art methods, demonstrating its superiority. Positive aspects: - the definition of a realistic, semi-supervised setting for continual learning - a novel approach for continual learning in order to cope with 'catastrophic forgetting' - the proposed approach is memory efficient, since it does not need exemplars to replay past tasks Negative aspects: - the OoD detection implemented in this paper rejects the unknown samples. In other words, all unknown samples are considered a single class. It would have been a plus to distinguish between several unknown classes and somehow introduce them in the framework - the lack of a recalibration step after a number of tasks (in the case of pseudo-labeled samples) could lead to an undesired error propagation which is not quantified in the paper However, I have some questions: 1. What is the relationship between the 'fi' and 'theta' models (section 4)? Are they completely separate or is there a relationship between them? For instance, when 'theta' is extended with a new task, is 'fi' extended accordingly? Or is 'fi' trained off-line from the beginning (with all tasks)? 2. There are several different sources of error: distillation, pseudo-labels...
Do you perform any kind of system re-calibration? After how many tasks? In other words, have you studied the error propagation of pseudo-labeled data? Or, at some point, do you have a human-in-the-loop to correct misclassifications? What is the misclassification error of the pseudo-labeled samples? 3. Do you assume that labeled and unlabeled data come from different distributions, or do you have a single distribution which is divided into labeled and unlabeled data at the beginning of the process? 4. Does your scenario foresee that, when learning a new task T, all the previous tasks (1..T-1) are represented in the unlabeled data, or only a subset? (i.e., a kind of selective replay) 5. When the number of tasks increases, does the number of unlabeled data per task remain constant or is it scaled accordingly (i.e., reduced)? 6. It would be interesting to test your approach in a real-world scenario, e.g., robot navigation.<doc-sep>This paper comes up with a novel scenario where unlabeled data are available in addition to labeled data in the continual learning setting. ### Overall - Based on my understanding, the major contribution is the proposal of a task scenario, i.e., an experimental setting. DistillMatch itself is an incremental modification of previous work. - The task setting sidesteps the problem of learning under non-stationarity rather than solving it. - Further, this setting potentially makes the task easier for the proposed method. To verify whether this is true, more information is needed. - The presentation of the paper needs polishing; I list a few points below. ### Pros - The novel scenario of semi-supervised continual learning is proposed. The argument is that in several realistic scenarios old data are often re-observed without labels (the furniture labeling example). Therefore, instead of storing a coreset, one may make use of the unlabeled data for pseudo-rehearsal/distillation. It is reasonable to make use of this when the assumption is true. - With the setting the authors propose, the DistillMatch method is able to perform better than previous methods. ### Cons 1. The novelty mostly comes from the task scenario; the DistillMatch method is incremental. 2. Although SSCL is a new scenario and the authors argue it is more realistic, IMO taking this assumption sidesteps the problem of continual learning rather than solving it. The central problem of continual learning IMO is to learn under a non-stationary distribution; the assumption made in this submission makes the distribution more stationary. 3. It is true that this assumption should be utilized when available. However, the only dataset used is manually constructed from CIFAR-100, contradicting the initial motivation to move towards a more realistic scenario. 4. There's a lack of information on how the compared methods are adapted to the new scenario. I searched the supplementary but failed to find detailed documentation. With the given information, it is hard to tell whether the comparison is fair. My concerns are the following. **increasing from 3 -> 4 as this point is resolved in the rebuttal** - In the RandomClasses setting, it is stated that no coreset is used; if the compared methods depend on a coreset to replay, that would be unfair. If that's the case, the only conclusion we can draw is that replay is better than no replay, which seems trivial to me. - GD depends on internet-crawled data; is it replaced with the unlabeled data, since that is available in the experimental setting? If not, then I think it is simply the setting that favors DistillMatch.
- With the above said, I suggest the authors clearly list the objectives, replay buffer sizes, or even pseudo-code for each of the compared methods and their own method in a table, which will help the reader identify which major component of the proposed method is making the contribution. Regarding quality and clarity, I found myself confused and making guesses at times while reading. To list a few: - introduction paragraph 2, ... to determine which unlabeled data is relevant to the incremental task ...: I guess the incremental task means learning the newly observed data, but then for rehearsal we pick the unlabeled data that comes from the distribution of past tasks. - section 1, ... save up to 0.23 stored images per processed image over naive rehearsal (compared to Lee) ...: here it seems Lee et al. is the naive rehearsal, but then "which only saved 0.08" confuses me; it seems to be saying Lee saves 0.08 compared to naive rehearsal. - section 3, ... where data distributions reflect object class correlations between, and among, the labeled and unlabeled data distributions ...: not enough information to infer what "reflect" and "object class correlation" mean here. - section 4, ... Let S_{n-1} denote the score of our OoD detector for valid classes of our pseudo-label model ...: what the "valid classes" are needs to be clarified. As I understand it, S_{n-1} measures how likely the unlabeled data is to be in the distribution of past tasks. - Super class / Parent class are not defined clearly enough.
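For concreteness, here is a minimal sketch of how I understand the pseudo-labeling-plus-distillation recipe referred to in the points above. The confidence-threshold gating (standing in for the OoD score S_{n-1}), the function names, and the loss combination are my own assumptions for illustration, not the authors' exact formulation.

```python
# Minimal sketch (my reading, not the authors' code): pseudo-label unlabeled data
# with the previous-task model, keep only samples whose confidence/OoD-style score
# suggests they belong to past tasks, and distill on them instead of storing a coreset.
import torch
import torch.nn.functional as F

def sscl_losses(model_new, model_old, x_labeled, y_labeled, x_unlabeled,
                ood_threshold=0.8, temperature=2.0):
    # Supervised loss on the (scarce) labeled data of the current task.
    sup_loss = F.cross_entropy(model_new(x_labeled), y_labeled)

    with torch.no_grad():
        old_logits = model_old(x_unlabeled)                  # frozen previous-task model
        old_probs = F.softmax(old_logits / temperature, dim=1)
        conf = old_probs.max(dim=1).values                   # confidence used as an OoD-style score
        keep = conf > ood_threshold                          # discard likely out-of-distribution samples

    distill_loss = torch.tensor(0.0)
    if keep.any():
        new_logits = model_new(x_unlabeled[keep])
        # Distill the old model's soft predictions on the retained unlabeled samples.
        distill_loss = F.kl_div(F.log_softmax(new_logits / temperature, dim=1),
                                old_probs[keep], reduction="batchmean") * temperature ** 2
    return sup_loss, distill_loss
```

A table in the paper making explicit which of these components each baseline receives would answer most of the fairness questions above.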
This paper proposes a semi-supervised setting to reduce the memory budget in replay-based continual learning. It uses unlabeled data from the environment for replay, which requires no storage, and generates pseudo-labels where the unlabeled data is related to the labeled data. The method was validated on the proposed tasks. Pros: - The semi-supervised continual learning setting is novel and interesting. - The proposed approach is memory efficient, since it does not need exemplars to replay past tasks. Cons: - The scale of the experiments is small, and evaluation in real-world environments is lacking. - The novelty is limited, because the method is a combination of existing techniques: pseudo-labeling, consistency regularization, Out-of-Distribution (OoD) detection, and knowledge distillation. - The comparison might not be fair due to different settings. The authors addressed the fairness and scalability concerns with additional experiments and leave some of the reviewers' suggestions for future work. R3 had a concern about the error propagation of pseudo-labels which I also share. The authors agreed that this is a challenge for all CL methods. In summary, the reviews are mixed. All reviewers agree that the semi-supervised continual learning setting is novel and interesting, and some have concerns about the scalability and novelty of the method which I also share. So at the present time I believe there is much room for the authors to improve their method and experiments before publication.
<doc-sep>This paper aims to construct a GAN that can be applied to non-i.i.d. federated data. To achieve this aim, the authors propose an extension of Bayesian GAN called expectation propagation prior GAN (EP-GAN), which obtains a partition-invariant prior using expectation propagation. In particular, the authors introduce a closed-form solution for efficiency. The effectiveness of the proposed method is demonstrated using non-i.i.d data, including toy, image, and speech data. [Strengths] 1. As far as I know, applying a GAN to non-i.i.d federated data has not been actively studied in previous work and is an interesting research topic. 2. The proposed method is solid and mathematically grounded. Detailed derivations are also provided in the supplementary materials. 3. The effectiveness of the proposed method is demonstrated using various data, including toy data, image data, and speech data. [Weaknesses] 1. Improving the calculation speed of EP is included among the main contributions. However, its validity is not empirically demonstrated. At the bottom of Section 4.1, the difference in the calculation order is discussed; however, I cannot tell how significant the difference is in practice. In particular, I am curious whether the EP calculation is dominant in the total framework, including the GAN training. 2. It seems that in the baseline models, simple parameter averaging is applied to all the layers when FedAvg is used. However, this can easily be improved by introducing client-specific parameters using a conditional module (e.g., conditional batch normalization), as used in a typical conditional GAN. A comparison with such a baseline would be interesting. 3. In practice, it is assumed that the number of clients is considerably large. However, in the experiments, the number of clients is relatively small (on the order of 10). Therefore, in the current manuscript, I consider that the effectiveness in a practical setting is not sufficiently demonstrated, although some benchmark performance is provided. 4. In Appendix F.3.4, a statistical significance test on the inception score is provided; however, a statistical significance test on the FID is not presented in the current manuscript. If the authors intend to emphasize the utility of Mixture-EP-ProbGAN, this test should also be conducted. This paper addresses an interesting problem and proposes a reliable method that is mathematically grounded. However, I still have some questions regarding the experimental evaluation. I expect the authors to clarify them in the rebuttal. <doc-sep>This paper proposes a method for learning Generative Adversarial Networks (GANs) for non-i.i.d data in a federated learning setting. This is accomplished with the use of a partition-aware prior via an Expectation Propagation (EP) algorithm embedded into a Bayesian GAN setting. Additionally, it proposes a closed-form solution for EP updates, aiding efficient federated learning. The claims are substantiated with experiments on both synthetic and real data. The paper is well written and the use of an EP prior for this purpose is a fine idea. However, I would like the authors to address the following questions. 1. While Figure 1 is a good way to motivate the problem, it would be good to supplement all figures with some quantifiable metrics (at least for a few). It is difficult otherwise to ascertain the gains. 2. In the Related work section, the two large paragraphs on Federated Learning and Bayesian GANs seem to be disconnected.
A connecting paragraph between them would be good. 3. Keeping only a small paragraph on Federated Learning and deferring the details to the appendix is not a good idea. Some details on federated learning here would be apt. 4. I am not an expert on EP but I am curious - why can't the approximated factor come from a non-exponential family? 5. Related to the previous question, why are Gaussian distributions used for the approximate factors? What bearing does this have on the entire method? 6. I didn't quite follow the need for a sigmoid in Eq. 5. 7. Is this method generic enough to be applied to other Bayesian GAN settings beyond the ones considered here? 8. The other related question is, can this EP prior help in the case of i.i.d data as well? 9. The other important question I have is - it is not clear from the paper why the proposed method should aid in handling i.i.d data in the federated learning setting. There is empirical evidence, but can there be a more principled way of describing the same? 10. It would be good to include an i.i.d case too in Table 1, if possible. 11. More baselines could be added for the speech experiment. In addition, the task is not well defined in section 5.3. This paper addresses a very useful problem of federated GAN learning for non-i.i.d data. It is moderately novel to be considered for publication in ICLR. <doc-sep>The goal of this paper is to train a Bayesian GAN on non-i.i.d. federated data. Specifically, the authors propose to adopt the newly-introduced expectation propagation (EP) prior, being partition-invariant, to address the non-i.i.d. federated challenge. Experiments on synthetic and real datasets are conducted. The writing should be improved significantly; from the current manuscript it is quite challenging to understand what's really going on. In the Introduction, how and where do you ``identify the mode collapse problem of Bayesian GANs under non-i.i.d. cross-silo unsupervised learning scenarios?'' What's the novelty over existing methods for training GANs under federated learning settings, e.g., [1-2]? In the paragraph before Eq 1, $q(\\boldsymbol \\theta)$ is not a distribution, i.e., it's not normalized, right? Similar questions for the following $q_{ep}^{t}(\\boldsymbol \\theta_G)$. In Eq 5, how will you train the auxiliary neural network $f$ in federated learning settings? Also in Eq 5, it seems one should specify an $f$-function for each $\\theta$ parameter of the GAN generator? How expensive is the proposed method, both in space and time? The notations starting from Eq 5 are quite confusing. Eqs 6-7 are not easy to follow. In the paragraph following Eq 14, I cannot see clearly why ``Theorem 4.1 shows that we are able to analytically approximate the prior of the global data distribution with the datasets stored on different clients while following the cross-silo federated learning settings.'' The notation $J_G$ is not defined in Eq 16. [1] FedGAN: Federated Generative Adversarial Networks for Distributed Data [2] Training Federated GANs with Theoretical Guarantees: A Universal Aggregation Approach The writing should be improved significantly; the current manuscript is quite challenging to understand. The contents of Section 4.1, i.e., the main novelties, are challenging to follow. Important comparisons with existing federated-GAN training methods appear to be missing. <doc-sep>The authors targeted federated generative modelling in an unsupervised setting. Specifically, the work is built on top of Bayesian GANs.
In order to aggregate the information from different clients, the authors proposed to use expectation propagation (EP). This makes sense. Despite being a well-established Bayesian inference algorithm, EP operating on the neural network parameters can suffer from intractability. The authors presented a low-complexity solution. The experimental results showed improved FID and IS over multiple baseline methods. However, the overall performance is quite poor on the rather simple dataset CIFAR-10, owing to the scalability issue of Bayesian models. Federated learning with non-i.i.d. data partitions is particularly challenging, as naive averaging does not work well. EP offers an information aggregation framework which can deal with different data partition styles in a unified way. In order to apply EP, one needs a Bayesian GAN as the base model at each client. While Bayesian inference is a powerful framework, the scalability issue of Bayesian GANs can potentially hinder the use of EP-GAN. For instance, the oracle baseline model still has an FID on CIFAR-10 above 25, while the best-performing GAN on CIFAR-10 is below 6. Moreover, the gap can potentially become even larger when the resolution is higher. On the algorithm side, eq. (5) lacks a justification of why it suffices for the quality of likelihood modelling, beyond being simple and thus permitting the closed-form EP update. For EP-ProbGAN, how does the newly introduced EP prior affect the guarantee claimed by ProbGAN? In the experimental part, despite being introduced in the text, the performance of the baseline model BayesGAN (2) was actually not reported in Table 1. Furthermore, as reported by the authors of ProbGAN, the NS loss outperformed the Wasserstein distance and LS for both ProbGAN and Bayesian GAN. However, Table 1 did not consider the top-performing case. Also, only having one natural image dataset is probably not enough, e.g., ProbGAN also considered STL-10 and ImageNet. Furthermore, EP-GAN variants on i.i.d. N=2 outperform the oracle. What is the potential reason behind this? Overall, the problem is trending, challenging and highly relevant. Within the Bayesian framework, the use of EP for federated learning is reasonable. Of course, complexity remains a critical issue. The authors proposed a low-complexity solution and empirically showed the benefits of using EP over existing schemes, which, however, are not really developed for non-i.i.d. scenarios. Furthermore, the baseline models do not use their top-performing configuration, which leads to my general concern about how strong the baselines are. On the algorithm side, Bayesian models need to deal with priors regardless of EP. Therefore, having or adding an EP prior in Bayesian models seems to be straightforward. The closed-form update is definitely interesting, but the authors should (empirically) analyse its fidelity.
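To make the EP bookkeeping concrete for readers less familiar with it, a minimal numpy sketch of aggregating per-client Gaussian site factors in natural-parameter form is given below. This is only the generic Gaussian-EP aggregation (precisions and precision-means add, and a cavity removes one site), not the authors' closed-form EP-GAN update; the diagonal-variance parameterization is my own simplification.

```python
# Generic sketch of Gaussian-site aggregation as used in EP-style federated schemes
# (my illustration, not the paper's update). Diagonal (per-parameter) variances assumed.
import numpy as np

def to_natural(mean, var):
    prec = 1.0 / var
    return prec * mean, prec              # (precision-mean, precision)

def aggregate_sites(prior, client_sites):
    """Multiply a Gaussian prior with per-client Gaussian site factors."""
    eta, prec = to_natural(*prior)
    for mean, var in client_sites:
        e, p = to_natural(mean, var)
        eta, prec = eta + e, prec + p     # products of Gaussians add in natural parameters
    var_q = 1.0 / prec
    return var_q * eta, var_q             # mean and variance of the global approximation

def cavity(q_mean, q_var, site_mean, site_var):
    """Remove one client's site from the global approximation (EP cavity distribution).
    In general the cavity precision can become negative and is damped in practice."""
    eta_q, prec_q = to_natural(q_mean, q_var)
    eta_s, prec_s = to_natural(site_mean, site_var)
    prec_c = prec_q - prec_s
    return (eta_q - eta_s) / prec_c, 1.0 / prec_c

# Toy usage with 3 clients and a 2-parameter "generator".
prior = (np.zeros(2), np.ones(2))
sites = [(np.array([0.5, -0.2]), np.full(2, 2.0)) for _ in range(3)]
q_mean, q_var = aggregate_sites(prior, sites)
```

The interesting (and costly) part in the paper is how each site is refitted via moment matching on the client's data; a timing breakdown of that step versus the GAN updates would address the complexity concerns raised above.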
This paper presents a Bayesian GAN approach designed for a federated learning setting. In contrast to recent Bayesian GAN approaches that use Gaussian priors or iteratively-updated priors on GAN parameters, this paper proposes a more complex prior motivated by expectation propagation, dubbed EP-GAN, and uses this formulation to construct a federated GAN. The paper claims that this prior better captures the multimodal distribution structure of the non-i.i.d. heterogeneous data across the different clients. The paper looks at an interesting problem, i.e., federated training of GANs, which is indeed a problem that has received a lot of interest lately. The paper received mixed reviews. The reviewers raised several concerns, which included (1) weak baselines, (2) not considering what happens when we switch to more advanced GAN models, (3) the performance of the approach when the number of clients is large, and (4) a lack of clarity in the presentation. The authors responded to some of these concerns and it is commendable that they reported some additional results during the discussion phase. However, after an extensive discussion among the reviewers and between reviewers and authors, and after my own reading of the manuscript, concerns still linger over many of the above-mentioned points. Another concern is the overly complex nature of the approach as compared to other recent federated GAN approaches, which raises the question of whether the actual improvements warrant the complexity of the proposed approach. From the reported experiments, the improvements appear to be rather slim. Considering these aspects, unfortunately, the paper in its current shape does not seem ready for acceptance. The authors are advised to consider the feedback from the reviewers, which will strengthen the paper for a future submission.
The present paper proposes a fast approximation to the softmax computation when the number of classes is very large. This is typically a bottleneck in deep learning architectures. The approximation is a sparse two-layer mixture of experts. The paper lacks rigor and the writing is of low quality, both in its clarity and its grammar. See a list of typos below. An example of the lack of mathematical rigor is equation 4, in which the same variable name is used to describe the weights before and after pruning, as if it were computer code instead of an equation. Also pervasive is the use of the asterisk to denote multiplication, again as if it were code and not math. Algorithm 1 does not include mitosis, which may have an effect on the resulting approximation. How are the lambda and threshold parameters tuned? The authors mention a validation set; are they just exhaustively explored on a 3D grid on the validation set? The results only compare with Shim et al. Why only this method? Why would it be expected to be faster than all the other alternatives? Wouldn't similar alternatives like the sparsely gated MoE, D-softmax and adaptive-softmax have a chance of being faster? The column "FLOPS" in the results seems to measure the speedup, whereas the actual FLOPS should be less when the speed increases. Also, a "1x" label seems to be missing for the full softmax, so that the reference is clearly specified. All in all, the results show that the proposed method provides a significant speedup with respect to Shim et al., but it lacks comparison with other methods in the literature. A brief list of typos: "Sparse Mixture of Sparse of Sparse Experts" "if we only search right answer" "it might also like appear" "which is to design to choose the right" sparsly "will only consists partial" "with γ is a lasso threshold" "an arbitrarily distance function" "each 10 sub classes are belonged to one" "is also needed to tune to achieve"<doc-sep>The paper proposes Doubly Sparse, a sparse mixture of sparse experts that learns a two-level class hierarchy, for efficient softmax inference. [+] It reduces computational cost compared to the full softmax. [+] An ablation study is done for group lasso, expert lasso and load balancing, which helps understand the effect of the different components of the proposed method. [-] It seems to me the motivation is similar to that of Sparsely-Gated MoE (Shazeer et al. 2017), but it is not clear how the proposed two-hierarchy method is superior to the Sparsely-Gated MoE. It would be helpful if the paper discussed this in more detail. Besides, in the evaluation, the paper only compares Doubly Sparse with the full softmax. Why not compare with Sparsely-Gated MoE? Overall, I think this paper is below the borderline of acceptance due to the insufficient comparison with Sparsely-Gated MoE. <doc-sep>In this paper the authors introduce a new technique for softmax inference. In a multiclass setting, the idea is to take the output of a NN and turn it into a gating function to choose one expert. Then, given the expert, output a particular category. The first level of sparsity comes from selecting a single expert. The second level of sparsity comes from every expert only outputting a limited set of output categories. The paper is easy to understand but several sections (starting from section 2) could use an English language review (e.g. "search right" -> "search for the right", "predict next word" -> "predict the next word", ...) In section 3, can you be more specific about the gains in training versus inference time?
I believe the results all relate to inference, but it would be good to get an overview of the impact on training time as well. You motivate some of the work by the fact that the experts have overlapping outputs; maybe in section 3.7 you can address how often that occurs as well? Nits: - it wasn't clear how the sparsity percentage on page 3 was defined. - can you motivate why you are not using perplexity in section 3.2?
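To help readers picture the inference path the reviews above are discussing, here is a rough sketch of a two-level sparse softmax (a gate picks one expert, and only that expert's class subset is scored). The top-1 gating, the fixed class partition, and the shapes are my own assumptions, not the paper's exact formulation.

```python
# Rough sketch of doubly sparse softmax inference (my reconstruction, not the paper's code):
# a gate picks one expert, and only that expert's class subset is scored.
import numpy as np

def doubly_sparse_softmax(h, W_gate, experts):
    """h: (d,) hidden state; W_gate: (num_experts, d);
    experts: list of (class_ids, W_expert) with W_expert of shape (len(class_ids), d)."""
    gate_scores = W_gate @ h
    k = int(np.argmax(gate_scores))          # first level of sparsity: a single expert
    class_ids, W_k = experts[k]
    logits = W_k @ h                         # second level: only this expert's classes
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return class_ids, probs                  # distribution over the expert's class subset

# Toy usage: 8 experts over a vocabulary of 32000 classes.
d, num_experts, vocab = 64, 8, 32000
rng = np.random.default_rng(0)
partition = np.array_split(rng.permutation(vocab), num_experts)
experts = [(ids, rng.standard_normal((len(ids), d))) for ids in partition]
class_ids, probs = doubly_sparse_softmax(rng.standard_normal(d),
                                         rng.standard_normal((num_experts, d)), experts)
```

The per-query cost is roughly num_experts + (vocab / num_experts) dot products instead of vocab, which is where the claimed speedup comes from; the open questions above concern how the training-time cost and the overlapping expert outputs affect this picture.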
This work proposes a new approximation method for softmax layers with a large number of classes. The idea is to use a sparse two-layer mixture of experts. This approach successfully reduces the computation required on the PTB and Wiki-2 datasets, which have up to 32k classes. However, the reviewers argue that the work lacks relevant baselines such as D-softmax and adaptive-softmax. The authors argue that those baselines focus on training and not inference and should do worse, but this should be substantiated in the paper by actual experimental results.
This submission considers a good problem but contributes little. The critical aspect of establishing the convergence property for over-parameterized implicit networks is to show the non-singularity of the feature matrix $Z^*$, which is the fixed point of the non-linear equation $Z = \\sigma(AZ+\\phi(X))$, since we treat the final output as $Y= WZ^*$. This is a challenging and open problem for the theoretical implicit-model community. However, the submission considers a different output---$\\hat{Y}= UZ^* + V\\phi(X)$; hence there is no difficulty, and it is meaningless to obtain the smallest singular values as $\\Theta(\\lambda_0^{1/2})$, which is the same as for previous over-parameterized explicit networks $\\phi(X)$ and cannot show any difference between implicit and explicit DNNs. Unfortunately, the submission obtains its results exactly in this way. 1. The only difference between this submission and the previous works on explicit DNN convergence, in terms of the proof roadmap, is the additional proof of the existence of a fixed point at initialization. However, constructing a contraction (shrinking) operator, which guarantees the existence of the fixed point, is not a complex task. We can even guarantee well-posedness by setting $A(0)=0$. In fact, as the authors discuss in the submission, we need to prove that the operator $\\sigma(A\\cdot+\\phi(x))$ remains a contraction during training rather than only at initialization. To guarantee this, the scaling factor may need to depend on other quantities, such as the step size, rather than only on $m$, since we need to bound the difference $A(t)-A(0)$. The authors need to deal with the existence more carefully. 2. The convergence speed is the same as in the previous works. Hence, it further confirms that the convergence guarantee comes from the explicit additional term $V\\phi(X)$---a two-layer over-parameterized ReLU DNN---instead of the implicit feature $Z^*$. A straightforward guess is that all the results still hold when we set $A=0$, or set $U = 0$, or even drop $Z^*$, i.e., $\\hat{Y}= V\\phi(X)$. 3. When proving the non-singularity of $H$, the submission says that it utilizes a different data assumption---no two data points are parallel to each other. However, the same setting and almost the same linear convergence results are given in [1]. 4. More importantly, the current convergence guarantees for over-parameterized DNNs can be divided into two categories in terms of the activation setting--ReLU and sufficiently smooth activation functions. For proving the PL inequality, one category relies on the smoothness of the activation to provide the lower bound, while the other proves that the set of flipped activation patterns is small so that the overall bound holds during training. Confusingly, this submission mixes these two roadmaps and utilizes the routine for smooth activation functions in the ReLU setting, which may cause problems for the conclusions of some auxiliary lemmas. [1] Gradient Descent Provably Optimizes Over-parameterized Neural Networks. The paper considers an important problem, but relies heavily on the results of previous work. <doc-sep>This paper theoretically analyzes the optimization of deep ReLU implicit networks. It first shows the well-posedness of the problem, i.e., the existence and uniqueness of the equilibrium point, then proves that under over-parameterization, both continuous and discrete GD achieve global convergence at a linear rate, and the approach is similar to the standard proofs for DNNs. To be honest I am not familiar with either the theory or the applications of implicit networks.
It seems that implicit networks are empirically successful but lack theoretical understanding, so I think this paper provides a good starting point. Under the forms (1) and (2), the paper first shows the existence of the equilibrium point given that $\\|A\\|$ is bounded. Then the proof of convergence is similar to that for DNNs: 1. Write down the dynamics (15) and show that one of the terms in $H$ has lower-bounded eigenvalues. (The following calculation heavily relies on the form (2) of $\\phi$. Is this commonly used in applications?) 2. For sufficiently large $m$ (over-parameterization), the random initialization $G(0)$ is close to the infinite-width $G^\\infty$. 3. The lower bound on $G(t)$ gives the linear convergence rate, and the fast convergence in turn guarantees that $G(t)$ is not far from $G(0)$ along the trajectory. Although the approach is fairly standard (and the dynamics seem to be simpler than for DNNs since all the layers share the same weights?), the proof is not trivial and the theorem is good as it gives the first theoretical optimization result for implicit networks. The paper also provides numerical experiments on several standard image datasets to show the effectiveness of implicit networks. **PS:** I thank the authors for the detailed response! The paper proves the convergence of optimization for nonlinear implicit networks. The proof techniques follow the standard approach for DNNs, and I think it is a good starting point for the theoretical analysis of implicit networks. <doc-sep>The paper presents a proof of exponential convergence to global optimality in the over-parameterization setting for an implicit model with scaled weight parameters. Although existing work has established similar proofs for feedforward explicit neural networks, such methods don't work with non-linearly activated implicit models, where the well-posedness issue poses challenges to the training process. The authors show that by scaling the weights, well-posedness can be ensured. The convergence result is obtained first in the continuous setting and is then extended to the discrete setting. Numerical experiments on real datasets confirm the findings. [Strengths] - The paper studies the very important problem of convergence of training for implicit models. The problem is non-trivial even given recent advances in relevant proofs for explicit feedforward networks, because of the well-posedness issue that arises since implicit models can be seen as infinitely deep neural networks. - The authors show that by applying a proper, simple scaling factor to the weights, the well-posedness property can be maintained throughout the training process with no extra regularization or projection steps. This enables the proof of training convergence for implicit models. - Thorough mathematical proofs for both the continuous setting and the practical discrete setting are given in the paper to support the results, which are then verified by numerical experiments. [Weaknesses] - There is a typo in the notations section. I suppose it should be lambda_max(A) <= ||A|| since A is not assumed to be positive semidefinite? The paper sets the foundation for training theories for implicit models. Though some common techniques are employed in the derivations, the authors successfully tackle the key issue of well-posedness to make the convergence result possible. The reviewer believes this result is significant for implicit models, which have become increasingly popular in the community.
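For readers unfamiliar with this model class, a small numpy sketch of the fixed-point forward pass discussed in these reviews is given below. The explicit spectral-norm rescaling used to make the map a contraction is my own illustration of the well-posedness argument, not the paper's exact construction.

```python
# Sketch of the forward pass of an implicit layer z = relu(gamma * A z + phi_x),
# made well-posed by scaling A so the map is a contraction (my illustration).
import numpy as np

def implicit_forward(A, phi_x, gamma=0.9, tol=1e-8, max_iter=1000):
    # ReLU is 1-Lipschitz, so the map is a contraction once gamma * ||A||_2 < 1.
    A = gamma * A / max(np.linalg.norm(A, 2), 1e-12)
    z = np.zeros_like(phi_x)
    for _ in range(max_iter):
        z_new = np.maximum(A @ z + phi_x, 0.0)   # ReLU
        if np.linalg.norm(z_new - z) < tol:
            break
        z = z_new
    return z  # approximate equilibrium point z*

m = 128
rng = np.random.default_rng(0)
z_star = implicit_forward(rng.standard_normal((m, m)), rng.standard_normal(m))
```

Whether this contraction property is preserved as A(t) moves away from A(0) during training is exactly the concern raised in the first review above.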
<doc-sep> In this paper, the authors theoretically analyze the convergence of gradient descent for an implicit neural network with infinitely many layers and ReLU activation. The authors show the existence of a unique fixed point of the infinite-layered mapping when the weight matrix $\\boldsymbol{A}$ has a properly bounded spectral norm. Using implicit differentiation, the authors derive the partial gradients at the fixed point. Furthermore, the authors show the linear convergence rate by proving the strict positive-definiteness of the Gram matrix $\\boldsymbol{G}(t)$ (and $\\boldsymbol{H}(t)$). Pros. 1. This paper makes a clear contribution by proving the convergence of gradient descent for an implicit neural network with ReLU activation, infinitely many layers, and finite width. I think that using implicit differentiation for the partial gradient at the fixed point is interesting, enabling the proof of convergence by showing the strict positive-definiteness of the Gram matrix $\\boldsymbol{G}(t)$. 2. To ensure the strict positive-definiteness of the Gram matrix $\\boldsymbol{G}(t)$, the required width $m= \\tilde \\Omega(n^2)$ is competitive with or better than recent results for finite-layered neural networks. In addition, the results in this paper hold for infinitely many layers. Cons. 1. The gradients $\\nabla _\\boldsymbol{A}L $ and $\\nabla _\\boldsymbol{u}L $ involve the equilibrium point $\\boldsymbol{z}$. However, it is not easy to obtain the equilibrium point explicitly. How is the gradient computed for training? Does it need an approximation or a solver for the equilibrium point? It seems to me that a solver demands a high time cost. Does it scale to large-scale problems? It would be interesting to discuss the relationship (advantages/disadvantages) compared with neural networks with an explicit proximal mapping architecture, e.g., (Lyu et al. 2021) has a similar NN architecture $ \\boldsymbol{y}_{t+1} = h( \\boldsymbol{D}^\\top_t\\boldsymbol{x} + (\\boldsymbol{I}-\\boldsymbol{D}^\\top_t\\boldsymbol{D}_t)\\boldsymbol{y}_t )$ with $\\boldsymbol{y}_{0} = \\boldsymbol{0}$. When sharing weights $ \\boldsymbol{D}_t= \\boldsymbol{D}$ and setting $ \\tilde \\gamma \\boldsymbol{A} = \\boldsymbol{I}-\\boldsymbol{D}^\\top \\boldsymbol{D} $, it seems to be a finite-step updated NN instead of the fixed point $\\boldsymbol{z}^*$ in Eq.(3). 2. The ReLU function $f(x) = max(0,x)$ is not differentiable at the point $x=0$. How does this influence the continuous-time ODE analysis for the linear convergence? Minor Typos. In the proof of Lemma 2.2 in Appendix A.1, it should be $\\sigma (\\tilde \\gamma \\boldsymbol{A} \\boldsymbol{z}^{l-1} + \\phi )$ instead of $\\sigma (\\tilde \\gamma \\boldsymbol{A} \\boldsymbol{z}^{l-1} - \\phi )$. Lyu et al. Neural Optimization Kernel Towards Robust Deep Learning. Overall, I think this paper makes a clear contribution by proving the convergence of gradient descent for an implicit neural network with ReLU activation and infinitely many layers. So I recommend acceptance.
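On the gradient question raised in the cons above, the standard route in the implicit-model literature is implicit differentiation at the equilibrium; a generic form (my own illustration, not necessarily the paper's exact expression or notation, and using a subgradient of ReLU at zero) is:

```latex
% With z^* = \sigma(\tilde\gamma A z^* + \phi(x)), let
% D = \mathrm{diag}\big(\sigma'(\tilde\gamma A z^* + \phi(x))\big) and J = \tilde\gamma\, D A.
% Provided \|J\| < 1, so that (I - J) is invertible, implicit differentiation gives,
% for any parameter \theta of the fixed-point map,
\frac{\partial z^*}{\partial \theta}
  = (I - J)^{-1}\, D\,
    \frac{\partial\big(\tilde\gamma A z + \phi(x)\big)}{\partial \theta}\Big|_{z = z^*},
\qquad
\nabla_\theta L = \Big(\frac{\partial z^*}{\partial \theta}\Big)^{\!\top} \nabla_{z^*} L .
```

In practice the linear system involving $(I - J)$ is typically solved approximately (e.g., by iteration) rather than by explicit inversion, which is why the solver-cost and scalability questions above are worth discussing explicitly.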
This paper shows that gradient flow for ReLU-activated implicit networks converges to a global minimum at a linear rate for the square loss when the implicit neural network is over-parameterized. While the analyses follow existing NTK-type analyses and there are disagreements among reviewers on the novelty of this paper, the meta-reviewer values new theoretical results in new, emerging settings (implicit neural networks), and thus decides to recommend acceptance.
- Overall comments This paper proposes a principled method to generate "hard" positive and negative samples based on conditional likelihood for contrastive learning of seq2seq models, and it shows significant improvements in training on conditional text generation tasks compared to the naive approach with random negative samples. Overall, the idea is interesting, and the experiments are well-conducted. However, I still have some detailed questions regarding the method and experiments, as follows: - Methods: (1) I am a bit confused by Eq (2). What is $\\bf{M}$? Do you mean $x_i$ is the source sentence and $y_i$ is the corresponding target sentence? Is it meaningful to "match" the hidden representations between the source and target sentences, especially for tasks such as summarization? Also, training with Eq (2) does not involve any decoding process, nor does it supervise how to decode a sentence. Some form of MLE training (also noted in Eq (9)) seems unavoidable, which in some sense still relies on teacher forcing. (2) The proposed method to create positive/negative examples is related to virtual adversarial training (VAT) in NLP: *Miyato, Takeru, Andrew M. Dai, and Ian Goodfellow. "Adversarial training methods for semi-supervised text classification." arXiv preprint arXiv:1605.07725 (2016).* It would be nice to include it for discussion or comparison. (3) For Sec 3.3 & 3.4: (a) How do we know the perturbed hidden states $\\bf{H}$ still lie in the manifold of valid sentences? It is possible that the perturbed hidden states do not correspond to any sentence. (b) Using the conditional likelihood of the original target sentence to measure the negative samples may also be misleading. For example, it is also possible to get a very different sentence with the same semantic meaning as the target sentence. (c) What are $\\hat{y}$ and $\\bar{y}$ in Eqs (6) and (7)? Are they different target sentences? Where do they come from, as the proposed method does not seem to include decoding? - Experiments (1) It seems that all experiments are initialized with T5. Does this mean that the proposed method only works with large-scale pre-training? It would be more important to show results with training from scratch. (2) The baseline results on WMT16 RO-EN already do not seem to be low, especially with T5 pre-training, which makes the improvement difficult to assess. (3) For many tasks, the improvements of the proposed method are actually marginal. It would improve the paper to include a discussion of statistical significance. (4) There are also methods, such as reinforcement learning, that also aim to overcome the problem of teacher forcing. They should also be discussed in the experiments. <doc-sep>This paper presents a method for conditional text generation tasks that aims to overcome the "exposure bias" problem through contrastive learning where negative examples are generated by adding small perturbations to the input sequence to minimize its conditional likelihood, and positive examples are generated by adding large perturbations while enforcing it to have a high conditional likelihood. Experimental results on machine translation, text summarization and question generation show the effectiveness of the proposed approach. My only concern is that, compared to MLE, the improvements in both Table 1 and Table 2 are relatively small.
The study in the paper by Massimo Caccia, Lucas Caccia, William Fedus, Hugo Larochelle, Joelle Pineau, Laurent Charlin, Language GANs Falling Short, ICLR 2020, shows that the "exposure bias" problem for text generation by MLE appears to be less of an issue, and a simple "temperature sweep" in the softmax significantly boosts the performance and gives pretty good results that beat all language GANs. So I think that in the experiments, all results should be compared using the "temperature sweep" trick. Moreover, if diversity is an issue, the results should be compared in the quality-diversity space, as done in the Language GANs Falling Short paper. Hopefully the authors can address my concern in the rebuttal period. <doc-sep>======================== Paper Summary: This paper proposes to add contrastive learning to the sequence-to-sequence generation problem. More specifically, the authors apply a contrastive loss on the globally pooled representation of the generated hidden states. The key novelty is to apply adversarial gradients to obtain both hard negative and hard positive examples. The proposed method can improve a state-of-the-art pretrained transformer model (T5) on 3 tasks: machine translation (WMT16 En-Ro), abstractive summarization (XSum), and question generation (SQuAD). ========================== Overall review Although the proposed method seems to be effective and new, the concerns outweigh the contributions in my opinion. I am leaning towards rejection for now. Please try to address my concerns during the rebuttal period. Pros - The idea of using adversarial gradients to generate hard negatives/positives is novel, at least for contrastive learning and sequence generation problems. - Improvement is demonstrated on a strong pre-trained transformer model (T5). - The method is evaluated on 3 tasks and could possibly be extended to any seq2seq generation task. Cons - The claim of solving the 'exposure bias' problem is somewhat exaggerated. - The proposed method is somewhat straightforward and lacks theoretical insights/guarantees. - The method is only applied to a small version of T5, which is limited. How about other pretrained (potentially larger) models? How about non-pretrained models such as randomly initialized Transformers/LSTMs? ========================== Detailed Review The authors claimed to mitigate the exposure bias problem for sequence generation. However, the original `exposure bias' problem refers to not seeing incorrectly generated text tokens as training input, which leads to train-test mismatch. In this work, the model does not see any self-generated negative tokens as input, but only pooled adversarial hidden states. It does not mitigate the train-test mismatch at all. Therefore, the current presentation may be misleading. It might benefit the paper to also compare and contrast with adversarial training for NLU such as SMART/FreeLB. Moreover, this work does not provide new theoretical insights. The hard negatives/positives do not have theoretical guarantees. It is not clear to me why Eqns (6) & (7) yield a distant positive. If g = f, then H = H_bar. Moreover, in MT and Sum., the adversarial step sizes eta and epsilon are set to the same value. This is inconsistent with the intuition of a near negative and a distant positive claimed in the paper. ========================== Other Questions / Suggestions - In Eqn. (2), why not use a 2-layer MLP as in SimCLR? - In the experiments, maybe add a CLAPS variant without negatives so that readers would know which component is more important.
- Why not train the way you generate the Table 3 example? That would better address the train-test mismatch (exposure bias), although maybe at the cost of slower training. - Some human evaluation on a larger set of generated examples would help. For example, how many hard negatives are actually recognized as negative by humans? [Raina et al.] Self-taught Learning: Transfer Learning from Unlabeled Data. In ICML, 2007. <doc-sep># Summary Proposes a contrastive learning method for conditional text generation. Here we maximize the similarity (of representations) between source and target sequences (positive) while minimizing similarity with false targets (negative). Additional positives and negatives are created in the sequence representation space by adding perturbations to the decoder (output) hidden states to minimize/maximize the conditional likelihood p(y|x). It is shown this works a lot better than the naive contrastive approach of sampling random non-target sequences. The full model is based on T5-small (Raffel et al) and combines the contrastive objective with the regular MLE objective by simple addition. Modest improvements over T5-small are observed on Translation, Summarization, and Question-generation seq2seq tasks. # Pros 1. Diversity of seq2seq tasks, with consistent improvements over the baseline T5-MLE (small). 2. Possibly improves the exposure bias issue of regular MLE seq2seq training. 3. Complementary to seq2seq MLE training and can be used to improve it in general, not just for text generation. # Cons 1. The improvements are consistent but appear to be modest. It is unclear whether the improvements would persist on the larger T5 model sizes. Would it be possible to study this (e.g. medium size)? 2. Please add SOTA results in the tables for the various tasks for reference. 3. Please discuss the effect on training/inference speed. 4. Since this is generation, more non-cherry-picked example decodes would be informative to have in the appendix. 5. Even better would be some basic human evaluation of generated outputs to verify whether meaningful quality improvements are made. 6. Scheduled Sampling (Bengio et al) should be discussed and perhaps compared, as it is a well-known method for addressing exposure bias. 7. Should discuss the relationship to Virtual Adversarial Training (Miyato et al). # Clarifications 1. Are all the models initialized with T5-MLE or are they trained from scratch on C4 for the same number of steps as T5-MLE?
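To make the mechanism under discussion concrete, here is a rough PyTorch sketch of the general recipe (adversarial perturbation of pooled decoder states to form a hard negative, plus an InfoNCE-style contrastive term). The pooling, the step size, and the two-way loss are my own simplifications, not the paper's exact Eqs (2)/(6)/(7), which additionally construct a distant positive.

```python
# Rough sketch (my simplification, not the paper's exact objective): build a hard
# negative by perturbing the pooled target representation along the gradient that
# increases the seq2seq NLL, then apply a 2-way InfoNCE contrastive loss.
import torch
import torch.nn.functional as F

def contrastive_with_hard_negative(src_repr, tgt_hidden, nll_fn, eps=1.0, tau=0.1):
    """src_repr: (B, d) pooled encoder states; tgt_hidden: (B, T, d) decoder states;
    nll_fn: maps decoder states to the scalar conditional NLL of the gold target."""
    # For clarity the decoder states are detached here; a real implementation would
    # keep them in the computation graph so the model receives contrastive gradients.
    tgt_hidden = tgt_hidden.detach().requires_grad_(True)
    nll = nll_fn(tgt_hidden)
    (grad,) = torch.autograd.grad(nll, tgt_hidden)
    # Perturbation that increases the NLL -> representation of a hard negative.
    neg_hidden = tgt_hidden + eps * grad / (grad.norm(dim=-1, keepdim=True) + 1e-12)

    z_src = F.normalize(src_repr, dim=-1)
    z_pos = F.normalize(tgt_hidden.mean(dim=1), dim=-1)   # pooled gold target
    z_neg = F.normalize(neg_hidden.mean(dim=1), dim=-1)   # pooled hard negative

    pos = (z_src * z_pos).sum(-1) / tau
    neg = (z_src * z_neg).sum(-1) / tau
    # Prefer the gold target over its adversarial perturbation.
    return -F.log_softmax(torch.stack([pos, neg], dim=-1), dim=-1)[..., 0].mean()
```

Nothing in this loss ever feeds model-generated tokens back as decoder input, which is why the "exposure bias" framing in the reviews above is questioned.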
This paper proposes a new method for conditional text generation that uses contrastive learning to mitigate the exposure bias problem in order to improve performance. Specifically, negative examples are generated by adding small perturbations to the input sequence to minimize its conditional likelihood, while positive examples are generated by adding large perturbations while enforcing a high conditional likelihood. This paper received 2 reject and 2 accept recommendations, which is a borderline case. The reviewers raised many useful questions during the review process, while the authors have also done a good job during the rebuttal of addressing the concerns. After checking the paper and all the discussions, the AC feels that all the major concerns have been addressed, such as more clarification in the paper, more results on non-pretrained models, and a small-scale human evaluation. On the one hand, reviewers found that the proposed method is interesting and novel to a certain extent, and the paper is also well written. On the other hand, even after adding all the additional results, the reviewers still feel it is not entirely clear that the results would extend to better models, as most of the experiments are conducted on T5-small, and the final reported numbers in the paper are far from SOTA. As shown in Tables 1 & 2, the AC agrees that the final results are far from SOTA, and the authors should probably also study the incorporation of CLAPS into stronger backbones. On the other hand, the AC also thinks that T5 is already a relatively strong baseline to start with (though it is T5-small), and it may not be necessary to chase SOTA. Under a fair comparison, the AC thinks that the authors have done a good job of demonstrating improvements over the T5-MLE baselines. In summary, the AC thinks that the authors have done a good job during the rebuttal. On balance, the AC is happy to recommend acceptance of the paper. The authors should add more careful discussions reflecting the reviewers' comments when preparing the camera-ready version.
The proposed method can be a very useful tool to identify, in an unsupervised fashion, similar characteristics in groups of nuclei and can serve as an important step in potentially defining morphology-driven ground truths. 1. The evaluation of the presented methodology is, for the most part, qualitative in nature; various similarity evaluation metrics have been proposed in computer vision that could be leveraged here. 2. How do the authors propose to evaluate an unseen patch and fit it to one of the N identified clusters? 3. Though the task is mostly motivated by a lack of enough training samples, it is not clear if the identified groups are actually driven by the underlying nuclei morphology or by some other characteristics. Additionally, how important are these features in the actual diagnostic process? <doc-sep>The proposed method is well described and easy to follow. In the experiments, the authors used various types of tumor, which shows that the method can be applied to many organs/cancer types. This method can be used for future research on cell biology/cancer biology. The main weakness is limited novelty, but the paper is interesting and can be useful for the research community. Other comments: 1. The images presented in the figures are far too small; nothing is visible. 2. The captions used in the figures should be readable; they are currently too small. <doc-sep>- The proposed approach is novel and can be very helpful for the research community in the field. - The paper is well-written and easy to follow in all parts. - State-of-the-art methods were used in different parts of the proposed method (for segmentation, embedding and clustering). - The results were only qualitatively analysed. It would be interesting to investigate the performance of the presented method on publicly available datasets that provide nuclei segmentation and classification masks. Examples of such datasets can be found below: MoNuSAC dataset: https://monusac-2020.grand-challenge.org/ CoNSeP dataset: https://warwick.ac.uk/fac/cross_fac/tia/data/hovernet/
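Regarding the earlier question about how an unseen patch could be assigned to one of the N identified clusters, a minimal sketch of one obvious option (nearest centroid in the embedding space) is given below; this is purely my own illustration of the question, not the authors' pipeline, and the embedding dimension and cluster count are placeholders.

```python
# Minimal sketch (my illustration, not the paper's pipeline): cluster nucleus
# embeddings, then assign an unseen patch to the nearest cluster centroid.
import numpy as np
from sklearn.cluster import KMeans

def fit_clusters(embeddings, n_clusters=8):
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(embeddings)

def assign_patch(kmeans, patch_embedding):
    dists = np.linalg.norm(kmeans.cluster_centers_ - patch_embedding, axis=1)
    return int(np.argmin(dists)), dists     # cluster id and distances for confidence

# Toy usage with random 64-d embeddings standing in for nucleus features.
rng = np.random.default_rng(0)
km = fit_clusters(rng.standard_normal((500, 64)))
cluster_id, _ = assign_patch(km, rng.standard_normal(64))
```

Reporting such assignments on held-out patches (e.g., against the labeled classes in MoNuSAC or CoNSeP) would also provide the quantitative evaluation requested above.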
Three knowledgeable reviewers recommend acceptance and maintained their ratings after the rebuttal and discussion. All of them agreed that the paper will be very interesting for the research community in the field. The authors addressed the points raised by the reviewers during the discussion and updated their manuscript. Moreover, the authors said that they will share the code and trained models for their submission. I think that this paper will be a good contribution to MIDL 2021. The authors should address the main points in the reviews when preparing the final version.
The paper argues that the main reason (or a good reason) for the "meaningfulness" of a gradient-based explanation is its alignment with the data manifold. The authors perform a set of controlled experiments with different feature attribution methods. Finally, they theoretically show that alignment of the gradient with the data manifold has nothing to do with generalizability. * The main question in the paper is interesting: does explainability have something to do with the alignment of the explanation map with the data manifold? * The paper's central claim about meaningfulness is not quantifiable (at least with the experiments in this paper) and, as a result, not falsifiable. The figures shown in the paper can only show that gradient-based explanations are more aligned with the data manifold than random explanations, but the main argument about the meaningfulness of one method versus another is very subjective. * Also, there is no clear trend between different gradient-based explanation methods. Perhaps the only trend is that explanation methods are more aligned with the data manifold than the raw gradient, which is interesting but hardly conclusive with respect to the paper's central claim. * The authors observe that the alignment increases and decreases with epochs. This phenomenon can happen for many reasons, and I am not sure how it has anything to do with explainability. * Perhaps the most exciting part of the paper is the negative theoretical result at the end: alignment with the data manifold has nothing to do with generalizability. However, this theory does not have much to do with the central claim. * The paper reads as a set of subjective observations about the meaningfulness of explanations and their relationship with the data manifold, plus tangential theory. The paper does not have a coherent story. However, the central question is interesting. <doc-sep>This paper studies the following hypothesis: "Gradient-based explanations are more meaningful the more they are aligned with the tangent space of the data manifold". The work has three contributions: (i) an autoencoder-based approach to estimate the data manifolds of five datasets in order to evaluate the alignment between explanations and the tangent space of the data; (ii) analysis of the alignment between explanations and the tangent space during training; (iii) theoretical analysis to show that generalization does not imply alignment with the data manifold. **Strengths** - The paper is well-written; the empirical and theoretical results are easy to follow. The evaluation metric and data manifold construction are clearly explained. The hypothesis is well-posed and relevant to the high-level problem of evaluation metrics for instance-specific explanation methods. - The generative approach to create datasets with a completely known manifold structure is interesting. By training a new model on this dataset (with known manifold), this approach sidesteps the possible mismatch between the "true" manifold and the estimated manifold. It would be great to have some discussion on Algorithm 1 (tangent space computation with this approach). The discussion of why a reconstructive approach is needed for high-dimensional datasets (the k/d argument) is insightful as well. - The experiments on real-world datasets are quite thorough. Section 3 evaluates four different gradient-based methods (and the random vector baseline) on five real-world datasets, using multiple architectures. The results consistently show that raw gradients are worse than gradient-based methods such as smoothgrad.
Also, figure 2 clearly shows how the out-of-manifold component of explanations looks less "meaningful" than the on-manifold component of explanations. - The experiment on the fraction of the gradient in the tangent space over the course of training is novel and interesting. The observation that the fraction of the gradient in the tangent space increases rapidly and then slightly deteriorates is quite surprising. However, it would be good to sanity-check whether this phenomenon holds on larger non-MNIST datasets. **Weaknesses (in order of significance)** - Insufficient evaluation of explanation meaningfulness/correctness/quality. To test the proposed hypothesis, it is necessary to test whether explanations that are better aligned with the data manifold are more "meaningful". While there are multiple experiments to test data manifold alignment, the paper uses qualitative visual inspection to evaluate explanation meaningfulness/correctness/quality. It is now well known that qualitative visual inspection is subjective and misleading. Explanation methods that are known to output visually sharp saliency maps often fail basic sanity checks [R1]. Instead of visual assessment, evaluation metrics such as ROAR [R2] (not cited) and DiffROAR [R3] can be used to quantitatively test and compare the correctness/quality of explanation methods. The current evaluation vis-a-vis explanation quality is insufficient to reliably test the hypothesis. - Limited novelty: First, [R4] (not cited) and [R5] (cited but not in this context) use generative models such as VAE-GANs to obtain a "learned" data manifold in order to evaluate whether gradient-based adversarial perturbations and raw gradients (resp.) are close to the data manifold / tangent space. Second, similar to the results in Section 5, [R5] show that adversarially robust models' raw gradients are better aligned with the data manifold. [R3] show that robust models' raw gradients have better explanation quality. - Section 4 (when and why are gradients aligned with the data manifold) shows that (i) adversarial training improves alignment between explanations and the data manifold and (ii) evaluates the effect of training with random labels. However, the section title is misleading because it does not study the "why" aspect. For example, there is no discussion on why adversarial training improves alignment. - Section 5 (generalization does not imply alignment) does not justify the choice of the dataset or the 1-dimensional manifold design that is used in the theoretical analysis. What is the design principle behind this synthetic dataset? Is it representative (to some extent) of natural data distributions / the real datasets considered in previous sections? **Clarifications and questions** - Why is the hypothesis restricted to gradient-based explanations? Can explanations not based on gradients (e.g. occlusion-based saliency maps) be meaningful if they are orthogonal to the data manifold? - The results in Section 4 suggest that integrated gradients and input-times-gradient are better than raw gradients, as they are better aligned with the tangent space. This seems to possibly contradict previous findings [R1,R6] that show that, unlike raw gradients, integrated gradients and input x gradients fail basic sanity checks. - "If a gradient-based explanation approximately lies in tangent space...contribute to prediction" (section 1): This statement is a bit unclear. Based on how I understood it, I am not sure that it is fully correct.
If an explanation approximately lies in the tangent space, it may still lack fidelity w.r.t. the prediction rules learned by the model. For example, it is possible that an explanation that lies in the tangent space can highlight some component (e.g. texture of object in image) that is different from the components (e.g. shape and location of object) of the image that the model employs for its predictions. --- [R1] Adebayo, J., Gilmer, J., Muelly, M., Goodfellow, I., Hardt, M. and Kim, B., 2018. Sanity checks for saliency maps. arXiv preprint arXiv:1810.03292. [R2] Hooker, S., Erhan, D., Kindermans, P.J. and Kim, B., 2018. A benchmark for interpretability methods in deep neural networks. arXiv preprint arXiv:1806.10758. [R3] Shah, H., Jain, P. and Netrapalli, P., 2021. Do Input Gradients Highlight Discriminative Features?. arXiv preprint arXiv:2102.12781. [R4] Stutz, D., Hein, M. and Schiele, B., 2019. Disentangling adversarial robustness and generalization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 6976-6987). [R5] Kim, B., Seo, J. and Jeon, T., 2019. Bridging adversarial robustness and gradient interpretability. arXiv preprint arXiv:1903.11626. [R6] Yang, M. and Kim, B., 2019. Benchmarking attribution methods with relative feature importance. arXiv preprint arXiv:1907.09701. Overall, the weaknesses of the paper outweigh its strengths. While the hypothesis is well-posed and the experiments are thorough, some major weaknesses are: (i) insufficient/misleading evaluation of explanation correctness/quality, (ii) limited novelty vis-a-vis methodology and results on robust models, (iii) missing discussion on why robustness improves alignment, justification of synthetic dataset and theory, and connection to previous work on sanity checks. <doc-sep>The paper constructs a synthetic classification task with a known manifold structure by training the classifier with data from a variational autoencoder with a low-dimensional latent space. The paper argues that the components of image gradients that lie in the tangent space of the data manifold are semantically meaningful, whereas the part orthogonal to the image manifold is nonsensical. The experiments in the paper support this hypothesis to an extent. This is an interesting, although not unexpected, conclusion. The paper is well written and the experimental design is explained in detail. Much of the evaluation relies on informal observation of gradients, and the examples appear to be carefully picked. In Figure 2, many of the digits are slightly ambiguous and exhibit artifacts that highlight the explanation. The qualitative difference between on-manifold and orthogonal components appears consistent and convincing. In Figure 3 it is difficult to say if the measured fraction of on-manifold component correlates with quality of explanation, except maybe for the top rows with CIFAR10. Judging if the lower rows make sense would require expertise in diagnosing pneumonia or retinopathy, which I believe to be rare in the reviewer pool. I am somewhat concerned by how the relative ordering of various explanation methods changes between different variants of MNIST. How certain can we be that this ordering is not due to random chance? I would expect that training the same VAE multiple times with different random initializations could result in quite different latent spaces, and these might favor different explanation methods. 
In other words, how robust are the measured distributions such as those shown in Figures 2 and 3, and the related explanation figures, to the inherent randomness in training? This should be measured in order to assure the reader that the differences are real and consistent. Formula 1 calculates the cosine of the angle between the vectors v1 and grad. I don’t think it is appropriate to call this a “fraction”, because a similar computation between v2 and grad (corresponding to the sine of the angle between v1 and grad) and this fraction do not sum to 1. Squaring the formula would correspond to the length of the projection of v1 onto grad relative to the length of grad, and would seem like a perhaps more appropriate choice. The adversarial training test in Section 4.1 is very interesting and convincing. I have no opinion on the usefulness of the theoretical result in Section 5. Section 3.1 heading has a typo "graysacle". The paper constructs a clever setup to test its central hypothesis and provides some convincing results that the hypothesis holds true. However, there is no analysis of stochastic variation in the quantitative results, so they may not hold water as well as the central tenet that the gradients in line with the tangent space of the data manifold are qualitatively different from gradients orthogonal to it. <doc-sep>This paper makes the hypothesis that gradient-based explanations are "meaningful" if they are aligned with the tangent space of the underlying data manifold. Experiments in the paper compute this alignment for various explanation methods and adversarially trained models, and conclude that smoothgrad, integrated gradients and adversarially trained models generally produce gradients that are more aligned with the tangent space. 1) **No definition of "meaningful" explanation makes core hypothesis unverifiable**: The main drawback of this paper is that it fails to quantify or define what it means for an explanation to be "meaningful", which is central to the hypothesis presented in the paper. Without such a definition, it is impossible to verify the correctness of the hypothesis. For example, is an explanation more meaningful if it is more structured? If so, then it is plausible that for highly structured domains such as images, the tangent space is also similarly structured and hence "meaningful". However, when the underlying domain itself is unstructured, what constitutes a meaningful explanation? Note that it is perfectly fine if "meaningful" explanations exist only for highly structured domains, and it is important nonetheless to define these terms precisely to be able to verify their correctness. It is also unclear if this "meaningfulness" is distinct from the so-called "faithfulness" condition, where the explanation method must accurately reflect model behaviour. For instance, an explanation can be highly structured but be unrelated to the underlying model. How do we guard against these issues in a coherent definition of "meaningful" explanations? 2) **No quantitative metrics to measure saliency map quality**: Similar to point 1, the paper does not compute any quantitative metrics regarding the quality of saliency maps, besides the alignment of the saliency map with the tangent space.
The hypothesis in the paper is that tangent-space-aligned saliency maps are higher in quality, and the experiments in the paper demonstrate simply that some saliency maps are more aligned to the tangent space than others, but the question remains - are they of higher quality according to some well-defined metric? Unfortunately the experiments do not answer this question and this is yet another major drawback of the paper. 3) **Clarification regarding gradient direction within the subspace**: In the setting proposed in the paper, the tangent space is a k-dimensional subspace of R^d. However the gradient corresponds to a single direction within this subspace. It is unclear to me whether any specific direction within this subspace should be preferred or whether any direction is equally good. Some discussion on this would be illuminating. 4) **Missing highly related reference**: The paper misses a reference to a highly related work - Srinivas & Fleuret, "Rethinking the role of gradient-based attribution methods for model interpretability", ICLR 2021. Both papers are similar in that they hypothesize that discriminative models seem to have a generative modelling / manifold learning component which ensures that the gradients are related to the underlying data distribution. However they also present different hypotheses in the sense that Srinivas & Fleuret state that model gradients are "interpretable" if they are aligned to the gradients of the data distribution, whereas this paper posits that gradients are aligned to the tangent space of the data manifold. The above paper also shows that pre-softmax gradients can be arbitrarily structured, which seems related to section 5 of the current paper "generalization does not imply alignment with the manifold". Overall I think this is an important point to discuss and compare the two hypotheses presented in both papers. On a related note it would be nice to present visualizations like Figure 2 where the gradient components are presented in the normal space and tangent space, but for image datasets such as CIFAR10/100. 5) **Nice experimental approach**: On a more positive note, I like the approach taken by this paper to verify its hypothesis - by explicitly generating data that lies on a manifold. I also like Figure 4, which shows how such alignment changes during training. This seems to point to some form of approximate manifold learning being performed by the model implicitly. Figure 5 is also very interesting, as it shows the dramatic shifts that adversarial training can produce. I'm wondering whether a similar observation holds true for simple gradient norm regularization, which is also shown to boost robustness? Overall, while I certainly think that this paper makes a hypothesis that is interesting and is at least partly true, it does not make its definitions (see point 1) and experiments (point 2) precise, which makes it impossible to prove or disprove the hypothesis. I would be willing to accept this work only when the paper makes a more clearly stated hypothesis, and designs similarly clear experiments.
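For concreteness, the quantity these reviews keep returning to, the "fraction" of a gradient that lies in the tangent space (Formula 1 in the paper, per the third review), can be illustrated with a toy computation. The snippet below is our own sketch using a random orthonormal basis as a stand-in tangent space, not the paper's code; it also illustrates the reviewer's point that the cosine-style fraction and its squared version behave differently.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 50, 5                                  # ambient and tangent dimensions (toy values)
U, _ = np.linalg.qr(rng.normal(size=(d, k)))  # orthonormal basis of a hypothetical k-dim tangent space
g = rng.normal(size=d)                        # stand-in for a gradient / attribution vector

g_tan = U @ (U.T @ g)       # component of g inside the tangent space
g_orth = g - g_tan          # component orthogonal to the tangent space

cos_frac = np.linalg.norm(g_tan) / np.linalg.norm(g)   # cosine-style "fraction"
sq_frac = cos_frac ** 2                                # squared version suggested by the reviewer

# The squared fractions of the two components sum to 1 (Pythagoras); the cosine-style ones do not.
print(cos_frac + np.linalg.norm(g_orth) / np.linalg.norm(g))          # generally != 1
print(sq_frac + (np.linalg.norm(g_orth) / np.linalg.norm(g)) ** 2)    # == 1 up to float error
```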
This paper studies the hypothesis that gradient-based explanations are more meaningful the more they are aligned with the tangent space of the data manifold. The reviews are negative overall. The general feeling is that the paper reads like a set of subjective observations about the meaningfulness of explanations and their relationship with the data manifold, plus tangential theory. There isn't a coherent story.
It is a well-written paper that flows well. I think the community will find it interesting, as its application has not been well explored (yet) with deep generative models. The paper also demonstrates a theoretical framework for doing so, which many readers will probably find interesting, too. My main concern with this paper is the low precision from the latent space classification experiment. In the Conclusion section, the authors state ‘However, the latent representation that the MoPoE learns when learning to reproduce the data is still meaningful in the sense that it can be separated into the different classes the data belong to.’ However, a precision below 0.5 effectively means the classification predicts more false positives than true positives; therefore I am not sure their claim is justified: how useful is the latent representation their model encodes? Also, I am not sure that picking frontal and lateral slices from the CXR images qualifies as different modalities (if I understood the meaning of the F and L images correctly - it is not explained in the paper). Why not simply use two modalities, 3D CXR and text reports?
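As a quick check of the arithmetic behind this concern (nothing beyond what the review states): with the usual definition $\\mathrm{precision} = \\frac{TP}{TP + FP}$, a precision below $0.5$ is equivalent to $FP > TP$, i.e., among the samples predicted positive, false positives outnumber true positives, which is exactly the reviewer's reading.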
The paper addresses a challenging yet important issue of multimodal learning (images and reports) with deep generative models. The reviewer is fairly convinced by the proposed method (based on an ICLR paper) and the potential for the application. However, the results currently show somewhat limited performance. The work will likely stimulate fruitful discussions and can therefore be accepted as a short paper.
The paper presents various upper bounds (Rademacher complexity based) and lower bounds (fat shattering dimension based) on single hidden layer neural networks. These bounds shed light on the question of whether a bounded spectral norm of the weight matrix is sufficient for having width-independent uniform convergence guarantees. Moreover, there are similar discussions for Frobenius norms and input dimension dependence. The paper considers both generic and convolutional neural networks. Contributions of the paper are summarized in 8 theorems: * Theorem 1 shows that the fat shattering dimension scales with the network width for nonsmooth activation functions, if only the spectral norm is bounded. * Theorem 2 generalizes Golowich et al 2018 and Neyshabur et al 2015 to show that a Frobenius norm bound is sufficient to bound the sample complexity for Lipschitz activation functions (width-independent bound) using Rademacher complexity analysis. * Theorem 3 shows that the fat shattering dimension for Frobenius norm bounded networks is input dimension dependent (for smaller input dimensions). * Theorem 4 shows that the spectral norm is sufficient to bound the sample complexity for polynomial activation functions. * Theorem 5 extends the result to the multilayer case for polynomial activation functions of type $z^k$. Lemmas 4 and 5 are crucial for the proofs of Theorems 4 and 5. * Theorem 6 shows that the spectral norm is sufficient for convolutional networks with a linear last layer. * Theorem 7 shows that the Rademacher complexity bound for convolutional networks with pooling has logarithmic dependence on width. * Theorem 8 shows, using the fat shattering dimension, that the logarithmic width dependence is unavoidable. **Strengths:** From a technical perspective, the paper provides a valuable contribution by a clear exposition of the role of the spectral and Frobenius norms in uniform convergence results. The paper is well written. Given the sheer volume of works on explaining the generalization error of neural networks, the authors do a good job connecting the results to the existing works, for instance in Remark 2 on Bubeck et al 2021 or Remark 3 on implicit regularization. The paper contains many theoretical results of interest, combines many ideas from previous works, and provides new proof techniques, notably for Theorems 4 and 5 (to the best of my knowledge). The proofs are well presented and are sound. Overall, this is a great paper and I enjoyed reading it. **Weaknesses:** * Although this might seem like a gratuitous comment and request, I think having numerical support, even a toy example, would be good to support the theory. * I have some concerns about the O_\\Phi term in Theorem 4 (see below). * I feel that the authors do not adequately comment on the implications of the current result for a general theory of generalization for deep learning. Covering all existing works is of course not possible; however, works like Nagarajan et al 2019, or Zhang et al ICLR 2017 are widely discussed. It is at least expected that the authors clarify in which regime their bounds are applicable and do not suffer from the examples of those papers. As mentioned by the authors themselves, there is a debate about whether uniform convergence can explain generalization in neural networks, mainly supported by some carefully designed experiments. It is not clear how one should understand the bounds of this paper in light of papers like Nagarajan et al 2019. <doc-sep>This paper studies norm-based uniform convergence bounds for two-layer neural networks.
Their results give a tight understanding of spectral-norm-based uniform convergence. In particular, they proved that the spectral norm is not sufficient in general settings to get a uniform convergence result. However, for NNs with certain smoothness conditions or some convolution structures, the spectral norm is sufficient. Overall, this paper is well-written and clear. The authors show that, in general, bounding the spectral norm cannot lead to size-independent guarantees. This negative result is quite interesting and insightful. However, I have a concern about the significance of the results. Since the size of the parameters is known through the training, it seems unnecessary to get a size-independent guarantee. Moreover, the spectral-norm-based convergence result seems hard to apply both empirically and theoretically, which may limit the application of the convergence result. The authors have addressed their work's limitations and potential negative social impact. <doc-sep>Based on bounded norms of the weight matrices, the authors investigated upper and lower bounds on the sample complexity of neural networks with one hidden layer in this research. They demonstrated that, in contrast to bounding the Frobenius norm, bounding the spectral norm generally cannot result in size-independent guarantees - although it is not surprising that the spectral norm is insufficient to get a width-independent sample complexity bound, the paper and its theoretical analysis are very important. The constructions did, however, highlight two situations in which the lower bounds can be avoided and spectral norm control is sufficient to provide width-independent guarantees. The first scenario is when the activations are sufficiently smooth, and the second is in certain situations involving convolutional networks. In general, this is a very important paper, as we still lack a very basic understanding of the behavior of deep neural networks. Strengths: 1. A very solid theoretical paper - the theorems and proofs are exciting. 2. The paper deals with very important and necessary questions in the field of deep learning; not many papers focus on the theoretical side of deep learning - I find this paper a very important step towards better understanding DNNs. Weaknesses: Although it is clear that the paper is a theoretical one, I have the following minor comment. The writing could still be improved, giving some intuitive explanation and details as to why each theorem holds and what each theorem means -- it took some time to understand the theorems and the details. Some simple numerical experiments would help justify the theory.
I think that this is an interesting paper: well written, and one that adds to the current body of knowledge on understanding the generalization behaviour of deep learning algorithms. In terms of originality I am not sure if the paper introduces new proof techniques; maybe the authors can elaborate on the novelty of their technical and mathematical contributions a bit more? This has since been adequately addressed.
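For readers less familiar with the setting, a typical formalization of the hypothesis classes discussed in these reviews (our notation; the paper's exact definitions may differ) is $\\mathcal{F}_{\\mathrm{spec}} = \\{ x \\mapsto u^{\\top}\\sigma(Wx) : \\|W\\|_{2} \\le s \\}$ versus $\\mathcal{F}_{\\mathrm{Frob}} = \\{ x \\mapsto u^{\\top}\\sigma(Wx) : \\|W\\|_{F} \\le f \\}$, where $W \\in \\mathbb{R}^{m \\times d}$ and $m$ is the width. Since $\\|W\\|_{2} \\le \\|W\\|_{F} \\le \\sqrt{m}\\,\\|W\\|_{2}$, the Frobenius constraint is the stronger requirement, which is consistent with the picture in these reviews: spectral-norm control alone can leave a width dependence (the fat-shattering lower bounds), while Frobenius-norm control removes it.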
The paper proves a novel, tighter norm-based bound for the generalization error of two-layer networks. All the reviewers agree that this is an important theoretical result and that the paper should be accepted.
The paper proposes a hypergraph neural network model exploiting a double attention mechanism in the message passing scheme. The overall architecture is designed to process sub-hypergraphs once representations are computed for the nodes and edges. The learning objective includes a regularization term based on the hypergraph Laplacian. The proposed model is evaluated on disease classification based on gene-genetic pathways data, showing higher F1 values with respect to a set of competitors. Finally, due to the attention mechanism, interpretations in terms of gene pathways (hyperedges) can be derived from the model outputs. Strengths - the model incorporates a dual attention mechanism applied to nodes and hyperedges respectively, exploiting the same attention context vector. This design choice is claimed to prevent overfitting by reducing the number of parameters - the architecture includes an attention module to derive subgraph representations. This scheme allows the application of the model in an inductive setting - the model compares favourably with respect to the considered competitors on two benchmarks in genetic medicine Weaknesses - The presentation of the model is tightly interconnected with the proposed application in genetic medicine, making it appear less general - Given the focus on bioinformatics, the paper is hard to follow for readers not completely familiar with this topic. - The effect of the number K of layers is not investigated (maybe I missed it, but the number of layers used in the experiments is not reported). It is known that (Convolutional) Graph Neural Networks suffer from oversmoothing when the depth of the network increases. This may hinder the results in some applications since uniform representations of nodes are developed. Attention may perhaps limit this phenomenon, but on the other side node/edge regularization may add a related effect. The ablation study shows a positive effect of regularization, but some discussion/analysis should be provided. The authors discuss some limitations that need further work in a section of the paper, both for the model architecture and the specific application considered for the evaluation. As listed in the weaknesses, a more general description (not tailored to the considered task in genetic medicine) would have improved the presentation of the proposed hypergraph neural network architecture. <doc-sep>The authors propose a GNN approach to learn embeddings of sub-hyper-graphs. The approach has an explicit treatment of hyper edges, e.g., it does not resort to clique expansion, and makes use of a regulariser based on the hyper graph Laplacian. The application chosen is that of disease prediction for patients (modelled as sub hyper graphs) given a pathway network (modelled as a hyper graph with genes as nodes and sets/pathways as hyper edges). Pos Explicitly treating hyper edges as first class citizens in the GNN modelling is of interest, since in this way hyper edges can be the subjects of notions of regularisation or attention. Neg The relative importance of the various ideas introduced is not clear, i.e., a better experimental design with clearer baselines and an ablation study is warranted. More specifically: 1. is the regularisation effective or of importance? (ablation case) 2. is the proposed architecture much different from using a standard graph neural network with attention on a pre-processed hyper graph?
In particular the pre-processing could consist in representing a hyper graph as a bi-partite graph where hyper edges are materialised as the nodes of one part and the genes are the nodes of the other part. (experiment) Note that the whole discussion regarding strong duality would follow automatically in this case. 3. is the introduction of WSA (the weighted subgraph attention) needed? what happens if we replace the whole subgraph treatment by Mji directly? I.e., what if we consider a subgraph as simply the sum of the nodes (genes) that are of interest (with mutations) for each patient (subgraph), that is, we could learn directly the embedding of the nodes/hyperedges for the classification task when they are simply summed up for each patient. (experiment/ablation) yes. <doc-sep>This paper suggests sub-hypergraph representation learning, a niche problem related to subgraph and hypergraph learning. To tackle this problem, the authors propose the SHINE (SubHypergraph Inductive Neural nEtwork) model, which consists of three modules: strongly dual attention message passing, hypergraph regularization, and weighted subgraph attention. Experiments on two real-world datasets demonstrate the superiority of SHINE on performance (against baselines including GNNs for hypergraphs) and interpretation (using attention). ## Strengths The authors present a novel and niche problem of sub-hypergraph representation learning which has not been explored in the GNN community. A specific example (cancer patients as subgraphs of genes in hypergraphs) can be a practical application for this task. The performance improvement by the authors’ approach is significant. ## Weaknesses However, I think this paper is not ready for publication for the following reasons. First, the technical novelty of SHINE is limited. This model consists of several parts, and each of them is a slightly modified version of existing approaches. Applying attention to both nodes and (hyper)edges is presented in HyperGAT, and the authors are aware of it. Nevertheless, the idea of (strongly) dual attention is aligned with the dual form of hypergraphs, and I can see the novelty of this paper here. However, explicit regularization by the Laplacian (Hypergraph regularization) [1] and pooling by attention weights (Weighted Subgraph Attention) [2] are well-known methods in the GNN literature. In this case, SHINE's novelty is limited to Strongly Dual Attention, and a more detailed analysis of this part is required. Second, related to the first paragraph, there are no rigorous ablation studies on the architecture. As many submodules make up the model, it is necessary to study where the performance gain comes from. In the supplementary material, only the ablation study on hypergraph regularization is presented, and the study on dual attention message passing is presented by comparison with HyperGAT. However, there are other differences in attention forms between HyperGAT and SHINE, and comparing these two does not provide a fully controlled ablation study of dual attention message passing. I recommend the authors retain all parts except parameter-sharing in the attention. In addition, the performance comparison between SHINE with/without WSA and other GNNs with/without WSA should also be presented. Third, I am skeptical that model interpretation by attention is an exclusive strength of SHINE. There are learned attentions in other attentional models like HyperGAT. Can these models provide interpretation at the same level as SHINE?
Can you compare interpretations between models? Does SHINE give more precise explanations than other models? Lastly, there are missing baselines for subgraph classification; in particular, SubGNN can be a strong baseline. Of course, SubGNN is not designed for hypergraphs, but it is straightforward to create graphs from hypergraphs, e.g., via clique expansion. The transformation from hypergraphs to graphs is done only once before training; thus, it has a low overhead. Comparing SHINE and GNNs-for-subgraphs would help justify that the specific problems in this work should be represented as a hypergraph. ## References - [1] Learning with Hypergraphs: Clustering, Classification, and Embedding - [2] Gated Graph Sequence Neural Networks The authors do not address the potential negative societal impact of their work. This paper targets a high-level machine learning problem called sub-hypergraph representation learning; however, all datasets are related to a particular area: genes, pathways, and diseases. There could be a potential societal impact that should be considered in real-world applications in this area (e.g., privacy). It would be nice if the authors addressed this point. <doc-sep>Hypergraph neural networks can exploit multi-way connections in relational datasets but they are underexplored in domains such as genetic medicine. In this paper, a hypergraph attention-based message passing neural network is proposed for sub(hyper)graph-level tasks, e.g., * genes: nodes, * pathways: hyperedges, * patients: subgraphs, * predict cancer type of patient: task. Experiments on genetic medicine datasets demonstrate the effectiveness of the proposed method SHINE: SubHypergraph Inductive Neural nEtwork. **Originality** Even though the paper explores an underexplored research topic in an interesting domain (subgraph representation learning for hypergraphs in genetic medicine), the methods proposed are incremental extensions of existing methods and not novel combinations of existing techniques. Specifically, in Section 3.3, the ideas of hyperedge attention over nodes and node attention over hyperedges with parameter sharing are incremental extensions of well-known hypergraph attention networks. By viewing the nodes and hyperedges as two types of vertices of a (bipartite) heterogeneous graph, the ideas of strongly dual attention mechanisms would be incremental extensions of existing attention-based methods for heterogeneous graphs, e.g., see "A Survey on Heterogeneous Graph Embedding: Methods, Techniques, Applications and Sources". **Quality** The authors have discussed interesting weaknesses of their work (in addition to highlighting the strengths). Moreover, the baseline comparison (Table 3), interpretability analysis (Table 4), and ablation study (Table 3 in the supplementary) support the claims made in the paper empirically to an extent. However, formalising the key differences with existing similar methods (e.g., HyperGAT in lines 159-169) and confirming the differences with convincing (synthetic/real-world) experiments, e.g., on a dataset chosen cleverly to show clear failure of HyperGAT but success of SHINE, would improve the paper's quality. **Clarity** The paper is well organised. Details on datasets and hyperparameter tuning could help an expert to reproduce the results of the paper and build effective models (those with the best hyperparameters) from scratch. A discussion on computational complexity and an algorithm/pseudocode would further enhance the clarity of the paper.
**Significance** It is unclear from the paper why modelling genetic medicine datasets with hypergraphs, despite being a natural choice, is the best choice compared to straightforward alternatives. More specifically, it is unclear why a (bipartite) heterogeneous graph with genes as nodes of type 1, pathways as nodes of type 2, and patients as (sub) heterogeneous graphs would not be a reasonable choice. The paper can be improved by positioning and comparing with set-based methods for exploiting hyperedges in hypergraphs, e.g., You are AllSet: A Multiset Function Framework for Hypergraph Neural Networks, In ICLR'22. The authors have addressed the limitations and potential negative societal impacts adequately.
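Several of the reviews above suggest a plain bipartite (star-expansion) representation of the hypergraph as a baseline. For concreteness, a minimal sketch of that construction on a toy gene/pathway example (hypothetical names, not the paper's data or pipeline) is:

```python
# Toy illustration of the star-expansion baseline: genes form one node set,
# pathways (hyperedges) the other, and hyperedge membership becomes an
# ordinary bipartite edge.
pathways = {
    "pathway_A": {"gene_1", "gene_2", "gene_3"},
    "pathway_B": {"gene_2", "gene_4"},
}

bipartite_edges = [
    (gene, pathway)
    for pathway, genes in pathways.items()
    for gene in genes
]

# A patient (sub-hypergraph) is then just the subset of gene nodes they carry,
# together with the incident pathway nodes.
patient_genes = {"gene_2", "gene_4"}
patient_subgraph = [(g, p) for (g, p) in bipartite_edges if g in patient_genes]
print(sorted(patient_subgraph))
```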
The paper proposes a GNN that explicitly treats hyperedges, and makes use of strongly dual attention, hypergraph regularization, and weighted subgraph attention. The proposed method shows better performance than existing baselines on two genetic medicine datasets. Explainability is also demonstrated. Reviewers originally raised many concerns on presentation (too specialized for the target application), lack of ablation (the effectiveness of each proposed component is not clearly shown), novelty (a combination of small modifications of existing methods), and explainability (existing methods can do the same). The authors did an amazing job addressing most of the concerns: they reported additional ablation and baseline results, and showed that the proposed method still performs better and that each proposed component plays a significant role. Two reviewers have been convinced by the authors' response, while the other two have not, insisting that the novelty issue remains and that, given the limited novelty, more careful investigation is required for publication. This is a borderline paper, and I recommend acceptance because I think adjusting existing methods to target applications is important research even if the modifications are small. The proposed method significantly outperforms existing baselines (including the ones reviewers suggested), and the additional ablation study shows each of the proposed components is effective. On the other hand, I also sympathize with the reviewers with negative evaluations regarding the following comments: "formalising the key differences with existing similar methods (e.g., HyperGAT in lines 159-169) and confirming the differences with convincing (synthetic/real-world) experiments, e.g., on a dataset chosen cleverly to show clear failure of HyperGAT but success of SHINE, would improve the paper's quality." "The paper can be strengthened by positioning strongly dual attention in SHINE with different attention mechanisms in heterogeneous graph neural network literature (some are listed below): Heterogeneous Graph Attention Network, In WWW'19 HetGNN: Heterogeneous Graph Neural Network, In KDD'19 Metapath enhanced graph attention encoder for HINs representation learning, In BigData'19. MAGNN: Metapath Aggregated Graph Neural Network for Heterogeneous Graph Embedding, In WWW'20 Heterogeneous Graph Transformer, In WWW'20. There is no need to empirically compare and run them as baselines but explaining the key differences conceptually to make hypergraphs a more compelling choice for genetic medicine than heterogeneous graphs can strengthen the paper." I hope the authors will make a bit more effort to incorporate these suggestions in the final version.
To tackle the unsupervised skill discovery problem, the authors attempt to maximize the mutual information between (latent) skills and states $I(\\tau; z)$ by using nonparametric particle-based entropy estimation for $\\mathcal{H}(\\tau)$ and noise-contrastive estimation for $\\mathcal{H}(\\tau|z)$. They evaluate their method and the baselines on the Unsupervised Reinforcement Learning Benchmark (URLB), where the agents are pre-trained without extrinsic rewards and fine-tuned for downstream tasks with extrinsic rewards and fewer environment steps. **Strengths** * The experiments are done on URLB, which provides a good evaluation scheme for unsupervised RL. Well-established and common evaluation schemes are important for assessing methods empirically. * The empirical performance, which is provided with relevant statistics, is good compared to multiple baseline methods. * The analysis on the effect of different choices of multiple hyperparameters (Fig.6). **Weaknesses** * This work's originality is somewhat limited. Particle-based entropy maximization with state representations trained using a contrastive learning scheme has been explored in APT (Liu & Abbeel, 2021a). Using the same entropy maximization form for skill discovery was done by APS (Liu & Abbeel, 2021b). * The motivation for using noise-contrastive estimation is not entirely clear. While the authors make a comparison with CPC as a lower bound of mutual information, CPC is not very commonly used in skill discovery and the usual variational lower bound can already be tight if the variational approximation $q$ approximates the true distribution $p$ perfectly. * Theorem 1 is not technically correct because of the particle-based entropy term in $F_{CIC}$, which the authors themselves acknowledge by introducing the weighting hyperparameter $\\alpha$, since the estimator does not account for the proportionality constant (Sec.4.2). * The authors employ $I(\\tau; z) = \\mathcal{H}(\\tau) - \\mathcal{H}(\\tau|z)$ as the decomposition of the mutual information (the 2nd line in Sec.4.1), but they use $q(z|\\tau)$ instead of $q(\\tau|z)$ for the rest of Sec.4.1. * I think the derivation of the noise-contrastive estimator in Sec.4.1 needs more details. For instance, if there are noise samples, where do the correct samples come from? * Using the notation $\\tau$ for something other than trajectories can be misleading. * Lack of empirical analysis with different values of $\\alpha$. While the empirical results basically show that CIC can outperform multiple baselines on URLB with an appropriately tuned value of $\\alpha$, I am mainly concerned about the correctness of the claims and the novelty of the work. <doc-sep>The paper builds upon the DIAYN idea (Eysenbach 2018) that an agent could develop skills in an unsupervised environment by finding a set of skills that collectively visits the whole state space while encouraging each skill to cover a different subspace, and later using one of these skills to simplify the learning of a downstream task. In this paper skills are learned in an unsupervised way using a mutual information based objective. The mutual information between latent states T and skill vector Z, I(T;Z), is decomposed as I(T;Z)=H(T)-H(T|Z), which the paper argues leads to explicit maximization of the diversity of latent states T as well as to distinct skills with focused effects by penalizing with H(T|Z). More diversity equals better exploration of more distant states, leading to more interesting behaviors.
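For reference, the identities and estimators invoked in these comments are standard rather than specific to the paper: the mutual information can be decomposed either way, $I(\\tau; z) = \\mathcal{H}(\\tau) - \\mathcal{H}(\\tau|z) = \\mathcal{H}(z) - \\mathcal{H}(z|\\tau)$, and a generic CPC/InfoNCE estimator with critic $f$ and $N$ samples satisfies $I(\\tau; z) \\ge \\log N - \\mathcal{L}_{\\mathrm{NCE}}$ with $\\mathcal{L}_{\\mathrm{NCE}} = -\\mathbb{E}\\big[\\log \\frac{\\exp f(\\tau_i, z_i)}{\\sum_{j=1}^{N} \\exp f(\\tau_i, z_j)}\\big]$. How CIC instantiates $\\tau$, the critic, and the particle-based entropy term is precisely what the reviewers ask the authors to pin down.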
The paper explains that this decomposition also allows them to use higher-dimensional skill vectors that improve representational capacity and downstream performance. The paper develops a method to calculate the terms of this decomposition and formally shows that it is a lower bound for the true mutual information. The latent state entropy H(T) is estimated using an unnormalized k-nearest neighbor method requiring an ad hoc scaling factor \\alpha, and the conditional entropy H(T|Z) is calculated using an NCE supervised neural net. The paper illustrates the method by using their loss function to pretrain a DDPG architecture on unsupervised scenarios and then showing a benefit on downstream tasks. The proposed method, CIC, shows a larger IQM stabilized expert normalized score compared to state of the art methods. Experiments also show that high-dimensional skill vectors are important. The empirical evaluation is rigorous in comparison to common practice (120 runs and IQM stabilization) with strong state-of-the-art methods and baselines. I liked the structured review of prior art that organizes work in an interesting thematic way. I liked the empirical decomposition of reward into entropy and discriminator terms. It is interesting that latent state entropy is important early in learning and task discrimination is more important later on. Useful to know that higher dimensional skills are important (64D performed best). Technically I(T;Z) = H(T) - H(T|Z) = H(Z) - H(Z|T), so theoretically it should not matter which way it is decomposed. The way the terms are calculated, however, could have a significant effect on the practical performance. It isn’t clear what a “simple grid sweep of skills over the interval [0,1]” means. Early stopping does not “leak information” so much as stop wasted exploration in irrelevant parts of the state space? Typo: “we use particule” -> particle The text refers to optimality gap, but the figures seem to use expert normalized score, which I assume is algorithm performance / DDPG baseline performance. Did not fully understand the implications of the noise and projection argument, but it seems like an important and worthwhile design decision to investigate. The paper proposes a new algorithm for unsupervised behavior learning that is rigorously shown to be more effective and clearly argues for its design choices through supplementary experiments. <doc-sep>The paper proposes an algorithm (Contrastive Intrinsic Control) for unsupervised skill discovery by maximizing mutual information between skill latents and state transitions. The proposed algorithm is a refinement over existing methods [2]. It uses a contrastive method to estimate conditional entropy and measures entropy on state-transitions as opposed to simply states as done in previous methods [2]. The proposed algorithm shows good performance gains compared to existing competence-based skill discovery algorithms. Further, the paper also contains a rather extensive empirical evaluation of various skill discovery algorithms on the recently proposed URLB benchmark. Strengths: The empirical evaluation carried out in the paper is extensive with many baseline algorithms for skill-discovery from each class (knowledge-based, data-based, competence-based) on the recently proposed URLB benchmark. Further, evaluation metrics (IQM, Optimality gap etc.) used to measure performance are adopted from the recommendations in [1]. The discussion and analysis on the reasons for failure of current competence-based skill discovery algorithms is also a strength.
Ablation experiments (in Figure 6) help justify the design choices made by the CIC algorithm. Weaknesses: The proposed algorithm seems like a variation on an existing algorithm [2]. It refines some of the practical design choices used in the general framework of competence-based algorithms for unsupervised skill discovery. However, the authors have shown the differences between the various algorithms (in Table 1) and discussed their pros & cons which is useful to contextualize their contributions. I have a few questions for the authors on the high-level motivations behind their algorithm design and would like some clarification on the following: 1] Intuitively, we can think of the notion of a skill as a form of abstraction of long-term behavior, for example motion primitives like walking, flipping etc. as shown in the paper, which occur over maybe a few tens or hundreds of steps. So, why do several skill discovery algorithms use only highly localized information in the state space (various tau instantiations such as single states or in this case single state-transitions (s, s’)) to infer the corresponding skill latents? Isn’t it more intuitive to infer these latents from more “global” quantities like entire episodes of policy rollouts? Could the authors comment on this? 2] To maximize the mutual information between state transitions and skills as defined in CIC, we need to maximize the first term ($H[\\tau]$) and minimize the second term ($H[\\tau | z]$). This conditional entropy would be minimal when the corresponding distribution $p(\\tau | z)$ is sharp/narrow (ideally like a delta-like density function) over the state-transition space. This seems rather counterintuitive to me. If we think of a single skill latent, say walking, shouldn’t the density $p( \\tau | Z=z_{walk})$ have a high value for all the possible state-transitions of the walking primitive, which would be a rather wide distribution? Wouldn’t the pressure to keep this distribution as narrow as possible over the state-transition space given a single skill latent lead to several latent codes which essentially cover the same underlying behavior (redundant copies of different walking styles)? Wouldn’t this be undesirable for generalization on downstream tasks which would require composing these unsupervised skills? 3] The authors argue for the need for increasing the dimensionality of the skill latents to ensure skills can be decoded back to a diverse set of behaviors. Couldn’t this be achieved by largely retaining the small latent spaces used for skills in prior work and using a more expressive policy decoder? This could allow for greater representation flexibility when the skill latents are decoded back to the action space and ensure that skill latents give rise to a diverse set of behaviors. Writing/Presentation: The paper on the whole is well-written and easy to follow. I found the phrasing in this sentence rather confusing and ambiguous. “If the set of behaviors outnumbers the set of skills, this will result in degenerate skills -- when one skill maps to multiple different behaviors”. What does “behaviors” refer to in this context -- action trajectories? If so, isn’t it expected that the set of skills would be much smaller (essentially a compressed representation) than the total number of “unique” action sequences? Unless the authors mean something different by “behaviors” here. A few minor typos I found are below: “Why most competence-base algorithms …. ” -> “competence-based algorithms … ” “Both CIC and CIC use ...” -> do you mean CIC and APS?
In Algorithm 1 -> “Contrastive Intrinisc Control” -> “Intrinsic” [1] Agarwal et al., “Deep Reinforcement Learning at the Edge of the Statistical Precipice”, NeurIPS 2021. [2] “APS: Active Pretraining with Successor Features”, ICML 2021. Although the proposed algorithm CIC is a variation on an existing algorithm [2], it shows impressive performance gains over several existing algorithms on a large suite of continuous control tasks from the URLB benchmark. This large-scale empirical evaluation and analysis of several unsupervised skill discovery methods on a standard benchmark such as URLB would benefit the community. <doc-sep>This paper tackles the problem of unsupervised pre-training of a (code-conditioned) policy to improve the performance of downstream RL tasks. In line with previous works in unsupervised skills discovery, it proposes a method, called CIC, to maximize a variational lower bound to the mutual information between the code and the visited states. The lower bound is obtained through the combination of non-parametric state entropy estimation and a contrastive predictive coding loss for the conditional entropy. The paper provides an empirical analysis of CIC over a set of continuous control domains. I report below some detailed comments and concerns that the authors might address in their author response. MUTUAL INFORMATION OBJECTIVE 1) There is now a bunch of works targeting the mutual information between a code and the visited states for the purpose of unsupervised RL (some of them are summarized in Table 1 of the paper). Every work proposes a lower bound to the mutual information. I was wondering if there is a formal way to compare the lower bound of CIC with previous works. Which one is the tightest? Do the authors believe that getting the tightest bound of the MI is really the goal in this setting, or whether even maximizing the exact MI would not necessarily be better than other methods (i.e., a good inductive bias matters more than the MI approximation)? 2) Since the work motivates the approach as an approximation of the MI, as is common in previous works as well, I was wondering if the authors also considered a direct non-parametric estimation of the MI (e.g., https://journals.aps.org/pre/pdf/10.1103/PhysRevE.69.066138) instead of independent estimates of the state entropy and the conditional entropy. Especially, such a direct estimation would not require the additional hyper-parameter $\\alpha$. METHODOLOGY 3) Can the authors clarify what the entropy term $H(\\tau)$ is denoting? Is it the entropy of state-to-state transitions, or the entropy of the joint probability of two states within a trajectory, or something else? Especially, what is the intuition behind using $\\tau$ instead of $s$? 4) The method is built upon a non-specific base algorithm (DDPG) that was originally developed for standard RL, i.e. RL problems where the reward does not change over time but comes from a consistent reward function. Did the authors experience any instability working with the non-Markovian intrinsic rewards? Do they believe that the methodology could benefit from an objective-specific algorithm, i.e., an algorithm carefully designed to work with this kind of intrinsic reward? EXPERIMENTS 5) How can we rule out the possibility that CIC is just a better way to pre-train a DDPG agent in these settings w.r.t. other baselines?
This would be significant anyway, but do the authors believe that the same results would generalize to different base algorithms (say TRPO, SAC, A2C...)? 6) Moreover, DDPG is known to be quite strong on continuous control tasks. Do the authors believe that the combination of DDPG and CIC would be successful in different settings (e.g., visual discrete domains such as Atari games) as well? Or perhaps the base algorithm should be selected to accommodate the specific domain? 7) CIC is quite similar to APS (Liu and Abbeel, 2021), as they both employ non-parametric entropy estimation and a discriminator loss, based on contrastive predictive coding and successor representations respectively. However, in the reported results CIC is way better than APS. Do the authors think the different discriminator loss is the main cause of this performance gap, or is there some other factor at play? Can they compare the discriminator rewards of APS and CIC (as in Figure 5)? 8) From my understanding, the empirical results are not directly comparable with the URLB (Laskin et al., 2021) despite a very similar setting. I see that the benchmark is very recent, and thus should be considered concurrent to this work, but I believe that reporting a direct comparison with their results would further strengthen the empirical analysis. 9) The results section seems to imply that a key factor behind the improvement over previous competence-based methods is the ability of CIC to cope with a larger skill space. Can the authors clarify why previous methods are prevented from working with a comparable skill space? Can they also provide a comparison with previous methods when working with the same (potentially lower-dimensional) skill space? MINOR - Adaptation efficiency paragraph: Fig. 3 is reported instead of Fig. 4; - The normalized score of Fig. 6 does not seem to match the one of Fig. 4; - It is not easy to track the different baselines in Fig. 4 (top). What is this plot representing? To my understanding, the main selling points of this paper are: - It tackles the very relevant problem of unsupervised pre-training for reinforcement learning; - The methodology is clear, and a quite natural extension of previous works in the unsupervised skills discovery literature; - Strong empirical results: CIC seems to significantly advance the state-of-the-art performance of unsupervised pre-training in continuous control domains. Instead, potential shortcomings are: - The novelty seems limited, as CIC is essentially similar to APS (Liu and Abbeel, 2021) with a different discriminator loss (which has been employed for unsupervised skills discovery before); - It is not completely clear from the paper what the specific factors are that lead to such a performance improvement over previous works. Whereas the reported empirical progress might be a sufficient reason for acceptance, my current evaluation is just slightly positive in consideration of the mentioned concerns. I do not think the limited novelty is a crucial problem here; if the authors could better clarify in their response why the CIC methodology is so successful, I will consider raising my score to a clear accept.
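For readers unfamiliar with the "particle-based" entropy term that several of these reviews question, a generic kNN-style estimator can be sketched as follows. This is our own illustration under the assumption of a Euclidean embedding space; it does not reproduce the paper's exact estimator, its scaling factor $\\alpha$, or the NCE discriminator.

```python
import numpy as np

def knn_entropy_reward(states, k=12):
    """Generic particle-style entropy proxy: the intrinsic reward of each state
    embedding grows with the distance to its k nearest neighbours in the batch.
    Sketch only -- not the authors' code."""
    dists = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)  # (n, n) pairwise distances
    knn = np.sort(dists, axis=1)[:, 1:k + 1]          # k smallest non-self distances per particle
    return np.log(1.0 + knn.mean(axis=1))             # one intrinsic reward per particle

rewards = knn_entropy_reward(np.random.randn(256, 64))  # e.g. a batch of 256 state embeddings
```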
The paper addresses the question of skill discovery in reinforcement learning: can we (without supervision) discover behaviors so that later (when supervision is available via a reward signal) we can learn faster? The paper proposes a new contrastive loss that an agent can optimize for this purpose, based on a decomposition of mutual information between skills and transitions. The reviewers praised the extensive experimental evaluation and good empirical results, as well as the analysis of failure modes of related algorithms. Unfortunately, there appeared to be errors in the derivation and implementation. (These include typos in derivations that made them difficult to follow, as well as uploaded code that didn't match the experimental results.) While the authors claim to have fixed all of them, the reviewers were not all completely convinced by the end of the discussion period. In any case, these errors caused confusion during review; so, whether the errors are fixed or not, it seems clear that there hasn't been time for a full evaluation of the corrected derivations and code. For this reason, it seems wise to ask that this paper be reviewed again from scratch before being published.
The paper is well written and clearly explained. The previous literature is adequately discussed and the experimental results are clear. I think that the subject of interclass relationships is a relevant and important one, since there are many conditions with very subtle differences in their appearance on imaging and few authors make use of this information. The problem being solved does not seem to be particularly common or clinically relevant (see detailed comments below) and the authors do not address this or speculate about how the method could be generalized to more common/relevant tasks. The dataset description is a bit unclear (see detailed comments). It is always better to include a data table for clarity. <doc-sep>In general, the idea of incorporating interclass relationships is quite novel in the medical domain and might be very interesting in some areas as an alternative to standard classification. Additionally, designing a framework to learn the SORD relationships directly from data is challenging and I am impressed that the authors could make the network converge with such a large optimization within the loss function. The collection of such a large dataset is definitely worth mentioning. Following the open science spirit of MIDL, I would highly recommend making the data publicly available and including a link in the document. Unfortunately, I have several concerns about the manuscript, and the main one brings me to the very basic assumption of the presented work: Regarding IV CT phase classification, it is not clear to me why some phases are more similar to each other than others are. If the contrast bolus is in the arteries, they have a high contrast, and the veins have a high contrast when the bolus arrives there. I do not see why misclassifications between these images should be counted less harshly and why this should help the network to gain performance. Or, to reframe it: I doubt that there is some causal relationship between the different phase images that guides the network towards better generalization! Or is the bolus spread out so much over the vascular system that there are some "inaccurate states"? And why could that not be addressed via data curation? To be fair, your results show the superior performance of the SORD approach. But this brings me to my second concern that is closely related to the first one: I am missing a discussion of the results! As stated above, I can imagine that the performance increase might not follow the class relationships but may be induced by adding additional gradient information to each class. As each datapoint now spans multiple classes, the SORD approach works like some form of oversampling or data augmentation, and hence provides higher robustness on small datasets. I am sure you have a great argument against this hypothesis, but this is exactly what I am missing in the discussion. Especially for such unintuitive results, an adequate discussion is a must! Furthermore, I have several major concerns: -The aim of the presented paper is a bit ambiguous and partially unclear. The title claims a very general methodological development, which should incorporate in-depth analysis of the performance and hyperparameter choices on several standard datasets. However, all results are computed on a dataset called "proprietary", leaving the impression that the goal of the study is to apply a known computer vision method to IV CT phase classification. Please clarify the storyline!
-In 5.1, you state that "the ordinal permutation which contributes the lowest training loss would be assigned the largest weight during training". I do not see why the model should increase any of the lambdas as the loss function is basically a weighted sum with the lambdas as weights. In fact, the best strategy for the model to decrease the loss is to set all lambdas to zero. How do you make sure that this will never happen? Did you assess the lambdas after training? Please comment on that! -Unfortunately, the structure of the dataset remains unclear. The claimed 264,198 samples in the training set do not match 90% of the full dataset, consisting of 334,079 samples. It is not clear why only subsamples of the validation and test data were used. Additionally, it is not described whether the splits accounted for patient and/or center separation across the sets. Hence, it is unclear whether data of the same patient could be in the training and the validation set, for example. Furthermore, I do not understand why the ground truth labels in the training dataset could be obtained by simply reading the DICOM header while the test data was "manually labeled by an expert radiologist". If the header information is available, I would assume that this data is optimal and hence the expert is not necessary and, in the worst case, introduces human errors to the ground truth. Please revise section 6.1 and make the data used in this study clear! -Even though the paper is well written in general, the mathematical notation is partially insufficient and some variables appear without explaining what they are. Details on this can be found in the "Detailed Comments" section. <doc-sep>* There is substantial interest in encoding ordinal relationships between classes into the training process of neural networks, e.g., when training networks that classify images according to some kind of severity grade, which is essentially a hybrid between a classification task and a regression task. This paper investigates an elegant approach for circular dependencies. * The experiments demonstrate that encoding prior knowledge about the order of the classes improves the performance, especially when a limited amount of training data is available. * Next to explicitly encoding a specific circular relationship of the classes, the paper also investigates variants of the approach where the relative weights of the classes are learned as part of the training process. * The approach is validated with almost 200 CT scans. * The experiments with learning the optimal encoding are somewhat limited: only a single experiment was performed. Results for different training set sizes and different values for the parameter s would have been valuable. <doc-sep>The paper addresses a common challenge faced when training models with labels which are coarse and could benefit from domain knowledge to more correctly define them. The method seems overly complicated in its presentation. The evaluations could include more common baselines to justify the complication of this method. The evaluations are not very convincing.
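As background for the SORD-style soft labels that these reviews debate (the lambdas, the circular phase relationships, and the possible oversampling-like effect), a generic soft ordinal encoding can be sketched as follows. The circular distance and the scale below are illustrative assumptions, not the paper's learned inter-class weights.

```python
import numpy as np

def circular_sord_targets(true_idx, n_classes=4, s=2.0):
    """Generic SORD-style soft targets for circularly ordered classes
    (e.g. contrast phases). Sketch only: the distance function and scale s
    are assumptions, not the paper's learned weights."""
    idx = np.arange(n_classes)
    ring = np.minimum(np.abs(idx - true_idx), n_classes - np.abs(idx - true_idx))
    logits = -s * ring.astype(float)
    return np.exp(logits) / np.exp(logits).sum()   # soft label distribution summing to 1

print(circular_sord_targets(true_idx=1))  # probability mass spread onto neighbouring phases
```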
This is a borderline paper -- while the underlying idea is good and relevant, the authors don't do a very good job of selling it; their experiments are performed on a very specific task with limited clinical relevance. The reviewers had a number of questions regarding the experimental setup, which were largely answered in the rebuttal.
This paper discusses recurrent networks with an update rule of the form h_{t+1} = R_x R h_{t}, where R_x is an embedding of the input x into the space of orthogonal or unitary matrices, and R is a shared orthogonal or unitary matrix. While this is an interesting model, it is by no means a *new* model: the idea of using matrices to represent input objects (and multiplication to update state) is often used in the embedding-knowledge-bases or embedding-logic literature (e.g. Using matrices to model symbolic relationships by Ilya Sutskever and Geoffrey Hinton, or Holographic Embeddings of Knowledge Graphs by Maximilian Nickel et al.). I don't think the experiments or analysis in this work add much to our understanding of it. In particular, the experiments are especially weak, consisting only of a very simplified version of the copy task (which is already very much a toy). I know several people who have played with this model in the setting of language modeling, and as the other reviewer notes, the inability of the model to forget is an actual annoyance. I think it is incumbent on the authors to show how this model can be really useful on a nontrivial task; as it is we should not accept this paper. Some questions: is there any reason to use the shared R instead of absorbing it into all the R_x? Can you find any nice ways of using the fact that the model is linear in h or linear in R_x? <doc-sep>This is a nice proposal, and could lead to more efficient training of recurrent nets. I would really love to see a bit more experimental evidence. I asked a few questions already but didn't get any answer so far. Here are a few other questions/concerns I have: - Is the resulting model still a universal approximator? (provided large enough hidden dimensions and numbers of layers) - More generally, can one compare the expressiveness of the model with the equivalent model without the orthogonal matrices? with the same number of parameters for instance? - The experiments are a bit disappointing as the number of distinct input/output sequences was in fact very small and, as noted by the author, training becomes unstable (I didn't understand what "success" meant in this case). The authors point out that the experiment section needs to be expanded, but as far as I can tell they still haven't, unfortunately. <doc-sep>My main objection with this work is that it operates under a hypothesis (that is becoming more and more popular in the literature) that all we need is to have gradients flow in order to solve long term dependency problems. The usual approach is then to enforce orthogonal matrices which (in the absence of the nonlinearity) result in unitary Jacobians, hence the gradients do not vanish and do not explode. However this hypothesis is taken for granted (and we don't know it is true yet) and, outside of synthetic data, we do not have any empirical evidence that is strong enough to convince us the hypothesis is true. My own issues with this way of thinking are: a) what about representational power; restricting to orthogonal matrices means we cannot represent the same family of functions as before (e.g. we can't have complex attractors and so forth if we run the model forward without any inputs). You can only get those if you have eigenvalues larger than 1. It also becomes really hard to deal with noise (since you attempt to preserve every detail of the input, or rather every part of the input affects the output). Ideally you would want to preserve only what you need for the task given limited capacity.
But you can't learn to do that. My issue is that everyone is focused on solving this preservation issue without worrying about the side-effects. I would like one of these papers going for Jacobians with eigenvalues of 1 to show that this helps in realistic scenarios, on complex datasets.
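For concreteness, the update rule h_{t+1} = R_x R h_t discussed in the first review above can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' implementation; the QR-based sampling of the orthogonal matrices is an assumption made only for the sketch. It shows why the hidden-state norm (and hence the gradient magnitude, absent nonlinearities) is exactly preserved.

```python
import numpy as np

def random_orthogonal(d, rng):
    # QR decomposition of a Gaussian matrix yields an orthogonal matrix
    q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    return q

rng = np.random.default_rng(0)
d, vocab = 8, 5
R = random_orthogonal(d, rng)                              # shared orthogonal transition matrix
R_x = [random_orthogonal(d, rng) for _ in range(vocab)]    # one orthogonal matrix per input symbol

h = np.zeros(d)
h[0] = 1.0                                                 # unit-norm initial state
for x in [2, 0, 4, 1]:                                     # an arbitrary input sequence
    h = R_x[x] @ R @ h                                     # h_{t+1} = R_x R h_t
    assert np.isclose(np.linalg.norm(h), 1.0)              # norm never changes: nothing is forgotten
```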
The paper has an interesting idea, but it isn't quite justified, as pointed out by R2. Very minimal experiments are presented in the paper. Pros: interesting idea. Cons: insufficient experiments with no real-world problems; no rebuttal either :(.
In the era of deep learning, pre-trained models have been regarded as intellectual properties of AI companies. Thus, protecting these models has become more and more important. To achieve this aim, this paper proposes a non-transferable learning (NTL) method to capture the exclusive data representation in the learned model and restrict the model generalization ability to certain domains. This approach provides effective solutions to both model verification and authorization. Specifically: 1) For ownership verification, watermarking techniques are commonly used but are often vulnerable to sophisticated watermark removal methods. By comparison, the NTL-based ownership verification provides robust resistance to state-of-the-art watermark removal methods, as shown in extensive experiments with 6 removal approaches over the digits, CIFAR10 & STL10, and VisDA datasets. 2) For usage authorization, prior solutions focus on authorizing specific users to access the model, but authorized users can still apply the model to any data without restriction. The NTL-based authorization approach instead provides a data-centric protection, which is called applicability authorization, by significantly degrading the performance of the model on unauthorized data. In general, this paper contributes a novel method to the field, and experiments verify the success of the proposed method. Pros: + The research direction is promising and important in the real world. Nowadays, AI companies train their own deep models with abundant labelled data that costs a lot of resources. Thus, it is good timing to research how to protect these models, which have become very important and practical. + This paper proposes a method that can provide effective solutions to both model verification and authorization, which is general and promising to be applied in other applications. + This paper is easy to follow. Experiments are enough to support the claims made in this paper. A plus should be that experiments are conducted with 6 removal approaches over the digits, CIFAR10 & STL10, and VisDA datasets. Cons: - The presentation should be improved. The first paragraph in the intro is too long. It is better to divide it into several paragraphs to better demonstrate the key points of this paper. - I am not sure if it is necessary to list the contributions in the introduction. Such contributions have been described clearly in the intro and abstract. It seems that you do not need to restate them. - Key related works are missing. For an AI company, they need to be aware of many adversarial attacks, such as reprogramming attacks and model-inversion attacks. These works are also related to IP protection of deep learning. It would be better to include these attacks as related work as well. Some discussion should also be added for general readers of ICLR. - Some notations should be changed. For example, we do not use X or Y to represent distributions; instead, we use them to represent random variables. It is better to use \\sP_X to represent the distribution corresponding to a random variable X. It is unnecessary to use GMMD; you can use MMD(P,Q; k), where k is a Gaussian kernel (you can follow the notations from recent deep kernel MMD papers; the standard definition is sketched after this list). - How many times do you repeat your experiments? I did not see error bar/STD values of your methods. This should be provided to verify that the experimental results are stable. - If we consider adding a bandwidth to your kernel function, how does the kernel bandwidth affect your results?
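For reference, the MMD notation suggested in the list above corresponds to the standard squared MMD with a Gaussian kernel. A sketch of the definition, with \\sigma denoting the bandwidth raised in the last question:

$$\\mathrm{MMD}^2(P, Q; k) = \\mathbb{E}_{x,x' \\sim P}[k(x,x')] + \\mathbb{E}_{y,y' \\sim Q}[k(y,y')] - 2\\,\\mathbb{E}_{x \\sim P,\\ y \\sim Q}[k(x,y)], \\qquad k(x,y) = \\exp\\left(-\\frac{\\|x-y\\|^2}{2\\sigma^2}\\right).$$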
In general, considering the significance of the researched problem, this paper can be accepted by ICLR 2022. However, some points should be clarified and strengthened in the revision. I would like to strongly support this paper if my concerns can be fully addressed. <doc-sep>This paper introduces the idea of "non-transferable learning", which is roughly what the name indicates. The authors explain the value of this as a security/IP protection tool to protect the model from being used on unauthorized data. In addition, this presents a kind of attack against domain adaptation works that try to improve generalization bounds without access to source data. Basically, the authors design a clever technique for learning nuisance-dependent representations. Such a representation can be made to perform accurately for a particular source domain, but poorly for another target domain. Furthermore, the authors design a GAN-type technique for generating samples outside the source domain to serve as a kind of generic target domain. This is obviously important, as one cannot know which target domain the model would later be adapted to. This is a very interesting paper, although I have to say I'm not an expert in this topic at all. Most of the paper is really nicely written and is pretty easy to follow. The experimental verification is clear and detailed, but mostly limited to small images, so it's hard to say how it actually performs in some real-life scenarios. A couple of questions come to mind: - Can you imagine uses of this for other kinds of models, e.g., language models, or is this mainly meaningful for image data? - It sounds like an NTL representation by nature is highly vulnerable to training data privacy attacks, like membership inference. Have you considered if one could use the NTL representation to particularly efficiently generate samples from (something close to) the training data distribution? Non-transferable learning is an interesting idea to explore, and this is the first step in that direction. I can imagine that there will be a lot of follow-up ideas both for attacking this, as well as improving upon it. I would definitely recommend accepting this for ICLR. <doc-sep>Protecting the intellectual property of trained models has received increasing attention. Existing research efforts to protect intellectual property fall into two major categories: ownership verification and usage authorization. To this end, the authors propose to utilize non-transferable learning to achieve both the goals of ownership verification and usage authorization. Extensive experiments on several representative datasets validate the effectiveness of the proposed method in terms of ownership verification. Generally, this paper proposes a novel idea to address a practical problem in real-world applications, which could inspire many readers to follow it and have an important influence on the community of computer vision. I support the acceptance of this paper for a better ICLR conference. This paper could be significantly improved via addressing the following issues: 1. In Table 1, what is the number of training epochs when transferring MT to MM? Did you try to increase the epochs of fine-tuning? If you train for enough epochs, the model would eventually reach the original accuracy. A sensitivity analysis regarding the epochs of your fine-tuning is necessary when compared to training from scratch and transfer learning from the original model to the target task. 2.
The training complexity of your NTL approach and the GAN training should be discussed in this paper. Is the computing time of the MMDs during each time step at least twice your training time? 3. The proposed methodology is well presented. However, the differences between the proposed model and related SOTA works should be presented clearly. 4. Comparing Table 2 and Table 3, it can be seen that sometimes the source-only method shows greater performance compared to the target-specific method. The reasons why this would happen are interesting, since providing the target-domain target should be more accurate when removing some part of the generalization space. However, the experiments do not seem to agree with this. 5. A future research section should be added in the revision. This paper proposes an interesting question and gives a corresponding solution. I recommend the acceptance of this paper.
The paper addresses two important aspects of deep learning: model transferability and authorization for use. It presents original solutions for both of these problems. All of the reviewers agree that the paper is a valuable contribution. Minor concerns and critical remarks have been addressed by the authors during the discussion.
After looking over the authors' responses, I've decided to increase my rating of the paper. The main concern I originally had was sufficiently motivating the need for this specific dataset (compared to existing alternatives like ACE). The authors (in the comments below) have articulated qualitatively how ACE is insufficient, and demonstrated with experiments that generalization from ACE pretraining to this new dataset is poor. ==== EDIT ==== The authors present a corpus of news articles about protest events. 10K articles are annotated with document-level labels, sentence-level labels, and token-level labels. Coarse-grained labels are Protest/Not, and fine-grained labels are things such as triggers/places/times/people/etc. 800 articles are Protest articles. This is very detailed work & I think the resource will be useful. The biggest question here is: If my focus is to work on protest event extraction, what am I gaining by using this corpus vs existing event-annotated corpora (e.g. ACE) that aren't necessarily specific to protest events? I'd like to see experiments of models run on ACE evaluated against this corpus & an analysis to see where the mistakes are coming from, and whether these mistakes are made by those models when trained on this new corpus. --- Below this are specific questions/concerns --- Annotation: Just a minor clarification question. For the token-level annotations, how did you represent multi-token spans annotated with the same label? For example, in "stone-pelting", did you indicate "stone", "-", and "pelting" tokens with their own labels or did you somehow additionally indicate that "stone-pelting" is one cohesive unit? Section 4: Mild nitpick: can you split the 3 annotation instruction sections into subsections w/ headings for easier navigation? Section 6: It says your classifier restricts to the first 256 tokens in the document. But your classifier is modified to a maximum of 128 tokens. Can you explain this? Why is the token extraction evaluation only for the trigger? Regarding the statement around "These numbers illustrate that the assumption of a news article contain a single event is mistaken": it was mentioned earlier that this assumption is being made. Can you be more clear about which datasets make this assumption? Can you also explain how your limit to 128 (or 256?) tokens does/doesn't make sense given multiple events occur per article? <doc-sep>This paper provides a detailed guideline for annotating a socio-political corpus. The detailed annotation of documents can be time consuming and expensive. The authors propose a pipelined framework to start annotations from higher levels and get to more detailed annotation if it exists. Along with their framework, they have provided the dataset of annotated documents, sentences and tokens showing whether protest-related language exists or not. The authors also outline baseline results of a transformer architecture for the document-level and sentence-level classifications. The paper describes the details very clearly. The language is easy to follow. The pros are as follows: introduction of a new framework for annotating political documents; annotation of a large-scale corpus; the baseline results. Although they have provided baseline results on the document- and sentence-level classifications, they have not provided results for the token-level task. It would have been interesting to see if those results are also promising.
The authors have mentioned that they have three levels of annotations (document, sentence, and token) to save time and not spend time on detailed annotations of negative labels. Can they examine how many samples are labeled negative and how much time (in percent) and money this saved in annotation? Some minor comments: -On page 2: I think "result" should change to "resulted" in the sentence below: Moreover, the assumptions made in delivering a result dataset are not examined in diverse settings. -On page 3: who want to use this resources. —> who want to use these resources. -On page 4: We design our data collection and annotation and tool development —> We design our data collection, annotation, and tool development -Page 6: As it was mentioned above —> As it is mentioned above -You are 1 page over the limit, but there is some repetition in the annotation manual; especially when talking about arguments of an event, you can just say "as mentioned above".<doc-sep>The paper describes a corpus of news articles annotated for protest events. Overall, this is an interesting corpus with a lot of potential for re-use; however, the paper needs some clarifications. A key contribution of the paper is that the initial candidate document retrieval is not based purely on keyword matching, but rather uses a random sampling and active learning based approach to find relevant documents. This is motivated by the incompleteness of dictionaries for protest events. While this might be true, it would have been good to see an evaluation of this assumption with the current data. It is a bit unclear in the paper, but were the K and AL methods run over the same dataset? What are the datasets for which the document relevance precision & recall are reported on page 8? I would also like to see a more detailed comparison with more general-purpose event extraction methods. Is there a reason why methodologies such as [1] and [2] cannot be re-applied for protest event extraction? A small formatting issue: the sub-sections on page 8 need newline breaks in between. [1] Pustejovsky, James, et al. "Temporal and event information in natural language text." Language Resources and Evaluation 39.2-3 (2005): 123-164. [2] Inel, Oana, and Lora Aroyo. "Validation methodology for expert-annotated datasets: Event annotation case study." 2nd Conference on Language, Data and Knowledge (LDK 2019). Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik, 2019. EDIT: Thank you for addressing the issues I raised. I have changed the review to "Accept".
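For reference on the multi-token span question raised in the first review above (the "stone-pelting" example), one common convention is BIO tagging; a minimal illustration with hypothetical labels, not necessarily the scheme the authors used:

```python
# B- marks the first token of a span, I- marks continuation tokens, O marks tokens outside any span
tokens = ["Protesters", "resorted", "to", "stone", "-", "pelting", "near", "the", "station"]
tags   = ["O", "O", "O", "B-trigger", "I-trigger", "I-trigger", "O", "B-place", "I-place"]
```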
The paper presents a corpus of 10K news articles about protest events, with document level labels, sentence-level labels, and token-level labels. Coarse-grained labels are Protest/Not, and fine-grained labels are things such as triggers/places/times/people/etc. All reviewers agree that this paper is interesting and the contributed resource will be useful for the community, hence we propose acceptance. There were some concerns that the authors fully addressed in their response, updating their paper. We recommend authors to take the remaining suggestions into account when preparing the final version.
The paper provides a benchmark dataset that can be used for training & evaluation of Machine Learning models. Their contribution is a large collection of 12,868 sets of natural images across 410 classes obtained from Visual Genome data and its metadata for annotations. This helps in accounting for large natural shifts in the data. They also provide a way to measure distance between two subsets to quantify the distribution shift between any two of its data sets. The paper provides a good justification for the need of such a dataset for computer vision tasks and motivates the idea well. It also talks in detail about the steps taken to generate MetaShift from Visual Genome and provides a generalization of their 4-step process of dataset creation to any dataset with multi-label annotations, with results presented for the COCO dataset. The paper further discusses the use of this dataset for two major cases: evaluating distribution shifts & assessing training conflicts. They show the impact of shift distance on domain generalization by keeping the test set the same and varying the training subsets randomly. Further, it talks about subpopulation shifts, where the train and test distributions come from the same domain with different mixture weights. They show that no algorithm consistently performs better than other algorithms for larger shifts. It provides a detailed understanding of training conflict by analyzing the contribution of each training subset to the change of the validation loss of each validation set during the training process. Overall, it's a well written paper about the motivation, use cases, applicability, and generalizability of their proposed data set. Strengths: The paper has a strong motivation for building the dataset. The authors also present a detailed understanding of major applications of their dataset across ML models. It provides a good quantification of the population shift and shows with experiments how it impacts domain generalizability. The generalizability of the dataset creation to any dataset with multi-label annotations is another strong point of the paper. Weakness: The paper only talks about the advantages of MetaShift as derived from Visual Genome data but does not have any similar comparison or quality analysis for other datasets generated using their 4-step dataset creation; for example, an analysis could have been provided on the COCO dataset too. The paper also doesn't address the dependency of dataset performance on metadata: what if there are inconsistencies in the metadata of a dataset but the images are perfect, and how will the dataset creation and performance be impacted for a dataset with such metadata? It would also have been interesting to see an analysis of the performance of a given model on underrepresented vs. overrepresented subsets, the sets/characteristics of data leading to training conflict in general, and whether there is any pattern in the images. The obvious underlying data bias issue has already been acknowledged in the paper too and I hope the authors will research more into solving it. I feel the authors have done a good job highlighting the motivation of such a dataset, the steps of creation of the dataset from Visual Genome, paying attention to the generalizability of the approach, and discussing the major applications of the dataset in detail. The GitHub link also provides a holistic understanding of the work. This dataset will help research in the field of CV. Once they overcome and address the current weaknesses of the dataset, it will become an even better dataset asset.
<doc-sep>This work proposes a collection called MetaShift to study the impact of dataset distribution. The major advantage of MetaShift is that it provides annotation/information to measure the amount of distribution shift between any two of its data sets. In the experiment, this work constructs two applications: 1) evaluating distribution shifts, and 2) assessing training conflicts. **[Strengths]** Good idea to study the impact of distribution shift. + This paper is well written and easy to understand. Section 3 gives a good introduction to the step-by-step construction of MetaShift. + The main advantage of MetaShift is that it contains systematic annotation about the differences between different shifts. This further helps to study the effects of distribution shifts (e.g., subpopulation shifts). + The idea of generating a large number of real-world distribution shifts that are well-annotated and controlled is attractive. The proposed MetaShift is well illustrated and the figures are helpful for understanding MetaShift. + Section 4.3 is interesting and the results give some insights. **[Weaknesses]** The major concern is about the experimental evaluation, where the constructed tasks are only binary classification. - In Section 4.1, this work constructs four binary classification tasks to study the impact of the shift under the generalization setting. One question is, how about constructing more challenging tasks which involve more classes? - When evaluating subpopulation shifts, the tasks are also binary and contain spurious correlation. The same question applies: multi-class classification tasks might be needed. - MetaShift is a collection of 12,868 sets of natural images from 410 classes. Why do the experiments only focus on binary classification (e.g., cat vs. dog, bus vs. truck)? In other words, it seems the same settings in Sec. 4 can be constructed based on other classes. It would be helpful if this work could discuss this. Again, a more challenging multi-class classification setting would be very useful. This work introduces a good way to study the effects of distribution shifts. Specifically, this work proposes a framework called MetaShift, which contains systematic annotation about the differences between different shifts. However, the major concern is about the constructed tasks in the experiment. More explanation/discussion can be included to eliminate the question. <doc-sep>In this paper, the authors introduce a new dataset (actually, a collection of datasets) called MetaShift. MetaShift is built on top of Visual Genome and leverages its metadata to cluster images, thus providing a context for each image (labels are of the form class+context, e.g., 'cat in grass', 'dog in bathroom'). This context is then used to generate dataset shifts. Besides being much larger than similar (openly available) datasets, MetaShift explicitly provides the context, which can be used to compute a "distance score" of distribution shift between any two datasets. ### Pros + The paper is well written and easy to follow. + The proposed approach to leverage metadata (from a previously published large-scale dataset) to create datasets of domain shifts is simple and well motivated. Splitting a large dataset (with multiple labels) in a meaningful way to study dataset shift is not trivial. However the authors came up with an intuitive (and relatively simple) approach. + The problem of studying shifts in dataset distribution is very relevant and important to machine learning.
This dataset can benefit the community by allowing a more systematic evaluation of dataset shifts. ### Cons - It would be nice to have more descriptions of the methods used to benchmark the dataset (ERM, IRM, DRO, CORAL and CDANN), the architectures used (which model? pretrained or from scratch? what is the model capacity?) and training details (which loss, which optimizer, learning rate, batch size, etc). If there is not enough space in the main text, this information could be added in the appendix. - It would be nice if the authors would give more detail on how the embeddings of meta-graphs are computed. For example, what is the matrix A and how are the embeddings computed? (A generic construction is sketched below.) Why use spectral embeddings specifically rather than other approaches (e.g., using the word embedding (pretrained on a large language corpus) of each context)? - It could be nice to show some more quantitative or qualitative results (e.g., t-SNE) for the meta-graph embeddings (used to compute the shift between datasets). - The paper states it generates >12K datasets (across 410 classes); however, experiments are done only on a very tiny number of datasets (cat/dog, bus/truck, elephant/horse and bowl/cup). The proposed dataset would be much more useful to the community if the authors would provide a much larger subset of "pre-made" datasets for easy experimentation. I am inclined to accept this paper because of (i) the simplicity of the approach to generate the datasets and (ii) the usefulness to the community. However, I would be more confident with acceptance if the authors would address the weaknesses of the paper (see above).
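Regarding the meta-graph embedding question flagged in the list above, a generic sketch of how a spectral embedding is typically computed from an adjacency matrix A is given below. This is an illustrative assumption about the construction, not necessarily what the paper does; the toy adjacency values are made up.

```python
import numpy as np

def spectral_embedding(A, dim):
    """Embed graph nodes via eigenvectors of the symmetrically normalized Laplacian."""
    deg = A.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    L = np.eye(len(A)) - d_inv_sqrt @ A @ d_inv_sqrt   # normalized graph Laplacian
    _, eigvecs = np.linalg.eigh(L)                     # eigenvalues in ascending order
    return eigvecs[:, 1:dim + 1]                       # skip the trivial first eigenvector

# toy meta-graph over 4 contexts, with edges weighted by how often contexts co-occur
A = np.array([[0, 3, 1, 0],
              [3, 0, 2, 1],
              [1, 2, 0, 4],
              [0, 1, 4, 0]], dtype=float)
emb = spectral_embedding(A, dim=2)   # one 2-d vector per context; distances between them quantify shift
```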
This work studies the impact of distribution shift via a collection of datasets, MetaShift. Reviewers all agreed that this work is simple, effective, and well-motivated, has key implications, and will be quite useful to the community. There were some concerns about the lack of analysis of MetaShift and the binary classification setting, which were addressed by the authors' responses. Thus, I recommend acceptance.
Summary: the paper presents a learning algorithm for the Credal Sum Product Network (CSPN), a type of graphical model that is tractable (the partition function is easy to compute) and can encode uncertainty in the network parameters (instead of fixed weights, the network parameters have a range of values, or more generally are defined using a set of convex constraints between them). Prior work [Maua et al., 2017] introduced CSPNs and provided an inference algorithm, and this paper is the first to propose a learning algorithm for CSPNs. Pros: first paper to introduce a weight learning algorithm for CSPNs. Evaluation shows better results than Sum Product Networks (SPNs). Cons: - evaluation is limited in two aspects, baselines and tasks. 1) baselines: the only baseline considered is SPNs, which is a reasonable but old baseline. It would be good to see how well CSPN learning works compared to more recent models, especially since even CSPN's inference evaluation [Maua et al., 2017] was similarly limited. 2) tasks: evaluation avoided large datasets. It excluded the largest of the subtasks (footnote page 21), and evaluating on large-scale textual data is left for future work. Even though the motivation for SPNs was that their inference is tractable and fast, the proposed learning algorithm for CSPNs seems to be 10x slower than that of SPNs and didn't scale to large datasets. Notes: - The paper mentioned that CSPN avoids the closed-world assumption, and can work with incomplete examples. I agree with the second but not the first. The proposed learning algorithm takes into account that some instances have unknown values, but it is still assuming that the world only contains the provided list of instances (closed-world assumption). - The paper's use of the term "lifting" seems different from how it is used in Broeck et al., 2011 (doing inference at the first-order level without grounding to predicate logic). This needs to be clarified. <doc-sep>In this paper the authors investigate probabilistic representations for learning from incomplete data and specifically investigate credal sum product networks. CSPNs are better able to handle data incompleteness, which is an important aspect of knowledge bases. The authors perform experiments on a large number of datasets with varying amounts of artificially missing data, observing that the optimized log likelihood computed on a learned CSPN generally performed the best. The paper is generally well written and does a good job of explaining the underlying models and algorithms. The paper is not particularly novel but contains a large number of experiments that could be useful to those interested in probabilistic models in regimes with missing data. Other comments: - table 4 is a bit busy, there could be a clearer way of presenting and highlighting the relevant results. - section 4.2 has an occurrence of a "CPSN" typo<doc-sep>The paper revisits Credal SPNs and proposes a learning approach for Credal SPNs in the presence of missing information. That is, now the weights on sum nodes vary in a closed and convex set and, in turn, one gets an imprecise probability model. Overall, the paper is well written and structured. The main technical contributions are (1) a group-wise independence test and (2) a clustering method, both for the credal setting assuming missing data. Specifically, the independence test is a direct application of complete-case analysis plus interpreting missing values as contributions to the base population.
For the clustering, the authors should argue why existing methods for clustering with incomplete data could not be used. In any case, the likelihood approach presented also follows the same logic as the independence test. In both cases, the arguments are a little bit hand-waving and fluffy. For instance, it is not clear to me what "is that value that is poorest fit" (page 6) means. Still, the clustering is interesting, although as said, a discussion of related work on clustering incomplete data is missing. The empirical evaluation is interesting and follows the standard protocol for SPNs. What I am missing is a repeated argument of why CSPNs are important. Furthermore, the running time should be reported. Also, the authors should provide some insights into the structures learned, also in comparison to the complete data case and even to the standard SPN setting. Furthermore, it might be interesting to use Random Credal SPNs based on Random SPNs (Peharz et al. UAI 2019) as a baseline to illustrate the benefit of structure learning. Currently the results just show likelihood. But shouldn't we also consider here the number of parameters? At least getting some numbers here would be appreciated. Also, since you consider the CLL, one should also show a discriminatively learned SPN. In general, the experimental protocol should be described in sufficient detail. What were the hyperparameters? Was this cross-validated? To summarize, this is a nice direction that follows the standard approach for learning SPNs to learn CSPNs, using ideas from data imputation. The empirical evaluation is not well presented in the main part. Some related work on clustering with incomplete data is missing.
This paper develops the first structure learning algorithm for Credal SPNs. The paper is somewhat difficult to evaluate, since the credal paradigm is so different from the usual maximum likelihood paradigm, which makes a direct empirical comparison challenging. By providing more detailed information about the uncertainty, the credal approach certainly has some merit, and while upgrading some SPN structure learning heuristics to the credal setting may not be technically challenging, this is done for the first time in this paper. On the other hand, the reviewers did find many ways in which the paper can be improved. Overall, we recommend acceptance. The authors are encouraged to improve the paper as suggested by the reviewers.
The topic addressed is interesting, and the proposed approach is well described and extensively compared against several baselines. Also, an ablation study is performed. The approach itself integrates well-known techniques. The differences from the baseline approaches are not discussed to highlight the novelty of the approach. The approach combines a number of well-known techniques to come up with an approach for fine-grained, cross-modal retrieval. In spite of addressing a relevant problem, the novelty of the approach is not sufficiently highlighted in the paper. Experimentation was done with two standard datasets and the performance of the approach was compared with several baselines. The approach outperformed the baselines in most cases; however, a detailed analysis of those cases in which it is not the best performer would have enriched the discussion. Also, the differences between the baselines and the proposed approach need to be explained. <doc-sep>The most prominent advantage lies in the consideration of both similarity distance and direction for similarity representation and learning. The proposed knowledge graph iterative propagation algorithm is used to explore fine-grained modal representations, and the similarity is improved by constructing and adaptively reconstructing the similarity graph. The ablation studies and analysis are clear and thorough. Compared with other methods, the advantages of the experimental results are reflected in the multi-perspective strategy, while the proposed KGID and RGR do not show advantages. Although the experiments in this paper contain many comparison methods, the two datasets are still a little insufficient. Formula 18 has a typographical error. <doc-sep>This paper contributes new ideas, and the idea seems to work according to the authors' experiments. I think the proposed fine-grained matching will benefit cross-modal retrieval. I am not an expert on this topic and can only make an educated guess that the technique in this paper is OK. The experiments in this paper are sufficient. Quantitative results on Flickr30K and MSCOCO are reported. Ablation studies disclose the effect of the two modules. This paper is well written and easy to follow. Section 3 consists of 5 subsections and each subsection is comprised of several points. It is hard to understand so many technical points, and the titles of the subsections do not help readers understand the relationship between the points. I think that section 3 should be reorganized.
Meta Review: This paper develops an approach for fine-grained matching with multi-perspective similarity modeling for cross-modal retrieval. It contains two main novel modules. One is the knowledge graph iterative dissemination (KGID) module for iteratively broadcasting global semantic knowledge and learning fine-grained modality representations. The relation graph reconstruction (RGR) module is developed to enhance cross-modal correspondence by adaptively reconstructing similarity relation graphs. The proposed model is well motivated and novel. Results also show that the model outperforms state-of-the-art models. Overall this is a nice paper that the UAI audience will be interested to hear about.
This paper presents a method for enforcing strict orthogonality of convolutional layers, by means of a factorization in the spectral domain. It shows that the technique can be extended to dealing with practical convolutions with strides, dilations and groups. The experiments demonstrate superior performance in terms of adversarial robustness. Orthogonality is an important problem in the design of neural network architecture that relates to many fundamental properties of the network such as trainability, generalizability and robustness. While the study of orthogonality in fully connected layers (or convolution layers with 4D kernels treated as 2D matrices) has a long history, it is only very recently (in the past 2-3 years) that work on the orthogonality of *convolution* layers has emerged. This paper provides a solid study in this area by providing a method of enforcing orthogonality in convolutions, revealing its technical connections with previous methods, designing deep residual Lipschitz network architectures and conducting solid experiments. I find the presentation to be mostly clear and easy to follow, though I feel that there is a tendency of overclaiming the contribution in the abstract & intro, see below. - Complete parameterization of orthogonal convolution. The paper claims that it offers a complete parameterization of orthogonal convolution, but this is not really the case. As stated in Sec. 2, it only offers a complete design for *separable* orthogonal 2D convolutions. This puts the technique in an unfavorable position compared to previous methods that do not require separability (e.g. Trockman & Kolter). - "Orthogonal Networks". The paper frequently uses the phrase "orthogonal networks", but it is not clear what that term entails. For example, it is claimed that "Our versatile framework, for the first time, enables the study of architecture designs for deep orthogonal networks", which seems an overclaim since orthogonality in neural networks has already been extensively studied before. In addition, if "orthogonal network" means that the entire network is an orthogonal transformation, then this is kind of a useless network since orthogonality implies linearity (as long as it is surjective). If it means approximately orthogonal then it should consider, in addition to the convolutional layers, the effect of the nonlinear layers - right now there is no discussion of whether the GroupSort that is used as the nonlinear layer is approximately orthogonal or not. Solid work on orthogonality of convolution, though there seem to be some overclaiming / imprecise statements in the intro/abstract that may be misleading. <doc-sep>This paper proposes a theoretical framework for orthogonal convolutional layers based on the equivalence of orthogonal convolutions in the spatial domain and paraunitary systems in the spectral domain. The proposed method parametrizes the orthogonal convolution layer as compositions of multiple convolutions in the spatial domain, resulting in exact orthogonality. The layers are also more memory- and computation-efficient than most previous methods. **Strengths** - The proposed method is theoretically grounded and relatively efficient in computations. - The analysis of strided, dilated convolution layers is inspiring. - Numerical evidence on orthogonality evaluation of different designs for standard convolution shows that exact orthogonality is achieved.
**Weaknesses** - It is nice to see that exact orthogonality is achieved; however, it remains unclear whether exact orthogonality is actually helpful or needed. For example, from Table 2, the proposed SC-Fac achieves the most "accurate" orthogonality but results in worse performance in both certified and practical robustness. Even though the authors claim that the method achieves "comparable" results with other baseline methods, the results are consistently worse than the baselines. - The authors can compare their core idea with related work that is more heuristic, such as [1] which also considers achieving orthogonality in the spectral domain, as well as [2],[3]. [1] Liu et al. Convolutional Normalization: Improving Deep Convolutional Network Robustness and Training [2] Wang et al. Orthogonal Convolutional Neural Networks [3] Bansal et al. Can We Gain More from Orthogonality Regularizations in Training Deep CNNs? - Even though the method is more computationally efficient, it is only compared with methods such as Cayley which are known to be computationally heavy. The method is still much more computationally heavy than the original networks. It would be nice to have an extra line in Figure 4 showing the training time of the ordinary network. The proposed method achieves exact orthogonal convolutional layers through re-parametrization. The method is theoretically grounded and easy to understand. Numerical evidence is provided to show that exact orthogonality is achieved by composing a sequence of learnable convolutions. <doc-sep>This paper suggests a framework for designing orthogonal convolutions using paraunitary systems. The proposed framework studies orthogonalization of different convolution operations. The proposed methods were examined on several datasets in comparison with state-of-the-art orthogonalization methods. The paper proposes a nice framework which covers different orthonormalization methods for convolutional kernels. In the analyses, the proposed framework performs on par with the state-of-the-art. However, there are several issues with the paper. In general, first, some of the claims should be revised since they are not verified in the analyses. Second, the experimental analyses should be improved with additional analyses on additional datasets and tasks in comparison with the other state-of-the-art methods. More detailed comments are given as follows: 1. The paper states that "However, many previous approaches are heuristic, and the orthogonality of convolutional layers is not systematically studied." However, there is a nice literature on orthogonal CNNs, which explores these CNNs from various aspects including generalization and convergence properties of models, in various tasks including image recognition, speech processing and NLP, including the adversarial robustness studied in this paper. Then, could you please describe the systematic study referred to and proposed in this paper, which is not covered in the literature? 2. Please explain the statement "There are mature methods that represent orthogonal matrices via unconstrained parameters." How does this enable optimizing the parameters of orthogonal convolutions using standard optimizers instead of optimizers designed for optimization on orthogonal convolutions? 3. There are some issues with the definition and interpretation of orthogonal layers. First, an orthogonal layer is defined according to preservation of the norm of input and output. This can be achieved using different types of parameters of convolutions, even with scaled Gaussian parameters.
In addition, orthogonal convolutions proposed in the literature satisfy some particular orthogonality properties of the matrices of parameters. Second, the orthogonal convolution proposed in this paper is associated with the paraunitary property of the transfer matrix. 4. In the experimental analyses, in most of the results, the state-of-the-art outperforms the proposed method, while in some results, the proposed method outperforms them. To show a clearer benefit of the proposed method over the state-of-the-art, could you please perform analyses on additional larger-scale datasets such as ImageNet? Could you please also compare the proposed method with the methods which employ orthogonal matrices? 5. It is proposed that "related works proposing orthogonalization methods do not guarantee Lipschitzness after training". However, the proposed orth. conv. employs deep Lipschitz networks to guarantee Lipschitzness. If the proposed orth. conv. does not employ deep Lipschitz networks, then does it guarantee Lipschitzness? 6. How do you optimize "learnable (column)-orthogonal matrices"? 7. While training models, how do you estimate and optimize h[z], H[z], ortho. factors, and model params? In the code, Adam is used to optimize parameters. However, it is not clear how orth. factors are also optimized. Do you also optimize them using Adam? 8. How do you apply the z-transform to the input, kernel and output? For instance, if an input NxN image x is convolved with a 7x7 kernel h, then how do you apply the z-transform to x and h? That is, do you apply it patch-wise or holistically? Also, if x' is a feature map of size CxWxH, where C is the number of channels, W and H are the width and height of the map, then how do you apply the z-transform on the map? 9. How do you compute ortho. factors efficiently? 10. How do you calculate model parameters A? 11. In the experiments, the proposed methods perform similarly to the state-of-the-art. To show superiority of the methods in comparison with the state-of-the-art, additional analyses on larger-scale datasets and models should be provided. The proposed framework is nice, and the initial results are promising. However, there are various unclear parts in the paper. In addition, some of the claims are not verified and the experimental analyses are limited. Therefore, the paper should be improved with additional analyses and an in-detail revision for clear acceptance. <doc-sep>This work proposes a new method for orthogonalizing the convolutional layers by exploring the equivalence between spatial orthogonality and spectral paraunitarity. The work then empirically demonstrates the effectiveness of the proposed methods by comparing (1) the Lipschitzness, (2) the results of adversarial robustness and (3) the time and memory cost among different methods. The experiments are conducted on various networks including the shallow KW-large networks and the slightly deeper WideResNet22. Although the reviewer did not check the submitted code in detail, the code is well-written and clearly commented. The major concerns are (1) the work seems to have made some overstatement of the contributions, claiming that all the previous works are heuristic, and the proposed approach is systematic with theoretical justification.
The reviewer does not quite buy this point, and a better explanation of this is needed; (2) the experimental results do not consistently show the advantages of the proposed method; also, the improvement in terms of computational efficiency seems to be marginal. Below are some more detailed comments. 1. The reviewer found the implementation of the proposed method somewhat hard to follow; it could be better to incorporate an algorithmic view of the method to clearly present it. 2. In the paper, the Q matrix is defined as an orthogonal matrix that is randomly initialized and fixed during training. But the reviewer didn't find the associated implementation in the code (correct me if I missed anything), so the reviewer is wondering how the Q matrix is constructed in the experiments. 3. When demonstrating the results of adversarial robustness, the paper focuses on $\\ell_2$ norm based attacks. The reviewer is curious about the results of $\\ell_\\infty$ based attacks. 4. The reviewer notices that in the code, when considering striding, the authors include 2 use cases [stride_wide, stride_slim]; the reviewer is curious about the actual definition of the different use cases. Besides, the code of the proposed method mentions that the kernel size should be a multiple of the stride (in the stride_wide case, this constraint is bypassed by letting kernel_size = kernel_size * stride); the reviewer would appreciate it if this part were presented in more detail. (An algorithm for handling the different cases would be nice). 5. This paper is missing quite a few citations on related work: https://arxiv.org/abs/1810.09102 https://arxiv.org/abs/2103.00673 https://arxiv.org/abs/1911.12207 https://arxiv.org/abs/1905.11926 Please refer to these and discuss the relationship. The paper is well-presented overall. However, better positioning of the work, and more convincing experimental results are needed.
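As a small numerical companion to the norm-preservation definition of orthogonal layers discussed in the third review above, the sketch below checks orthogonality of a single-channel circular convolution. The circulant construction and the all-pass criterion (|FFT(h)| = 1 at every frequency, the scalar special case of paraunitarity) are standard facts used only for illustration; this is not the paper's parameterization.

```python
import numpy as np

def circulant_from_kernel(h, n):
    """Matrix of circular convolution with kernel h acting on length-n signals."""
    col = np.zeros(n)
    col[:len(h)] = h
    return np.stack([np.roll(col, j) for j in range(n)], axis=1)

n = 16
h = np.array([0.0, 0.0, 0.0, 1.0])      # a pure shift: the simplest exactly orthogonal convolution
C = circulant_from_kernel(h, n)

print(np.allclose(C.T @ C, np.eye(n)))  # exact orthogonality of the convolution matrix: True

x = np.random.default_rng(0).standard_normal(n)
print(np.isclose(np.linalg.norm(C @ x), np.linalg.norm(x)))  # input norm equals output norm: True

# spectral view: a single-channel circular convolution is orthogonal iff its frequency response is all-pass
H = np.fft.fft(h, n)                    # zero-padded FFT of the kernel
print(np.allclose(np.abs(H), 1.0))      # True
```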
This paper proposes a method for parameterizing orthogonal convolutional layers that derives from paraunitary systems in the spectral domain and performs a comparison with other state-of-the-art orthogonalization methods. The paper argues that the approach is more computationally efficient than most previous methods and that the exact orthogonality is important to ensure robustness in some applications. The reviewers had diverging opinions about the paper, with most reviewers appreciating the theoretical grounding and empirical analysis, but with some reviewers finding weakness in the clarity, reproducibility, and discussion of prior work. The revisions addressed many, but not all, of the reviewers' criticisms. One point that was highlighted in the discussion is that the method is restricted to separable convolutions. The authors acknowledged this limitation, justifying the expressivity of the method with a comparison to CayleyConv (Trockman & Kolter) and a suggestion that more expressive parameterizations are not necessarily available in 2D. I am not sure this is entirely accurate. In the discussion of related work, the paper briefly mentions dynamical isometry and the prior work of Xiao et al. 2018, who develop a method for initializing orthogonal convolutional layers. What the current paper fails to recognize is that Algorithm 1 of Xiao et al. 2018 actually provides a method for parameterizing non-separable 2D convolutions: simply represent every orthogonal matrix in that algorithm in a standard way, e.g. via the exponential map. While I think there is certainly value in the connection to paraunitarity systems, it seems to me that the above approach would yield a simpler and more expressive representation, and is at minimum worth discussing. Overall, between the mixed reviewer opinions and their lingering concerns and the existence of relevant prior art that was not discussed in sufficient depth, I believe this paper is not quite suitable for publication at this time.
The paper presents a benchmark for contemporary symbolic regression, constructing a set of regression problems on which SR methods are tested. The authors use a variety of datasets, some from PMLB, some from other SR databases, and use R2 testing for establishing accuracy. Overall, genetic programming-based models seem to perform best, while AIFeynman performs best for finding solutions for synthetic problems. Overall, the paper is clearly written, with extensive datasets for benchmarking. The authors do a good job at bringing together SR methods and comparing their performance, clearly explaining drawbacks and limitations. The work has good contributions despite the limitations and is a good exposition of the presented methods. The paper has limitations in the use of real-world data from models, as the authors also explain. The presentation of the results in the main text is clear, yet a more in-depth analysis of different datasets and variance across them would be helpful in further understanding the benchmark (although a more extensive analysis is presented in the Appendix). The applications could be strengthened in the exposition as well. <doc-sep>The authors introduce an open-source benchmarking platform for symbolic regression and benchmark an array of different methods on over 200 regression problems. The benchmarking code and data are readily accessible and appear to be straightforward to use (although I have not tried). The benchmarking results reveal that the best performing methods for real-world regression combine genetic algorithms with parameter estimation and/or semantic search drivers. In the presence of noise, the authors find that deep learning and genetic algorithm-based approaches perform similarly. The benchmarking code appears to be easy to apply and the data is made available. The proposed way of benchmarking allows more detailed comparison of symbolic regression methods than other available benchmarking software. The concrete advantage over commercial symbolic regression benchmarking platforms (the paper mentions Eureqa and Wolfram) should be stated more clearly. It is also not exactly clear to me how the algorithms used by these platforms differ from the proposed solution and why the proposed solution is more 'straightforward' as mentioned in the paper. Overall, I think the paper spends too little time describing the proposed benchmarking method and its advantages. The summary of existing algorithms which spans a large part of the paper could be shortened to make room for these explanations. The paper's focus is placed mainly on the benchmarking results and not on the benchmarking methodology. Shifting the focus a little could be beneficial to the wider community given this is a dataset / benchmark track. <doc-sep>The paper is motivated by the need for an accessible, reproducible, and realistic benchmark for symbolic regression (SR) research to help the field evaluate new methods, agree on the quality of existing ones, and reduce reliance on toy problems. To that end, it contributes SRBench, a repository of datasets and 14 SR methods constructed to allow integration of future work. The paper states that this represents "the largest and most comprehensive SR benchmark effort to date." Relative to previous work, the paper also contributes particular attention to the multiple criteria at play in evaluating SR methods.
Because SR involves not just prediction but learning a generative form, these criteria include prediction accuracy, model simplicity, and whether the method learns the true underlying model. For this reason, SRBench augments the existing Penn Machine Learning Benchmark (PMLB) of datasets without specified ground truth with a second set of 130 synthetic datasets from physics that have known ground truth. The paper also uses the benchmark to compare the performance of the 14 SR methods. It describes statistically significant differences among the algorithms in terms of both prediction and ground-truth, though some algorithms do well for one task and not the other. For the black-box prediction tasks, it also compares SR methods to non-SR machine learning methods and finds some SR methods can produce models that are both better and simpler. First, a disclaimer: I am not an expert in symbolic regression or genetic programming, so this review is written from the perspective of a technically literate audience that is interested in but not well-versed in the details of the symbolic regression algorithms presented here. As such, I cannot vouch for the technical accuracy of the descriptions of the algorithms in Section 3. That said, I appreciate the organization of Section 3. The descriptions clearly reference back to Table 1, which includes both sources for each method and links to its code on GitHub. Clicking, for example, on the "Operon" row link yields a well-formatted GitHub page with building and usage instructions and example code. These would be valuable resources to the expert or to a person looking to get into this area. More broadly, the paper and SRBench represent a resource for the SR community. Although the benchmark draws on existing datasets and approaches, the project of gathering those together into an accessible, reproducible, standardized framework; incorporating 14 SR methods; and making the whole thing open to future contributions, represents extensive, valuable work. The experimental results may also be useful, both in guiding practitioners in their algorithm choices and as results that other researchers may seek to replicate or extend. The introduction and conclusion note that the benchmark provides a "large and diverse" set of regression problems and the paper repeatedly mentions the need to assess "real-world performance," but it does not make it clear what kind of variety the datasets cover or what 'real' means here. The datasets derived from physics equations are clearly more 'real' than contrived toy examples, but are they a good representation of the main 'real' context that SR is applied to? The authors note in the conclusion that future improvements might incorporate "more realistic applications settings" but the paper would be improved if it gave a bit more attention to this point earlier on. In particular, the introduction gives little sense of what is at stake in a broader sense in the use of SR methods. I would recommend including some mentions here of the kinds of areas where SR methods have been or could be applied (or perhaps list a few example tasks, more specific than "physics, engineering, statistics…"). In what contexts is it advantageous or necessary to learn an analytical model rather than only predict? Or, do SR algorithms sometimes do better at prediction than ML algorithms and are they being adopted in ML applications? It would also be helpful to have a summary of what the datasets cover.
Clearly, the ground-truth datasets come from physics, but what kinds of application areas are the PMLB datasets drawn from? Do they have any relevant gaps, known limitations, or conscious omissions? This can all be fairly brief, but it would help both motivate the paper and make the limitations and social implications of the benchmark more specific. There is additional detail on the data in the appendix, but it remains unclear to me (A) whether SR is actively being applied, say, in criminal justice applications (the example dataset context mentioned in the appendix) and (B) whether any information warning of potential biases or contextual factors for particular datasets is provided with the datasets.
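To make the evaluation protocol discussed in these reviews concrete, here is a minimal sketch of an R2-based benchmarking loop of the kind the paper appears to use; the estimator, the repetition scheme, and all names are illustrative stand-ins of my own, not SRBench's actual API.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score
from sklearn.linear_model import LinearRegression  # stand-in for an SR method wrapper

def evaluate_method(estimator, X, y, n_repeats=5, seed=0):
    """Median test R^2 over repeated random train/test splits."""
    scores = []
    for r in range(n_repeats):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.25, random_state=seed + r)
        estimator.fit(X_tr, y_tr)
        scores.append(r2_score(y_te, estimator.predict(X_te)))
    return float(np.median(scores))

# Example usage on synthetic data; any SR method exposing a scikit-learn-style
# fit/predict interface would slot in where LinearRegression is used.
X = np.random.randn(200, 3)
y = 2.0 * X[:, 0] - X[:, 1] ** 2 + 0.1 * np.random.randn(200)
print(evaluate_method(LinearRegression(), X, y))
```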
All reviewers support accepting the paper, especially after taking the author feedback into account. One concern was how the proposed algorithms compare to those implemented in commercial software platforms such as Wolfram. The authors correctly point out that the closed source nature of these platforms makes a comprehensive comparison difficult. A limited comparison could still be a valuable addition to the benchmark so that researchers can compare their algorithms to the commercial state-of-the-art. In any case, the paper is of high quality and I recommend accepting it.
This paper proposes a submodel optimization technique for federated learning in the presence of diverse data feature distributions across clients. Heterogeneity in the features of local data leads to distinct update speeds for individual feature-related parameters, degrading convergence of the model. The paper claims that the global objective function of the typical federated learning framework is ill-conditioned when each client updates and communicates only a subset of the full model (a submodel). The paper handles this problem by compensating the amount of each parameter update based on the feature heat dispersion. It also provides a formal analysis of the convergence of the proposed method. It demonstrates the effectiveness of the proposed method over baselines on three benchmarks: rating classification, text sentiment classification, and click-through rate prediction. ### Strengths - The submodel optimization setup, where each client updates a subset of the central server's parameters, is realistic and is a proper extension with clear motivation. In real-world applications, edge devices are likely to have limited memory, computation, and communication bandwidth. - Experimental results are quite compelling. Figure 3 shows that FedSubAvg converges fast and exhibits a similar tendency to centralized training (CentralSGD). ### Weakness - The assumption that the server knows the indexes of the feature-related parameters of all clients is unrealistic. Due to this assumption, the proposed algorithm is not compatible with deep models (e.g. DNN, MLP) with multiple non-linear layers. - In a similar vein, the proposed method needs to calculate the data feature dispersion, which can bring about privacy concerns. - As mentioned above, the proposed algorithm is incompatible with modern deep learning architectures (CNN, MLP); a more compatible approach should be proposed that does not rely on the prior knowledge that the feature dispersion is known and the feature-related parameters are predefined. - The writing is hard to follow, and some details (e.g. implementation details) were not explained well. <doc-sep>The manuscript considers a specific use case of federated learning in a recommender-system or NLP scenario and proposes to scale the model update per coordinate by the ratio between the total number of clients and the number of clients that involve this model parameter coordinate. Some theoretical results are provided to motivate and justify the necessity of the design choices. # Strengths * The considered scenario is interesting and important to the community. The proposed method is simple yet (intuitively) effective in alleviating the identified limitation. * The entire manuscript is generally well-structured, and most claims are well-supported. * Extensive numerical results are provided for some aspects. # Weaknesses 1. The index set $S(i)$ determines the sub-model (the parameters indexed by $S(i)$). However, for the four considered datasets, the exact distribution of features (hot vs. cold features) is still unknown, and some specific treatments (line 211 - line 212, line 226 - line 227) may require some justification, such as "we label the samples with the ratings of 4 and 5 to be positive and label the rest to be negative". Will these treatments magnify the large imbalance between hot and cold features? 2. Given the considered challenges and the proposed coordinate-wise scaling solution, one strong competitor should be considered, i.e., using an adaptive optimizer on the server as in [1]. 3.
The client's index set must be sent to the server, and a clustering-based FL technique can utilize this information naturally. The reviewer is also interested in how cluster-based FL approaches perform, as these algorithms may have already addressed the issue of hot vs. cold features. # Reference 1. Adaptive Federated Optimization --- # Post rebuttal The reviewer acknowledged the authors' feedback and checked other reviews. The response has addressed concerns #2 and (partially) #3. However, the reviewer still believes that the manuscript needs some revisions to polish its text (e.g., make it self-contained, well-structured, precise, etc). Check Weaknesses part. <doc-sep>The authors consider a specific federated learning scenario where different data features ‘involve’ different clients. For some features, a large number of clients can be involved while other features might involve only limited clients. The authors show that in this case, the classical FedAvg can suffer from slow convergence. In the proposed new algorithm, the aggregations of parameter updates are weighted per parameter by the ratios of the local clients involved. The authors prove that by reweighting the parameter updates in this way, the condition numbers of the Hessian of the learning objectives become smaller than the original Hessian. In the experiments with four real-world datasets, the authors demonstrated that the proposed algorithm (FedSubAvg) offers faster convergence than existing alternatives. Strengths: Improving the convergence of federated learning is an important and active area of research. The authors contribute a new algorithm that can potentially improve the convergence of the federated averaging algorithm. The experiments were conducted on large-scale real-world datasets. Limitations: - The paper would need a thorough rewriting. For example, the example illustration of feature heat dispersion in recommender systems at L45-50 is difficult to comprehend. What does ‘less than 1% of the average’ mean? Please formally define ‘involvement‘. L122 ‘the number of clients who involve this model parameter’: How is this number determined in general? - The proposed algorithm seems applicable only when ‘submodels‘ are well-defined, i.e., the individual clients do not have to update the full model parameters but instead, can download and update only the required small parts (submodel) of the complete model. On the other hand, typical local learning steps tend to require simultaneous updates of all model parameters, unless the model is linear. The authors should provide a detailed discussion as to when such submodels are well-defined and how the index set S(i) is determined in practice. Can this algorithm be applied to the standard MLPs? Minor comments - The proposed algorithm can be considered as diagonal preconditioning on stochastic gradient descent (SGD) (L167). The authors could discuss connections to existing SGD preconditioning methods, e.g., AdaGrad: AdaGrad preconditions the SGD update based on the magnitudes of individual parameter updates. Extending AdaGrad to the federated learning setting can be straightforward. - Please enlarge the plots in figures 3 and 4. The limitations and potential negative societal impact were not discussed. <doc-sep>The authors point out the fact that (in the context of federated learning) client’s local data normally involve a small subspace of the full feature space. 
Especially in the case of models that contain large sparse embeddings, this would mean that each client downloads and updates only a small part of the full global model (i.e., a submodel). As some features are more popular than others (e.g., words in a vocabulary), some embeddings will then be averaged by a larger fraction of clients than others. The authors then show that this discrepancy (called heat dispersion) might result in slower convergence of algorithms like FedAvg. They then propose a new method where each weight essentially has a different learning rate, based on how many clients participate in its update. They show both analytically and through an evaluation that this method improves the convergence speed. + The authors address an interesting problem in training FL models with sparse embeddings: the fact that not all of them are equally popular. + The authors conducted a theoretical analysis of their model. + They compared with 4 baselines, some of them designed to speed up convergence on non-iid data. + The results show that their methodology is promising. - There are some assumptions in the evaluation (see below in questions for details). - It is unclear how well this methodology would work with privacy-preserving mechanisms (e.g., local DP noise). Please see above (questions)
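To make the coordinate-wise rescaling described in these reviews concrete, here is a minimal NumPy sketch of server-side aggregation that scales each coordinate's averaged update by the ratio between the total number of clients and the number of clients whose submodel contains that coordinate. The dense representation and all variable names are illustrative assumptions; the paper's actual implementation may differ.

```python
import numpy as np

def fed_sub_avg(global_params, client_updates, client_index_sets):
    """client_updates[i]: dense update vector from client i (zeros outside S(i));
    client_index_sets[i]: boolean mask of the coordinates client i actually holds."""
    n_clients = len(client_updates)
    summed = np.sum(client_updates, axis=0)
    # number of clients involving each coordinate (avoid division by zero)
    counts = np.maximum(np.sum(client_index_sets, axis=0), 1)
    # plain FedAvg would use summed / n_clients; rescaling by n_clients / counts
    # lets rarely-involved ("cold") coordinates take effectively larger steps
    return global_params + (summed / n_clients) * (n_clients / counts)

# toy example: 3 clients, 4 coordinates, only client 0 holds coordinate 3
masks = np.array([[1, 1, 1, 1], [1, 1, 1, 0], [1, 1, 1, 0]], dtype=float)
updates = masks * np.ones((3, 4))
print(fed_sub_avg(np.zeros(4), updates, masks))  # the cold coordinate gets the full step
```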
This paper considers a particular FL scenario, where the model includes a large embedding layer as is typical in NLP and recommendation models. To make training feasible or more efficient, the FedSubAvg method is proposed. In particular, it deals with the setting where not all features are equally encountered in the training data. This is leveraged to reduce communication and computation overhead, and also to improve optimization dynamics. The proposed approach comes with theoretical guarantees, and the paper also provides a thorough numerical evaluation demonstrating benefits over other approaches. The reviewers raised concerns about the relevance and potential narrowness of the setup, the assumptions, and whether the proposed FedSubAvg approach would be compatible with privacy-enhancing technologies like secure aggregation and differential privacy. It is clear that the setup considered is indeed relevant given the prevalence of models with large embedding layers in NLP and recommendation models, and the usefulness of such models in several applications. Given this, it isn't necessary for the authors to demonstrate any relevance to training standard MLPs since that isn't the focus and no claims are made in the paper about such architectures. The authors' responses were also convincing that the approach can be made compatible with DP and secure aggregation in a reasonable way. I'm happy to recommend that this paper be accepted. When preparing the camera-ready version, to make the paper accessible to a broader audience, it would be helpful to include (in the intro, or early in the paper) additional material and references to motivate the relevance of models with large embedding layers, in addition to the key revisions already made in response to the initial reviews.
In this paper, the authors propose to use contrastive learning for matching in latent space in the Wasserstein autoencoder (WAE). In addition, they employ techniques such as momentum contrast in contrastive learning. Experimental results show that the proposed method, MoCA, is more stable and converges faster than existing methods. It is also capable of generating high-resolution images such as CelebA-HQ. Strengths: - Overall, this paper is well organized. - The method proposed in this paper is a reasonable combination of existing state-of-the-art methods. - Experimental results show that the proposed method has stability and faster convergence, which is promising. Weaknesses: - The authors claim that MoCA can generate images with high quality. However, the experimental results in this paper do not show this very well. First of all, although the authors claim that the results in Figure 5 and Figure 6 are "look realistic", some of the face images seem to be collapsed, and the interpolation between the two images seems to be discontinuous. Since there is no qualitative comparison with existing methods, we cannot judge these methods as "realistic". In Table 1, the authors show the quantitative comparison results with the existing methods, but there are some puzzling points. First, why does MoCA-A2, which has fewer parameters, have higher performance? Also, why does WAE-GAN perform better than MoCA? Since Table 1 shows that WAE-GAN is better than MoCA, shouldn't it be compared with WAE-MMD? Furthermore, why are the quantitative results of CelebA-HQ not shown? - The authors show in their experiments that MoCA achieves faster convergence than existing methods, but they do not fully explain why MoCA shows such convergence. Why does the convergence of the proposed method become better when contrastive learning is included? In addition, Figure 1 shows the line graphs of only one training trial for each method, and the variance of each method is not shown. Therefore, I cannot judge whether the difference in results between methods is large or small. - In Section 4.1, the authors compare WAE-MMD with MoCA as the original WAE algorithm, but I do not understand why they do not compare it with WAE-GAN, which is also original. I think this should be done because Table 1 shows that WAE-GAN has better image generation performance than the proposed method. For example, the authors employ MoCo, but how effective is this in improving the performance of the proposed method? The authors do not seem to have verified such a thing. Minor comments: - The significant figures of the results of each method in Table 1 should be the same. - Section3: any fixed t -> any fixed \\tau In terms of convergence and stability, the proposed method is considered to be effective to a certain extent. Also, the idea of using contrastive learning for WAE is interesting. However, the explanation of the claim and the presentation of the results are insufficient. <doc-sep>The paper presents a regularization technique for Wasserstein Auto-Encoders, based on contrastive learning. Strengths: 1. The paper is well-written and easy to follow. I do think that some of the notation such as "push forward" from measure theory, is really not needed or particularly useful here. Simpler terminology such as just using encoding and decoding functions would be more than sufficient. 2. Some of the experiments are interesting and show the effects of the proposed regularization e.g. on the singular value distribution of the latent representation. 3. 
Using a contrastive approach is a potentially effective way to match the prior and posterior distributions. Weaknesses: It is unclear that the proposed regularizer results in qualitatively better reconstructions than the baselines. FID is not a perfect measure, and the samples from baselines should be shown side-by-side with the proposed approach to know whether there is indeed an improvement. I found the CIFAR-10 reconstruction results somewhat poor. Question: the value of the lambda parameter is very large - what are the relative loss values during training/convergence (the reconstruction loss vs. the regularizer loss)? The paper is interesting but has some shortcomings. I would like to see some results of the baselines to decide if the proposed regularizer does indeed improve results qualitatively. I do not believe that FID is a proper measure of quality (not just for this paper but for measurement of GAN sample quality, in general). I give the paper a slightly positive score based on the idea, but I am looking forward to some samples in the rebuttal to decide my final score. <doc-sep>This paper proposes a new approach to train Wasserstein auto-encoders (WAE) with contrastive learning techniques. Specifically, the paper proposes to enforce the marginal matching constraint of WAE by exploiting the fact that contrastive learning objectives optimize the latent space distribution to be uniform over the unit hypersphere. I notice this is a re-submission from ICLR-2021. Thus some of my comments are based on the differences between the two versions. ## Strengths 1. The paper is well written and well motivated. 2. I think the idea of using contrastive learning to enforce a hypersphere prior for WAE is clever and neat. 3. The authors provide extensive ablations on hyperparameters. ## Weaknesses 1. My main concern is the performance of the proposed method on CIFAR10 and CelebA. The interpolations, reconstructions, and samples in Figure 6 are very blurry, and it is hard to justify the benefit of using the proposed approach. The reported FIDs in Tables 1 and 2 are very high. It would be nice to include a comparison with [1] (which has FIDs of 5.25 and 24.08 on CelebA and CIFAR10 respectively). Also, why is the two-stage VAE baseline in the previous version removed? 2. It would be nice to include WAE-GAN in Figures 1 and 2, since it outperforms the proposed MoCA in Table 1. 3. I think it would be interesting to see how to integrate the instance contrastive loss as in DC-VAE [2] into the proposed MoCA. [1] Aneja, Jyoti, et al. "Ncp-vae: Variational autoencoders with noise contrastive priors." arXiv preprint arXiv:2010.02917 (2020). [2] Parmar, Gaurav, et al. "Dual contradistinctive generative autoencoder." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021. The main idea of the paper is well motivated. However, I still find the results on image datasets such as CIFAR10 and CelebA insufficient to justify the superiority of the proposed method over the baselines. I lean towards weak rejection but am willing to amend my score if my concerns are addressed.
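For readers unfamiliar with the mechanism these reviews refer to, here is a rough PyTorch sketch of an InfoNCE-style contrastive regularizer on L2-normalized latent codes, which encourages the aggregate posterior to spread uniformly over the unit hypersphere. This is one plausible reading of the approach; the function, temperature, and two-view setup are illustrative and not the authors' exact objective (which additionally uses a momentum encoder).

```python
import torch
import torch.nn.functional as F

def latent_contrastive_loss(z_query, z_key, temperature=0.1):
    """z_query, z_key: (B, d) latent codes of two views of the same inputs
    (e.g. from an online encoder and a momentum encoder)."""
    q = F.normalize(z_query, dim=1)
    k = F.normalize(z_key, dim=1)
    logits = q @ k.t() / temperature             # (B, B) cosine similarities
    labels = torch.arange(q.size(0), device=q.device)
    # matching pairs sit on the diagonal; all other codes act as negatives,
    # pushing codes apart and toward uniformity on the hypersphere
    return F.cross_entropy(logits, labels)

# hypothetical usage: total = reconstruction_loss + lam * latent_contrastive_loss(z1, z2)
```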
This paper presents a variant of the WAE which uses a contrastive criterion to enforce the marginal distribution matching constraint. Experiments show faster convergence in terms of Wasserstein distance, more visually appealing samples, and better FID scores compared with other WAE models. The original WAE framework leaves open the choice of approximation for enforcing marginal distribution matching, and the original paper gives two such algorithms. Therefore, it's pretty natural to replace this approximation with something else (such as the contrastive criterion used here), so a submission would need to show evidence that it's significantly better than other approaches. Reviewers have expressed various concerns about the experiments. None of them are major problems, but overall the method doesn't seem consistently better than other WAE methods; e.g., the FID score is worse than that of WAE-GAN. I encourage the authors to take the reviewers' comments into account in preparing the submission for future cycles.
This submission proposes a novel loss function, based on Maximum Mean Discrepancy (MMD), for knowledge transfer (distillation) from a teacher network to a student network, which matches the spatial distribution of neuron activations between the two. The proposed approach is interesting but there is significant room for improvement. In particular: *Clarity* It is not clear how the distributions of neuron activations are matched between the teacher and student networks. C_T and C_S are not defined specifically enough. Do they include all layers? Or only a specific layer (such as the last convolution layer)? *Interpretability* Section 4.1 tries to interpret the approach, but it is still not clear why matching distributions is better. The proposed MMD loss could run into problems if the classification task does not involve "spatial variation". For example, for an extremely simple task of classifying three classes "R", "G" and "B", where the whole image has the uniform color R, G, or B respectively, the spatial distribution is uniform and the proposed MMD loss would be 0 even if the student network's channels do not learn discriminative feature maps. Another example is when a layer has H=W=1. *Significance* The experiments show that the degree-two polynomial kernel gives better results, but Sec. 4.3.2 mentions that the method is equivalent to Li et al. (2017b) in this case. *Practical usefulness not justified* In the experimental section, the student network's number of parameters and FLOPS are not detailed, so it is unclear how much efficiency gain the proposed method achieves. Note that in practice small networks such as MobileNet and ShuffleNet have achieved a significantly better accuracy-efficiency trade-off than the teacher networks considered here (either for CIFAR10 or for ImageNet1k). *Improvement not significant* The results obtained by the proposed approach are not very significant compared to "KD" alone.<doc-sep>This paper targets knowledge distillation from a large network to a smaller network. The approach is summarized by equations (3) and (4), which in short propose that one should use the maximum mean discrepancy (MMD) of the network activations as a loss term for distillation. On CIFAR image classification tasks, it is shown that only when using a specific quadratic polynomial kernel (which, as described in https://arxiv.org/pdf/1701.01036.pdf, is tantamount to applying neural style transfer) is the proposed approach able to match the performance of the seminal paper of Hinton et al. Moving on to ImageNet, the proposed approach is only able to match the performance of standard knowledge distillation by adding the quadratic term (texture, in neural style synthesis jargon). This is actually a sensible proposal. Yet, the claims about MMD as a way of explaining neural style transfer have appeared in the paper cited above, which the authors mention. The idea of transferring from one domain to another using MMD as a regularizer appeared in https://arxiv.org/pdf/1605.06636.pdf by Long et al --- indeed equation (3) of this paper matches exactly equation (10) of Long et al. Note too that Long et al also discuss which kernels work well and which work poorly due to vanishing gradients, and propose parametrised solutions. This is something this paper failed to do. The two works cited above make me wonder about the novelty of the current paper. In fact, this paper ends up being an application of the neural style transfer loss function to network distillation.
As such, this could be useful, if not already done by someone else previously. I find that the paper is poorly written, with many typos, and lacks focus on a single concrete story. The CIFAR experiments fail to use KD+NST (i.e., the combination that works for ImageNet - neural style transfer), and section 5.3 appears trivial in light of the cited works. For all these reasons, I am inclined to reject this paper. <doc-sep>This paper proposes a simple method for knowledge distillation. The teacher and student models are matched using MMD objectives, and the authors demonstrate that different choices of matching kernel specialize to previously proposed variants of knowledge distillation. - The extensive evaluation suggests that MMD with a polynomial kernel provides better results than previously proposed methods. - It is interesting to see that MMD-based transfer has a larger advantage on the object detection tasks. - Can the authors provide more insight into the behavior of different kernels? For example, visualizing the gradient map might help us understand why certain kernels work better than others. - Did you consider translation invariance or other spatial properties when designing your kernels? In summary, this is an interesting paper with good empirical results. The proposed generalization is technically quite straightforward, but the paper also includes a good amount of discussion on why the proposed approach could be better, and I think that really helps the reader.
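To make the loss discussed in these reviews concrete, here is a hedged PyTorch sketch of an MMD term with the quadratic polynomial kernel k(a, b) = (a . b)^2 between teacher and student activation maps. Treating each spatial location's channel vector as a sample is my own reading of the setup; the normalization and sampling choices in the paper may differ.

```python
import torch
import torch.nn.functional as F

def poly2_mmd2(x, y):
    """Biased MMD^2 estimate with kernel k(a, b) = (a . b)^2.
    x: (n, d) samples from one distribution, y: (m, d) samples from the other."""
    return ((x @ x.t()).pow(2).mean()
            + (y @ y.t()).pow(2).mean()
            - 2 * (x @ y.t()).pow(2).mean())

def activation_mmd_loss(f_student, f_teacher):
    # f_*: (B, C, H, W) feature maps; each spatial position gives a C-dim sample.
    # Assumes the teacher and student channel counts already match
    # (e.g. via an adaptation layer).
    B, C, H, W = f_student.shape
    s = F.normalize(f_student.reshape(B, C, H * W).transpose(1, 2), dim=2)
    t = F.normalize(f_teacher.reshape(B, C, H * W).transpose(1, 2), dim=2)
    return torch.stack([poly2_mmd2(s[b], t[b]) for b in range(B)]).mean()
```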
The paper presents a sensible algorithm for knowledge distillation (KD) from a larger teacher network to a smaller student network by minimizing the Maximum Mean Discrepancy (MMD) between the distributions over students and teachers network activations. As rightly acknowledged by the R3, the benefits of the proposed approach are encouraging in the object detection task, and are less obvious in classification (R1 and R2). The reviewers and AC note the following potential weaknesses: (1) low technical novelty in light of prior works “Demystifying Neural Style Transfer” by Li et al 2017 and “Deep Transfer Learning with Joint Adaptation Networks” by Long et al 2017 -- See R2’s detailed explanations; (2) lack of empirical evidence that the proposed method is better than the seminal work on KD by Hinton et al, 2014; (3) important practical issues are not justified (e.g. kernel specifications as requested by R3 and R2; accuracy-efficiency trade-off as suggested by R1); (4) presentation clarity. R3 has raised questions regarding deploying the proposed student models on mobile devices without a proper comparison with the MobileNet and ShuffleNet light architectures. This can be seen as a suggestion for future revisions. There is reviewer disagreement on this paper and no author rebuttal. The reviewer with a positive view on the manuscript (R3) was reluctant to champion the paper as the authors did not respond to the concerns of the reviewers. AC suggests in its current state the manuscript is not ready for a publication. We hope the reviews are useful for improving and revising the paper.
This paper discusses an algorithm for variational inference in non-linear dynamical models. The model assumption is a first-order Markov model in the latent space, with every latent variable Z_t Gaussian distributed with a mean that depends on Z_(t-1) and a time-invariant covariance matrix Lambda. The nonlinearity of the transition is encoded in the mean of the Gaussian distribution. For the likelihood and observation model, a Poisson or Normal distribution is used, with X_t sampled from a Gaussian or Poisson distribution whose parameters encode the nonlinearity as a function of Z_t. This way of modeling resembles that of many linear dynamical models, with the difference that the transition and observation distributions have a nonlinear term encoded in them. The contribution of this paper can be summarized in the following points: - The authors propose the nonlinear transition and observation model and introduce a tractable inference scheme using a Laplace approximation, in which, for every given set of model parameters, one solves for the parameters of the Laplace approximation of the posterior, and the model parameters are then updated until convergence. - The second point is to show how this model successfully captures the nonlinearity of the data while other, linear models do not have that capability. Novelty and Quality: The main contribution of this paper is summarized above. The paper does not contain any significant theorems or mathematical claims, except the derivation steps for finding the Laplace approximation of the posterior. The main challenge here is to address the effectiveness of this model in comparison to other non-linear dynamical systems; one can name papers as early as Ghahramani, Zoubin, and Sam T. Roweis. "Learning nonlinear dynamical systems using an EM algorithm." Advances in neural information processing systems. 1999. or more recent RNN/LSTM-based papers. I think the authors need to make clear what this paper gives to the community, besides an approximate posterior over the latent variables, that other competing models are not capable of. If the aim is to obtain that posterior, the authors should show what type of interpretation they have drawn from it in the experiments. There is also a lot of literature on speech, language models, and visual prediction that can be used as references. Clarity: The paper is well written and some previous relevant methods have been reviewed. There are a few issues that are listed below: 1- As mentioned in the Quality section, the authors should be clearer about what distinguishes this paper from other non-linear dynamical systems. 2- They use the short forms RM for recognition model and FPI for fixed point iteration, which need to be defined before being used. Significance and experiments: The experiments are extensive and the authors have compared their algorithm with some other competing linear dynamical systems (LDS) algorithms, showing improvements in many of the cases for trajectory reconstruction. A few points could be addressed better: for many of the experiments an exhaustive search is used for finding the dimension of the latent variable. This issue is addressed in Kalantari, Rahi, Joydeep Ghosh, and Mingyuan Zhou. "Nonparametric Bayesian sparse graph linear dynamical systems." arXiv preprint arXiv:1802.07434 (2018). That paper uses non-parametric approaches to find the best latent dimension; although it applied the technique to a linear system, the same technique could be adopted for non-linear models.
Also, that model is capable of finding multiple linear systems that capture the nonlinearity by switching between different linear systems; for switching linear systems, this paper can be named as well: Linderman, Scott, et al. "Bayesian learning and inference in recurrent switching linear dynamical systems." Artificial Intelligence and Statistics. 2017. It is shown that the model can reconstruct the spikes very well while linear models do not have that power (which is expected), but it would be interesting to see how other non-linear models compare to this model under those conditions. It would also be desirable and interesting to see how the model behaves for one-step-ahead and K-step-ahead prediction. Please explain why this cannot be done if there are difficulties in doing so.<doc-sep>I'll start with a disclaimer: I have reviewed the NIPS 2019 submission of this paper, which was eventually rejected. Compared to the NIPS version, this manuscript has significantly improved in its completeness. However, the writing can still be improved for rigor, consistency, typos, completeness, and readability. The authors propose a novel variational inference method for a locally linear dynamical system. The key innovation is in using a structured "parent distribution" that can share the nonlinear dynamics operator with the generative model, making it more powerful. However, this parent distribution is not directly usable, since it is an intractable variational posterior. Normally, this would prevent variational inference, but the authors take another step by using a Laplace approximation to build a "child distribution" with a multivariate Gaussian form. During inference, the child distribution is used, but the parameters of the parent distribution can still be updated through the entropy term in the stochastic ELBO and the Laplace approximation. They use a clever trick to formulate the usual optimization in the Laplace approximation as a fixed point update rule and take one fixed point update per ADAM gradient step on the ELBO. This allows the gradient to flow through the Laplace approximation. Some of the results are very impressive, and some are harder to evaluate due to the lack of proper comparisons. For all examples, the forward interpolation (really forecasting with a smoothed initial condition) provides a lot of information. However, it would be nice to see actual simulations from the learned LLDS for a longer period of time. For example, is the shape of the action potential accurate in the single cell example? (It should be, since the 2 ms predictive r^2 is around 80%.) Except in Fig 2, the 3 other examples are only compared against GfLDS. Since GfLDS involves nonconvex optimization, it would be reasonable to also request a simple LDS as a baseline to make sure it's not an issue of GfLDS fitting. For the r^2=0.49 claim on the left-to-right brain prediction, how does a baseline FA or CCA model perform? Was the input current ignored in the single cell voltage data? Or did you somehow include the input current in the observation model? As for the comment on Gaussian VIND performing better at explaining the variance of the data even though it was actually count data, I think this may be because you are measuring squared error. If you measured point process likelihood or pseudo-r^2 instead, Poisson VIND may outperform. Both your forecasting and the supplementary results figure show that Poisson VIND is definitely doing much better! (What was the sampling rate of the Guo et al data?) The supplementary material is essential for this paper.
The main text is not sufficient to understand the method. This method relies on the fixed point update rule operating in a contractive regime. The authors mention in the appendix that this can be *guaranteed* throughout training by appropriate choices of hyperparameters and network architecture. This seems to be a crucial detail but is not described!!! Please add this information. There's a trial index suddenly appearing in Algorithm 1 that is not mentioned anywhere else. Is the ADAM gradient descent in Algorithm 1 just one step or multiple? MSE -> MSE_k in eq 13. The LFADS transition function is not deterministic (page 4). log Q_{phi,varphi} is quadratic in Z for the LLDS case; the text shouldn't say 'includes terms quadratic in Z' (misleading). "Regular gradient ascent update" needs a reference (page 4). Due to the Laplace approximation step, you don't need to infer the normalization term of the parent distribution; this is not described in the methods (page 3). Eq 4 and 5 are inconsistent in notation. Eq (1-6) are not novel, but the text suggests they are. Predict*ive* mean square error (page 2). The introduction could use some rewriting. arXiv papers need better citation formatting.<doc-sep>The paper presents a variational inference approach for locally linear dynamical models. In particular, the latent dynamics are drawn from a Gaussian approximation of the parent variational distribution, enabled by Laplace approximations with fixed point updates, while the parameters are optimized via the resulting stochastic ELBO. Experiments demonstrate the ability of the proposed approach to learn nonlinear dynamics, explain data variability, forecast, and infer latent dimensions. Quality: The experiments appear to be well designed and support the main claims of the paper. Clarity: The clarity is below average. In Section 2 the main method is introduced. However, the motivation and benefits of introducing a parent and a child variational approximation are not discussed adequately. It would be helpful to move some of the material in the appendix to the main text and present it in a neat way. I also struggled a little to understand the difference between forward interpolation and filtering. Originality: Given the existing body of literature, I found the technical novelty of this paper rather weak. However, it seems the experiments are thoroughly conducted. In the tasks considered, the proposed method demonstrates convincing advantages over its competitors. Significance: The method should be applicable to a wide variety of sequential data with nonlinear dynamics. Overall, this appears to be a borderline paper with weak novelty. On the positive side, the experimental validation seems well done. The clarity of this paper needs to be strengthened. Minor comments: - abstract: uncover nonlinear observation? -> maybe change "observation" to "latent dynamics"?
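As a reading aid for the interleaved scheme described in the second review (one fixed-point refinement of the Laplace/child distribution per Adam step on the ELBO), here is a heavily simplified PyTorch-style sketch. The function names and signatures are placeholders of my own, not the authors' code.

```python
import torch

def train_step(Y, z, model, optimizer, fixed_point_update, neg_elbo):
    """One interleaved step: refine the Laplace (child) mean with a single
    fixed-point update, then take one Adam step on the stochastic ELBO.
    `fixed_point_update` and `neg_elbo` are assumed helpers, not the paper's API."""
    z = fixed_point_update(z, Y, model)  # one contraction step toward the posterior mode
    loss = neg_elbo(Y, z, model)         # includes the entropy of the child distribution
    optimizer.zero_grad()
    loss.backward()                      # gradients also flow through the fixed-point step
    optimizer.step()
    return z.detach()                    # carry the refined mode into the next step
```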
The reviewers in general like the paper but have serious reservations regarding its relation to other work (novelty) and the clarity of the presentation. Given that non-linear state-space models are a crowded field, it is perhaps better that these points are dealt with first and the paper then submitted elsewhere.
The authors propose to encapsulate the update rule for a neural net into a look-up table specifying weight changes for each combination of "pre-synaptic" input to the weight, and "post-synaptic" activation of the unit receiving that incident connection. They learn the elements of this matrix by gradient descent, and then use that learned update rule to train neural nets on a new task. This is motivated by a separation of timescales biologically, wherein learning rules might be evolved over long timescales, and then act within each brain over shorter ones. There is a nice discussion of related previous work, but it misses a few key items that, to me, diminish somewhat the novelty of this work. That's okay: being first isn't everything. But I think it is important to point out to readers what is new and better about this work vs previous work. a) The auto ML zero paper from Quoc Le et al. (arXiv 2003.03384). They learn both architectures and learning rules via simulated evolution b) Andrychowicz, M., Denil, M., Gomez, S., Hoffman, M. W., Pfau, D., Schaul, T., ... & De Freitas, N. (2016). Learning to learn by gradient descent by gradient descent. In Advances in neural information processing systems (pp. 3981-3989). They use GD to learn plasticity rules. c) A few recent papers on bio-plausible backprop-type algorithms. i) Burst-dependent synaptic plasticity can coordinate learning in hierarchical circuits. Biorxiv: https://doi.org/10.1101/2020.03.30.015511 from Naud and colleagues ii) Guerguiev, Jordan, Timothy P. Lillicrap, and Blake A. Richards. "Towards deep learning with segregated dendrites." ELife 6 (2017): e22901. iii) Sacramento, João, Rui Ponte Costa, Yoshua Bengio, and Walter Senn. "Dendritic cortical microcircuits approximate the backpropagation algorithm." In Advances in neural information processing systems, pp. 8721-8732. 2018. Aside from the relation to prior work, I have a few technical and conceptual questions / comments: 1) Fig. 2: were all three nets given the same initialization? That could matter for comparing the training curves of accuracy vs. training time because a good initialization could give one learning rule an apparent advantage. And given the accuracy at t=0, it doesn't look like they are the same. 2) I like that the authors studied generalization of the learned rule between tasks: that is important (although, SGD also generalizes well). I'm a bit less impressed by the performance obtained in the MNIST and fashion MNIST tasks. At the same time, using two-factor rules (update is a function just of pre- and post-synaptic inputs) to solve MNIST sounds hard given that there's no credit assignment signal. I think that the authors would be well served to read up on the papers on bioplausible deep learning, and consider variants of this work that include a credit assignment signal. <doc-sep>Summary: This is a fascinating paper which push the bounds of artificial neural network knowledge and understanding, while questioning the typical approach of considering various neural networks branches in isolation. Concretely, it shows that by using concepts from neuroevolution together with deep learning concepts, we can learn how to learn learning rules (i.e. plasticity rules) which can generalize across tasks and can train neural networks which are more robust to adversarial attacks than typical networks trained with stochastic gradient descent. Strong points: • In my opinion, the paper is visionary. 
It answers a few questions while opening the path for a large number of new research directions and new unanswered questions. • The paper has a very well-balanced content of novelty, math, computer science, neuroscience, and even philosophy. • The paper is very well written and anchored in a multidisciplinary literature. It has the potential of becoming a "must read" paper in the future. Weak points: • The datasets used, including MNIST and Fashion-MNIST, are rather simple. It would be very interesting to see how the approach behaves on more complex datasets. During the discussion phase, I would recommend that the authors address the following comments: 1. Time permitting, try to also perform experiments on CIFAR 10/100. I believe it would be interesting to see, on CIFAR 100, the behavior of three types of learned plasticity rules: (1) plasticity rules learned on the simple datasets, (2) plasticity rules learned on CIFAR 10, and (3) plasticity rules learned on a subset of the CIFAR 100 training set. 2. Are you encountering problems with the ReLU activation in recurrent networks, such as exploding or vanishing weights? Does your approach also work with the hyperbolic tangent? 3. I believe it would help the paper's clarity if you added a table towards the end of the paper to summarise the main results in terms of accuracy, training time, etc. 4. Perform a proof-read of the whole paper to improve the English usage and the presentation. For instance: typos (e.g. "rule, The next theorem"), unit measures for axis labels (e.g. figure 5 – accuracy [%]), etc. <doc-sep>This paper uses meta-learning to search for novel, local learning rules in artificial neural networks. This is done by parameterizing local learning rules for feedforward and recurrent neural networks, then using mini-batch gradient descent to update the parameters of the learning rule. The authors argue that this is a promising strategy for discovering the learning rules used by biological systems, with three main contributions: (1) they provide proofs that this approach does what we would hope it would do when applied to a single linear layer; (2) experiments demonstrate meta-learning for simple non-linear and recursive architectures; and (3) the authors provide an argument that evolution could replace gradient descent as the method of searching over possible learning rules. The authors also show through experiments that models trained with these non-gradient methods are more robust to (gradient-based) adversarial attacks. Overall, I thought this paper was well written and provided interesting arguments and proofs. However, the experiments are not enough to support the main claims, so I think this paper is borderline. Pros: While the main idea of using meta-learning to search for new bio-realistic learning algorithms is not new, the particular formulation used here with recursive neural networks was new as far as I know, and I found this idea very interesting. Most of the work in this area has been focused on feed-forward networks, but as the authors emphasize, recurrent neural networks add a whole new dimension to the space of biologically-plausible, local learning algorithms. An example of such an algorithm is Recirculation, described in Hinton and McClelland 1987, "Learning Representations by Recirculation", which is closely related to Feedback Alignment (Baldi and Sadowski 2018, "Learning in the machine: Recirculation is random backpropagation").
Cons: My main criticism is that the experiments are not enough to show that the "discovered" learning rule does anything useful in the RNN. As the authors admit, one can achieve good performance in a multi-layer NN by fixing the random weights in the hidden layers and by training only the output layer. I would have really liked to see how the meta-learned algorithm compared to fixed, random representations. The experimental performance of the meta-learned algorithm on MNIST is quite poor (~80% test accuracy, Figure 3), so it's unclear what is going on. The results of the adversarial robustness experiments are not surprising. Adding additive or multiplicative noise during training will also make the trained networks more robust. I think these experiments actually distract from the main ideas of the paper. It would have been better to more carefully explore whether the learning algorithm can learn more difficult functions. Section 2 describes two possible alternatives for the plasticity rules: one that incorporates information about the error, and one that does not. I think it is important to highlight the fact that the latter is an unsupervised plasticity rule --- the meta-learning algorithm has access to the target output, but the local plasticity rule does not. So while a plasticity rule trained on Dataset 1 does have some information about the Dataset 1 targets (by way of the meta-learning updates), when it is applied to Dataset 2 it never receives any information about Dataset 2's target, and is thus unsupervised. This is an important distinction between the two approaches.
Interestingly, the space of rules explored by this method seems to include standard models of biological plasticity, such as spike-timing dependent plasticity, as well as more complex "triplet" rules (see e.g. https://www.pnas.org/content/108/48/19383). There's probably some interesting work to be done in this direction (for future work of course, not for this introductory paper).
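To make the lookup-table parameterization discussed in these reviews concrete, here is a small PyTorch sketch of a two-factor rule for binary units, where a meta-learned 2x2 table indexed by (pre, post) activity determines each weight change and is itself trained by gradient descent in an outer loop. The shapes and names are illustrative assumptions, not the paper's exact formulation.

```python
import torch

# Meta-parameters of the plasticity rule: one entry per (pre, post) binary combination.
rule = torch.randn(2, 2, requires_grad=True)

def apply_rule(W, pre, post, eta=0.1):
    """W: (n_out, n_in) weights; pre: (n_in,) and post: (n_out,) binary activities.
    delta_W[i, j] = eta * rule[pre[j], post[i]] -- a purely local, two-factor update."""
    pre_idx = pre.long().unsqueeze(0).expand(post.size(0), -1)    # (n_out, n_in)
    post_idx = post.long().unsqueeze(1).expand(-1, pre.size(0))   # (n_out, n_in)
    return W + eta * rule[pre_idx, post_idx]

# Outer loop (schematic): unroll several apply_rule steps on a task, compute the
# task loss, and backpropagate into `rule` with an optimizer such as Adam.
```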
This paper explores meta-learning of local plasticity rules for ANNs. The authors demonstrate that they can meta-learn purely local learning rules that can generalize from one dataset to another (though with fairly low performance, it should be noted), and they provide some data suggesting that these rules lead to more robustness to adversarial images. The reviews were mixed, but some of the reviewers were very positive about it. Specifically, there are the following nice aspects of this work: A) The meta-learning scheme has interesting potential for capturing/learning biological plasticity rules, since it operates on binary sequences, which appears to be a novel approach that could help to explain things like STDP rules. B) It is encouraging to see that the learning rules can generalise to new tasks, even if the performance isn't great. C) The authors provide some interesting analytical results on convergence of the rules for the output layer. However, the paper suffers from some significant issues: 1) The authors do not adequately evaluate the learned rules. Specifically: - The comparison to GD in Fig. 2 is not providing an accurate reflection of GD learning capabilities, since a simple delta rule applied directly to pixels can achieve better than 90% accuracy on MNIST. Thus, the claim that the learned rules are "competitive with GD" is clearly false. - The authors do not compare to any unsupervised learning rules, despite the fact that the recurrent rules are not receiving information about the labels, and are thus really a form of unsupervised learning. - There are almost no results regarding the nature of the recurrent rules that are learned, either experimental or analytical. Given positive point (A) above, this is particularly unfortunate and misses a potential key insight for the paper. 2) The authors do not situate their work adequately within the meta-learning for biologically plausible rules field. There are no experimental comparisons to any other meta-learning approaches herein. Moreover, they do not compare to any known biological rules, nor papers that attempt to meta-learn them. Specifically, several papers have come out in recent years that should be compared to here: https://proceedings.neurips.cc/paper/2020/file/f291e10ec3263bd7724556d62e70e25d-Paper.pdf https://www.biorxiv.org/content/10.1101/2019.12.30.891184v1.full.pdf https://proceedings.neurips.cc/paper/2020/file/bdbd5ebfde4934142c8a88e7a3796cd5-Paper.pdf https://openreview.net/pdf?id=HJlKNmFIUB https://proceedings.neurips.cc/paper/2020/file/ee23e7ad9b473ad072d57aaa9b2a5222-Paper.pdf And, the authors should consider examining the rules that are learned and how they compare to biological rules (e.g. forms of STDP), if indeed biological insights are the primary goal. 3) The paper needs to provide better motivation and analyses for the robustness results. Why explore robustness? What is the hypothesis about why these meta-learned rules may provide better robustness? There is little motivation provided. Also, the authors provide very little insight into why you achieved better robustness and insufficient experimental details for readers to even infer this. This section requires far more work to provide any kind of meaningful insight to a reader. What was the nature of the representations learned? How are they different from GD learned representations? Was it related to the ideas in Theorem 4? Note: Theorem 4 is interesting, but only applies to a specific form of output rule. 
4) In general, the motivations and clarity of the paper need a lot of work. What are the authors hoping to achieve? Biological insights? Then do some analyses and comparisons to biology. More robust and generalisable ML? Then do more rigorous evaluations of performance and comparisons to other ML techniques. Some combination of both? Then make the mixed target much clearer. 5) The authors need to tidy up the paper substantially, and do better at connecting the theorems to the rest of the paper, particularly for the last 2 theorems in the appendix. Also, note that Theorems 2 & 4 appear to have no proofs. Given the above considerations, the AC does not feel that this paper is ready for publication. This decision was reached after some discussion with the reviewers. But the AC and the reviewers want to encourage the authors to take these comments on board to improve their paper for future submissions, as the paper is not without merit.
This paper considers the problem of meta-learning in a multi-agent environment under the assumptions that: * the learning agent's policy evolves over time as a function of the other agents' actions * the other agents' policies potentially evolve using the learning agent's actions. The policy learning problem is assumed to be Markovian. The meta-learning problem is considered to be that of finding the best initial policy parameters (that will subsequently be evolved according to the learning dynamics) so as to maximize the agent's cumulative marginal payoff. The paper is very well written, easy to read and relatively straightforward in its exposition. I do not have any big remarks about the writing except that the authors may want to rethink the term "defective persona" to avoid the weird double meaning. A sufficient amount of related work is presented and the lineage of the ideas is traced convincingly well. The main contribution of this paper is to extend the ideas of Al-Shedivat et al. in a way that exposes the other agent's learning dynamics to the policy optimization (as opposed to treating them as a non-stationarity). The policy gradient form corresponding to this setting is derived in Theorem 1. The approach is evaluated in a synthetic experiment using iterated games as well as a somewhat less synthetic experiment on a quarter-cheetah problem (each agent controls a leg of the half-cheetah). I think that while the paper is incremental, the point that is raised within is rather intriguing. If anything, my main criticism is that the authors could have gone for a more challenging setting than iterated games. E.g. recent results (https://arxiv.org/pdf/1901.08654.pdf) indicate that in settings like collaborative exploration, being aware of the other player's learning dynamics is important for achieving a better outcome. Perhaps the policy gradient approach can solve issues that cannot be addressed straightforwardly within the bandit framework. Another question is whether the approach can be used successfully to tune the inner learning process, e.g. by incorporating the policy gradient step size and other hyper-parameters into phi_0. Overall I think this is a solid paper, which would benefit significantly from more ambitious problems. <doc-sep>This paper provides a full derivation of the meta-learning gradient estimate for multi-agent adaptation. The resulting algorithm combines the meta-learning of the opponent's updates (as in LOLA) and of one's own future updates (as in Meta-PG). While the theoretical part of the paper is clear and well explained, the experimental setup is missing many details needed to interpret the results: - In each experiment, it seems (but is never explicitly stated) that "agent i" (agent 1, since all experiments involve 2 players) is running the meta-learning algorithm (meta-MAPG, meta-PG or LOLA) while the other (agent 2) is a naive agent initialised with defective/cooperative policies. - In that case: how are the naive agents updated? With simple policy gradient? - How many lookahead steps are used (denoted by L in the algorithms)? - Why did LOLA fail at learning to cooperate with the cooperative opponents? (It should have learned to cooperate, unless the naive agents are still doing selfish PG updates -- and in that case, the meta-MAPG results are very impressive.) - Are the opponent's policies given or learned (i.e. with opponent modelling)?
Also, it would have been interesting to see an ablation study showing the importance of the "own learning" and "peer learning" terms in equation 6 (from the same implementation with fixed HP). Have the authors tried it? <doc-sep>This paper studies meta-learning in multi-agent reinforcement learning. It proposes a meta multi-agent policy gradient method that considers the learning processes of other agents in the environment for fast adaptation. This method can be seen as a unified framework of previous methods (Al-Shedivat et al. (2018) and Foerster et al. (2018a)). The method outperforms previous methods in two matrix games and 2-agent HalfCheetah. Pros: - The method is simple and well motivated. It additionally takes peer learning into consideration, compared to Al-Shedivat et al. (2018). - The method unifies the benefits of Al-Shedivat et al. (2018) and Foerster et al. (2018a). - The method greatly outperforms these two methods in two matrix games. Cons: - Like LOLA, the method needs access to the policy parameters of other agents, while Al-Shedivat et al. (2018) do not. This may be impossible in mixed and competitive environments. How to deal with this? - In the experiments, most questions are answered by the two matrix games. This is not fully convincing since the state space is very limited. Why not choose RoboSumo from Al-Shedivat et al. (2018) as an experiment? - For the two matrix games, the opponent policy is limited compared to complex environments such as HalfCheetah. Although out-of-distribution opponents are tested, this is less informative for generalization. Why not test out of distribution for HalfCheetah? - "The out of distribution has a smaller overlap between meta-train/val and meta-testing distribution." What exactly is the out of distribution? - The experimental results need to be elaborated further. Why do Meta-PG and LOLA perform similarly to REINFORCE? --- **After rebuttal** The responses address my main concerns. I have increased the score to 6. But I also agree with other reviewers that the novelty of this paper is somewhat limited. <doc-sep>This paper points out that a key challenge in MARL is the non-stationarity of other agents' policies, as opposed to previous papers which only account for non-stationarity of the environment. The paper extends (Al-Shedivat et al., 2018) by directly conditioning the meta-policy on a distribution of other agents' policies. In my opinion, the major contribution of this paper is a new multi-agent meta-learning framework that explicitly accounts for the dynamics of all agents. Strengths of the paper: 1) A new perspective in MARL that considers the non-stationarity of MARL in terms of the dynamics of the other agents' policies 2) A new theoretically grounded algorithm that explicitly models the policy dynamics of all agents Weaknesses of the paper: 1) Except for the new perspective of incorporating the policy dynamics of other agents, the backbone of the paper (i.e., a meta-RL-based framework to mitigate the non-stationarity of MARL) is inherently the same as (Al-Shedivat et al., 2018). The novelty is somewhat limited. 2) In experiments, the paper answers several questions that show the effectiveness of the new algorithm. However, this is restricted to the two-agent setting. It is questionable whether such a framework can perform well in settings with more than two agents. Question: does the proposed framework generalize to >2-agent scenarios? If yes, what is the reason that the authors did not conduct empirical evaluations in these scenarios?
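To illustrate the mechanism these reviews describe (differentiating through both the agent's own future policy-gradient updates and the peers' updates), here is a schematic PyTorch sketch. The surrogate/return functions, the step size, and the single-opponent setup are illustrative assumptions, not the paper's implementation.

```python
import torch

def meta_objective(theta0, phi0, surrogates, ego_return, lr=0.1, L=2):
    """theta0: ego agent's initial policy params; phi0: opponent's params.
    surrogates(theta, phi) -> (J_theta, J_phi): differentiable policy-gradient
    surrogate objectives; ego_return(theta, phi): ego agent's expected return."""
    theta, phi = theta0, phi0
    total = ego_return(theta, phi)
    for _ in range(L):                                        # L lookahead learning steps
        j_theta, j_phi = surrogates(theta, phi)
        g_theta = torch.autograd.grad(j_theta, theta, create_graph=True)[0]
        g_phi = torch.autograd.grad(j_phi, phi, create_graph=True)[0]
        theta, phi = theta + lr * g_theta, phi + lr * g_phi   # both agents adapt
        total = total + ego_return(theta, phi)                # own- and peer-learning effects stay in the graph
    return total  # maximize with respect to theta0 (the meta-parameters)
```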
This paper studies the problem of multi-agent meta-learning. It can be viewed as extending Al-Shedivat et al. (2018) by incorporating the dynamics of other agents. The reviewers praised the clear writing and theory. There were two main concerns. The first concern is the novelty when compared to Al-Shedivat et al. (2018). The second concern is the experiments, which could be more ambitious and are not always clearly described. The reviews of this paper were borderline, and this was not enough for acceptance.
This manuscript proposes a new graph neural net (GNN) method to learn the dynamics of a spatio-temporal PDE-driven dynamical system directly from data. The authors propose to do that using the finite element method (FEM). The proposed method builds on: a basis function approximation for the (unknown) field u, the Galerkin method with the assumption that the discrepancy between the dynamics F and the basis function approximation is orthogonal to the (finite) basis functions, the method of lines, and a message passing GNN as a proxy for the dynamics. The use of linear interpolation allows expressing the time derivative of $Y$ as the solution of a system of linear equations, which is further approximated to gain additional computational efficiency. The authors also propose a method to incorporate inductive bias into model learning for models that are assumed to contain a convection component. Overall the proposed method is well-motivated, and for the most part the description is clear. To my knowledge, the proposed method is novel and contains some methodologically new ideas, and the performance seems to be on par with previous methods that learn free-form dynamics, and shows an improvement for models that contain a convection component when such prior knowledge is utilised in the model training. The authors could address and/or clarify the following aspects: 1. I understand that a piece-wise linear basis simplifies computation by making some of the computational steps straightforward, but on the other hand the selected basis is the simplest and obviously not optimal from an approximation accuracy point of view. Can the method be extended to other bases? For example, if we knew that the dynamics $F$ contains a diffusion term $\\nabla^2u$ we would not be able to introduce it since the second derivative of PWL functions is zero everywhere. I think the discussion of possible limitations of the PWL basis and of possible extensions to higher-order bases is missing. 2. It seems that the measurement from the initial time point is used as the initial state? Why not introduce a separate free parameter for the initial state? 3. As described towards the end of the manuscript, the system is initialized with an initial state and then the PDE dynamics define how the system evolves over time. However, below eq. (12) it is noted that "where $u$ encodes the data $Y$ at time $t$ in its coefficients." Is $u$ defined based on the dynamically evolving system state or by data? 4. Paragraph "Network architecture" on page: scikit-learn is used to compute inner products between basis functions. Provide a brief description of how the computation is done. 5. End of page 6: gradients are back-propagated through an ODE solver. Why not use the adjoint method (possibly with checkpoints) as proposed in previous work and implemented e.g. in the comparison methods? Discrete back-propagation may not scale to longer sequences. Is this the reason why data trajectories are shortened? 6. Table 1: split the methods into two groups. Group 1 should include PA-DGN, GWN, CT-MPNN and FEN, which do not assume prior knowledge about the system dynamics. FEN performs similarly to CT-MPNN on ScalarFlow and perhaps slightly better on Black Sea data (MAE values within one std). Group 2 should include only T-FEN, which is specifically designed to learn convection systems and thus provides a small performance improvement.
7) In experiments, the authors make their models time- and position-dependent while the strongest baseline models (CT-MPNN) does not utilize neither time nor positions. That makes it hard to tell whether the improvements in performance of FEN and T-FEN are due to the models's structure and inductive biases or due to time and position dependencies. Authors should provide an ablation study to address this. 8. Provide additional comparisons on systems with larger variety of dynamics using simulated data (for which the ground truth is known) to better understand when FEN performs better/worse than comparison methods. 9. Datasets are subsampled to 1000 spatial points. Provide results for smaller and also larger spatial grids to demonstrate the argument that "approximation becomes arbitrarily good as the mesh resolution increases". To my knowledge, the proposed method is novel and contains some methodologically new ideas, and the performance seems to be on par with previous methods that learn free-form dynamics, and shows an improvement for models that contain a convection component when such prior knowledge is utilised in the model training. <doc-sep>The paper proposes a graph / simplicial neural network based on the finite element method for learning dynamics from data when only a finite number of samples exist and the true dynamics are not known or only partially known. # Strenghts - The paper looks at a very realistic setting for learning PDEs from data: a finite number of samples and (partially) unknown true dynamics. Tackling these problems concomitantly is of great practical importance and this makes the proposed method relevant for practical applications. - The introduction does a good job motivating the work and pinpointing the main challenges of learning dynamics from data. - As someone with little experience with the finite element method, Section 2.1 does a great job explaining the required background in the right amount of detail for understanding the paper. - The connection that the paper makes between the finite element method and message passing neural networks is interesting and, to the best of my knowledge, original. - The authors show how inductive biases can be added to the model by using a certain prior over the structure of the function $F$. - I like that the paper focuses on real-world datasets. Also, the datasets themselves are extremely interesting to visualize and make the paper more interesting. - Figure 3 provides a very insightful qualitative understanding of the proposed model compared to the baseline. - I am glad that the paper includes a super-resolution experiment. Often, models that work with a discretised space can be very sensitive to changes in the resolution of the mesh. Figure 4 shows that the proposed model is relatively robust to changes in the number of triangles. - The authors show (in a relatively specific setting) that factorizing the dynamics achieves a disentanglement effect which allows some degree of interpretability of the model. # Weaknesses - While I appreciate the focus on real-world datasets, a synthetic experiment where the model could have been evaluated in a more systematic way would have been useful. - The trick to stabilize training described at the end of Section 3 is slightly peculiar. How important is this trick? Do the authors have any results for when this trick is not used? Could this trick improve the performance of the baselines as well? - The importance of the approximation from Equation (13) is not studied. 
Perhaps that is something that could have been tried in the more controllable synthetic setting I was suggesting above. In general, I would be interested to know what are the costs of this approximation and if more advanced approximations might be worth being considered to boost performance. - Minor suggestion: The paper frames the model as a hypergraph neural network. However, the authors might want to be aware that there is a recent line of work developing simplicial and cell complex neural networks: https://arxiv.org/abs/2103.03212 (ICML 2021), https://arxiv.org/abs/2106.12575 (NeurIPS 2021), https://arxiv.org/abs/2010.03633. Since the model learns a function over the 2-simplices in the simplicial complex, the model is probably more accurately described as a type of simplicial neural network. The weaknesses reported above are relatively minor and far outweighed by the strengths of the paper. Therefore, I recommend the paper for acceptance. <doc-sep>This paper proposes a new model for learning partial differential equations from data. The PDE is first discretized then solved as an ODE. The dynamics function is learned with Message-Passing Neural Networks, where the function is split into a sum of physically informed terms. This splitting both improves model performance and makes the model more interpretable by disentangling the dynamics. The model is tested rigorously against multiple baseline models, and the results show the new model performs well. Strengths: - The paper is well written and well presented. On the whole it was relatively easy to understand and the diagrams definitely contribute to that. - The proposed model is well motivated and nicely extends ideas such as Graph Neural ODEs to the PDE domain. - The paper considers the issues with a naive implementation in depth, (which is that the model would be slow). It then provides the solution to this. - Rigorous tests on the proposed model are carried out against robust baselines. - There is an extensive review of related work. - A lot of effort has been put in to make the results reproducible, including all experimental details and code/datasets to come. Weaknesses/Questions: - My main concern is that training has been carried out over 10 time steps, was this a hyperparameter that was tuned? I agree that the correlation over say 30 steps will be minimal, however my understanding is that all models learn an update based on the current state and potentially a few steps in the past. Would it not still be possible to train over more time steps? Could T-FEN extrapolate better than GWN if it is trained on a larger time range? - One of the proposed reasons GWN outperforms T-FEN at extrapolation is that it can use the past time-steps as input. This could be extended to T-FEN in the form of delay differential equations (https://arxiv.org/abs/2102.10801), where the dynamics takes the state at $(t)$ and the state at $(t-\\tau)$ as input. This would make the model use the past as well. Would it be possible to carry out an ablation study using this? - The paper could benefit from a discussion section. Saying what the model is good at and bad at. For example, we see that it performs very well in the times used to train, but is not as good at extrapolation. - In table 1, for the ODE based method, the number of function evaluations are provided to show T-FEN is faster than FEN and CT-MPNN. Is it possible to time the evaluation to support this claim further? That way one could also compare to the other two baselines GWN and PA-DGN. 
- Is it possible to move the related work section, either to the introduction or to just before the conclusion? It breaks the flow. - What is the reason for using the $L_1$ loss over the $L_2$ loss? Could we expect better/worse results with $L_2$? - Could the model be improved even further by using more physically informed terms? For example, like those in electro-magnetism $\\dot{v}=qv \\times B$, where $B$ is learnt (https://arxiv.org/abs/2109.07359). Or, because the experiments use data of fluids, could it help to include Laplacian or curl terms in the dynamics, which appear in Navier-Stokes? Is it possible to carry out an ablation? - The mass matrix $A$ is approximately inverted by lumping the matrix, are there situations where this approximation could lead to errors? - Is it possible to extend this method where we expect higher order PDEs, that include terms such as $\\frac{\\partial^{2}u}{\\partial t\\partial x}$? - The model disentangles the dynamics into a convection term and the remainder. How does this relate to disentangling the dynamics of an Augmented Neural ODE (https://arxiv.org/abs/1904.01681) into a velocity and an acceleration (https://arxiv.org/abs/2006.07220), which can also aid the interpretability of the ODE? - I don’t entirely understand the Black Sea dataset. Is the mean temperature the mean temperature of the entire sea or of a region, say $1m^2$? If it is of the whole sea, does this not remove the spatial element of the task? Additionally are there any interesting effects appearing over long time periods due to global warming/concept drift? Given that the training regime is taken over 2012-2017 and the testing regime is in 2019? Minor Points: - The paragraph just above equation 15 has a small mistake. The manuscript says “This would prohibit training even with the adjoint equation (Chen et al., 2018)”, while the paragraph is about the speed of training. The benefit of the adjoint method is that it is memory efficient but slow compared to directly backpropagating through an ODE solver, which is fast but uses a lot of memory. - There is a typo at the bottom of page 8, “disappears Extrapolationat the” - There is a typo at the bottom of page 16, “ODE-base models” should be “ODE-based models” The paper is well written, the model (to my best knowledge) is novel, with the method building on existing work. The experiments rigorously test the model against the necessary baselines and information is given in the appendices on reproducing the results. Therefore I recommend acceptance, with a few clarifications to be made. **EDIT** I have increased my confidence score from 3 to 4 after my initial questions have been answered. <doc-sep>The author proposes a method for forecasting in Partial Differential Equations by coupling Finite Element Method on an arbitrary grid with the learning of the dynamics from data. For this purpose a variant of message passing based graph networks is used. It is show in the paper that it's possible to incorporate priors on the structure of the PDE that results in an interpretable solution. The model also show more stability to changes of the mesh structure in test time (like superresolution) and to extrapolation than competitors. The paper uses Message Passing Neural Networks to implement a Finite Element Method with learnable dynamics. It combines different models and techniques from the literature, but it clearly does that in a nontrivial way including modifications, therefore it is more than just derivative work. 
I find it valuable that the author also considers performance of implementation, and discusses consideration on GPU architecture related performance issues. Also the effort put into making the work reproducible adds value to the paper. The method can clearly have practical value, and the discussion of the method is clear and quite detailed. I am not as familiar with the PDE/NN literature as for example the Neural ODE literature, therefore I cannot rule out entirely that something similar already exists. Questions / actionable comments: I) "The results show that FEN and T-FEN provide the smallest prediction error on both datasets with a further boost due to the separate transport term in T-FEN." I feel this is a bit too strong statement. While I tend to accept the paper with the results in Table 1 given the other clear benefits of the method (like robustness and interpretability) I am not comfortable with this statement. The table is based on 3 repeats, and have results for example: Black Sea dataset CT-MPNN 0.944 ± 0.003 vs. FEN 0.938 ± 0.005, these confidence intervals are clearly overlapping and using 3 samples I am a bit skeptical. This is one of the strongest gaps in the table. In the ScalarFlow FEN is really on par with CT-MPNN. Did you used a statistical test to bold the results? Again, I have no problem supporting the paper even if the method is on par with the best competitor, but the statement in the paper should be supported by the statistics. Similarly, in the case of NFE CT-MPNN have approx +-60 deviation on the ScalarFlow dataset, I am not too comfortable to compare these numbers. II) What is the motivation of using L1 loss? instead of like MSE? III) In Experiments/Multi-step Forecasting section: Time horizon choice is well motivated, but still feels arbitrary in some sense. Does the author see some way to formalize what we accept as "meaningful dynamics"? How one should choose a comparison horizon for example. Can it be done at least approximately without domain knowledge on the system? IV) In Model/Network architecture section: Order invariance of cell vertices is assured by ordering the nodes canonically. Would the author expect improvement if a permutation invariant network, like a Set Transformer be used here? Why or why not? In some sense the message aggregation step, summation being permutation invariant, is a set network, but not a very expressive one. This however makes the cell order invariant for a given node and not the other way around. The paper give valuable contribution, the method expected to be practical, robust, and in some cases interpretable. I find the statement on raw prediction error overly strong.
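Several questions in these reviews (the approximation in Equation (13), the lumped mass matrix, back-propagating through the ODE solver) revolve around the same method-of-lines formulation, in which a FEM mass matrix couples the node-wise time derivatives. The sketch below is only illustrative: a 1D piecewise-linear toy mesh and a placeholder dynamics function stand in for the paper's triangular meshes and learned message-passing term.

```python
import numpy as np

# Method of lines with a FEM mass matrix: A @ dY/dt = F(Y), where A collects the
# inner products of the piecewise-linear basis functions and F is the learned term.
n, h = 5, 1.0 / 6                       # interior nodes and element length (toy 1D mesh)
A = np.zeros((n, n))
for i in range(n):                      # standard P1 mass matrix on a uniform grid
    A[i, i] = 2 * h / 3
    if i + 1 < n:
        A[i, i + 1] = A[i + 1, i] = h / 6

def F(Y):                               # placeholder for the learned dynamics
    return -Y

Y = np.ones(n)

# Exact formulation: solve the linear system at every evaluation of dY/dt.
dY_exact = np.linalg.solve(A, F(Y))

# Lumped-mass approximation: replace A by the diagonal of its row sums, so
# "inverting" it becomes an element-wise division.
A_lumped_diag = A.sum(axis=1)
dY_lumped = F(Y) / A_lumped_diag

print(dY_exact)
print(dY_lumped)
```

Whether this row-sum lumping is exactly the approximation used in the paper is for the authors to confirm; the point is only that it trades a sparse solve for a division, which is where both the efficiency argument and the approximation-error question originate.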
This paper introduces a graph neural network (GNN) based on the finite element method (FEM) for learning partial differential equations from data. The proposed finite element network is based on a piecewise linear function approximation and a message passing GNN for dynamics prediction. The authors also propose a method to incorporate inductive bias when learning the dynamical model, e.g. including a convection component. The paper received three clear-accept recommendations and one weak accept. The reviewers discussed possible extensions of the method, and also raised several concerns regarding the experiments, e.g. the added value of a synthetic dataset, implementation tricks or hyper-parameter settings. The rebuttal did a good job in answering the reviewers' concerns: after the rebuttal, there was a consensus among reviewers to accept the paper. The AC's own reading confirmed the reviewers' recommendations. The paper is well written and introduces a solid contribution at the frontier of GNNs and finite element methods, notably a pioneering graph-based model for spatio-temporal forecasting derived from FEM. Therefore, the AC recommends acceptance.
Strengths: 1) The experiments are extensive, and clearly demonstrate the merits as compared to prior benchmarks for off policy RL. 2) The contextual discussion is clear, well-motivates the proposed approach, and gives a nice overview of how importance sampling and off policy RL intersect. Weaknesses: 1) Theorem 1 seems vacuous. The proof is a simple exercise in elementary calculus -- one may easily show the minimizer of a quadratic is the least squares estimator. The authors need to better explain what is the technical novelty of this statement, and why it is meaningful. Upon inspection it does not seem to qualify as a theorem. This is also true of Theorems 2 and 3. Therefore, I feel the conceptual contribution is not enough to warrant acceptance. 2) The notion of successor representation seems identical to the occupancy measure, which in order to estimate, requires density estimation, which is extremely sample inefficient. Can the authors comment about how to estimate the successor representation efficiently? There is very little discussion of sample complexity throughout, which is somewhat alarming because a key selling point of off-policy schemes for RL is that they alleviate the need for sampling from the MDP transition dynamics. 3) The actual algorithm pseudo-code is missing from the body of the paper, which is permissible because it is in the appendix. However, the structural details of how the algorithm works iteratively and how it departs from previous works are also not explained. That is, while derivation details are presented, iterative details are not. In my opinion this should be strictly required in the body of the paper, as well as contextual discussion of what is similar/different from previous works, but all I could find was high level presentation of objectives minimized at various sections, but not how they are interlinked. 4) The background discussion is disjointed. There is a preliminaries section on page 5, as well as a background section 3. Minor Comments: 1) References missing related to the sample/parameterization complexity issues associated with importance sampling: Koppel, A., Bedi, A. S., Elvira, V., & Sadler, B. M. (2019). Approximate shannon sampling in importance sampling: Nearly consistent finite particle estimates. arXiv preprint arXiv:1909.10279.<doc-sep>***Summary*** The paper proposes an approach to employ successor representation combined with marginalized importance sampling. The basic idea exploited in the paper consists of expressing the occupancies in terms of the successor representation and to model it via a linear combination of some features. This allows handling, although approximately, continuous state-action spaces. After having derived the objective function, an experimental evaluation on both Mujoco and Atari domains is presented, including an ablation study. ***Major*** - (About the linearity of the weight) Linear representations expressed in terms of a feature function are common in RL as the reward function can be often seen as a trade-off of different objectives encoded in the features. However, the choice of the linear representation in Equation (7) is based on the assumption that the marginalized weight is linear in the feature function. This assumption seems to me less justified compared to the one for the reward function. Clearly, a suitable feature design could overcome this limitation. Can the authors explain how the features \\phi are selected or learned? 
- (Experimental evaluation) The results presented in the experimental evaluation are partially unsatisfactory, as also the authors acknowledge. It seems that there is no clear benefit in employing the marginalized importances sampling (both the baselines and the proposed approach) compared to standard deep temporal difference approaches. The authors suggest that this phenomenon can be ascribed to the fact that the quality of the marginalized weights is affected by the successor representation learned. I don't think this is the main weakness of the paper, but a reflection of the usefulness of the method in complex scenarios is necessary. Alternatively, it would be interesting to compare the proposed approach with DualDICE and GradientDICE on simpler tasks (maybe toy ones) in which DualDICE and GradientDICE work well. ***Minor*** - The related work section should be moved later in the paper, maybe after Section 4 - Pag 2, two lines above Equation (2): the transition model is here employed as a distribution over the next state s' and the reward r, but the reward function is considered separately in the definition of MDP presented before - Figures 2, 3, and 4: the plots are not readable when printing the paper in grayscale. I suggest using different linestyles and/or markers ***Typos*** - Pag 2: isn't -> is not - Pag 2: doesn't -> does not - Pag 8: the the -> the ***Overall*** The paper can be considered incremental compared to DualDICE. I did not find any fault, but I feel that the significance of contribution is currently insufficient for publication at ICLR. In particular, for a paper that proposes a practical variation of a theoretically sound algorithm, the experimental evaluation is essential. I think that the results are currently unable to clearly show the advantages of the proposed method.<doc-sep>The paper proposes SR-DICE, which uses a successor representation to compute DICE (discounted stationary distribution correction term). * I am worried about both the technical and experimental qualities of this work. The theorems presented are either obvious or previously presented in other works. While the authors argue that the marginalized importance ratio is independent of the horizon (I assume that they are talking about the variance), MIS only alleviates the estimator variance's exponential dependence on the horizon to become the polynomial dependence on the horizon (as proved in Xie et al., Towards Optimal Off-Policy Evaluation for Reinforcement Learning with Marginalized Importance Sampling, 2019). In the experiments, it is hard to believe that the GradientDICE and DualDICE perform that poorly having Log MSE larger than 0 while the GenDICE paper reports Log MSE less than -4 (HalfCheetah). * The paper uses $\\phi$ and $\\psi$ learned by the previous deep successor representation learning algorithm, which is not meant to be used to learn marginal importance ratio. In particular, $\\phi$ is learned by minimizing state, action, and reward reconstruction error and $\\psi$ is the discounted sum of $\\phi$. If we consider a case where $\\pi$ only exploits a very small subset of state-action space, it is easy to see that the reconstruction error minimization in the dataset is not an optimal representation for the marginal importance ratio learning. In this sense, only the linear vector $w$ is used for the learning of marginal importance ratio. * The experiment setting is not fair. 
Direct-SR and SR-DICE in their implementation have effectively 2 hidden layers, where DualDICE and GradientDICE in their implementation have a single hidden layer. * The paper is hard to follow. Especially, notation abuse between the real reward and the virtual reward which is optimized to give a marginal importance ratio is very confusing (abuse between real Q and minimizer Q as well). Section 4.2 is also confusing because the authors imposes the problem of DualDICE that is not actually handled by SR-DICE. * The idea of adopting successor representation for learning marginal importance ratio seems quite novel. * Some people will be interested in this work, but I think the paper would not have much impact on the field. Overall, PROS: * The idea of using successor representation for learning marginal importance ratio is novel. * Avoids minimax formulation of other DICE algorithms, which makes the optimization very hard. CONS: * Not very meaningful theoretical results are presented, which mostly just confuse readers. * Uses the representation that is not learned for marginal importance ratio learning * Questionable experiment results Minor details: * y axis label is "Log MSE" for figures although the y axis is log scaled MSE. -------------------------------------- Most of the concerns are addressed by the authors, and I raised my score accordingly. <doc-sep>The authors propose SR-DICE based on deep SR for density ratio learning. Empirical advantages are observed in tested domains. Overall I think the idea is interesting and theoretically sound, but the experiments are not fully convincing. It looks the main claim is that SR-DICE is better than other MIS methods because SR-DICE delegates the update propagation over the MDP to SR, while other MIS methods consider update propagation and density ratio learning together. To me this claim is not coupled with function approximation at all, so I would like to first see some experiments in the tabular setting. SR-DICE is a two-stage learning algorithm, i.e., SR learning + density ratio learning, both have hyperparameters to be tuned. GradientDICE and DualDICE are one-state learning algorithm. If in the tabular setting, we can empirically verify that under the best hyperparameter configuration of each algorithm (guaranteed by a thorough grid search), SR-DICE is more data efficient (counting the samples used in both stages) than GradientDICE and DualDICE, in terms of the density ratio prediction error, then the argument can be well backed. Well-controlled experiments like this, however, do not appear in the current submission. Once deep networks are used for function approximation, we run into the problem of representation learning. The authors should at least include one more experiment, where MIS methods run directly on the pretrained deep SR features \\psi_\\pi(s) and/or \\phi(s). In this way, we can distinguish whether the empirical advantage of SR-DICE comes from SR-DICE itself or the improved representation learning. I'm also interested in seeing experiments for larger gammas, e.g., 0.999, 0.9999. I'm wondering if SR-DICE can consistently outperform GradientDICE with increasing discount factors. Overall, I'm happy to increase the score if I have any misunderstanding or more convincing results are presented. I appreciate that the authors include deep TD and behavior R(\\pi) as baselines. The empirical study has independent interest beyond SR-DICE. Moreover, deep TD is also referred to as Fitted-Q-Evaluation (FQE) in [x]. 
x: Voloshin, Cameron, et al. "Empirical Study of Off-Policy Policy Evaluation for Reinforcement Learning." arXiv preprint arXiv:1911.06854 (2019). ======================= (Nov 24) The author response addressed my concerns and I therefore raised my score from 5 to 6. I particularly like the idea of using successor representation for density ratio learning.
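Since several comments above turn on the linear-in-features modelling of the marginalized importance weight and on the least-squares nature of the theorems, a small illustrative sketch may help fix ideas. The feature matrix and targets below are synthetic placeholders, not SR-DICE's learned successor features or its actual objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: each row of Psi plays the role of a feature vector psi(s, a),
# and y plays the role of the regression targets in the quadratic objective.
N, d = 1000, 8
Psi = rng.normal(size=(N, d))
w_true = rng.normal(size=d)
y = Psi @ w_true + 0.1 * rng.normal(size=N)

# The minimizer of 0.5 * ||Psi w - y||^2 is the least-squares estimator, which is
# the elementary fact the "Theorem 1 seems vacuous" remark above refers to.
w_hat, *_ = np.linalg.lstsq(Psi, y, rcond=None)

# Under the linearity assumption, the modelled density ratio is then linear in psi:
ratios = Psi @ w_hat
print(np.abs(w_hat - w_true).max(), ratios[:3])
```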
The paper is about an approach that combines successor representation with marginalized importance sampling. Although the reviewers acknowledge that the paper has some merits (interesting idea, good discussion, extensive experimental analysis) and the authors' responses have solved most of the reviewers' issues, the paper is borderline and the reviewers did not reach a consensus about its acceptance. In particular, the reviewers feel that the contributions of this paper are not significant enough. I encourage the authors to modify their paper by taking into consideration the suggestions provided by the reviewers and try to submit it to one of the forthcoming machine learning conferences.
In this paper, the authors assess the quality and reliability of open-ended systems such as GPT-3 and Codex from the perspective of cognitive biases. The authors primarily focus on the application of code generation and design examples that elicit commonly made errors by these systems on the tasks of code generation and completion. They consider 4 different cognitive biases: (1) framing effect; (2) anchoring bias; (3) availability heuristic; and (4) attribute substitution. The main contributions: - Creation of a framework that contains hypotheses and prompts that extend existing cognitive bias methodology. The hypotheses and examples created can measure the model's sensitivity to these transformations. The framework is also capable of discovering high-impact errors and other types of errors made by the model. Strengths & Weaknesses: - The paper studies the important problem of robustness of large-scale LMs such as Codex and GPT-3 on the task of code generation. Using cognitive bias as a way to measure the robustness of these models is an interesting and novel approach. - Results in the paper show that manipulating the prompts exhibits different learning effects and thus results in significant drops in performance. However, claiming these to be exhibits of cognitive biases seems to be a bit of a stretch. Most of the errors discovered could be novel from the perspective of code generation, but prior literature in the areas of robustness, fairness and prompt engineering has shown that these large-scale models are susceptible to unwanted context added to prompts. The authors address limitations of systems such as GPT-3 and Codex. However, there is no mention of the limitations of designing prompts.
Nevertheless, it is an interesting way to look at adversarial examples and might be beneficial as it summarizes some important patterns of failure. Particularly, patterns such as framing (noise in context), and anchoring are interesting as they have wide applicability. There is no clear section dedicated to the limitations of the current work - maybe the authors should think about situations where cognitive bias framework would fail to elicit errors. Also, are there cases where output of certain prompt attacks can never be anticipated? <doc-sep>This paper explores the robustness of large language models through the lens of 4 human cognitive biases - framing, anchoring, attribute substitution, and availability. Results show that OpenAI's Codex program synthesis model is not robust to changes in prompts that are inspired by these 4 biases. **Strengths** * Well-written and easy to understand for the most part (especially introduction and some figures) * Categorizes a few types of errors for program synthesis models and develops interesting evaluations for them (i.e. functional accuracy + custom evaluations for each error type) * Experiments demonstrate that Codex, and likely other language model-based program synthesis models, are not robust to certain types of perturbations * Example of anchoring applied to GPT is especially interesting, since it shows a similarity between how language models (LMs) and humans behave **Weaknesses** * Paper seems to over-generalize contribution - e.g. claims to study "failure modes of large language models", but contributions are rather specific to prompt robustness. I would expect a paper with this general of a title and abstract to study more errors more comprehensively (e.g. anaphora resolution, entailment, long context, etc.). I recommend narrowing down the claims made in this paper * Contribution is relatively simple (changing prompts and measuring robustness). Could expand scope by increasing types of cognitive biases tested, or more experiments on GPT * Missing some key details for reproducibility and understanding * Needs details on how functional accuracy is checked (e.g. environment, open source code) * Figure 3 - unclear what is going on. Maybe add full example and/or more explanation in caption * Figure 5 - clarify what colors mean, and what is input vs. generated * Section 5 - what examples are used here, and how many? How were examples generated and evaluated? * Section 3.2.3 on Availability bias unsubstantiated * L213-214 - “Our results suggest that Codex can err by outputting solutions to related, frequent prompts in the training set.”. Need to show evidence that unary operations applied before binary in training set to make this claim. Recommend sampling training set and counting unary-first vs. binary-first Yes <doc-sep>This paper proposes to use human cognitive biases as inspiration for types of errors that large language models may make, as a means of finding and catching such errors. They run experiments on four such cognitive bias-inspired errors on code generation with Codex models. They also include two additional sets of smaller experiments, one with GPT-3 with a pre-existing human cognitive bias experiment, and another with Codex on code deletion as a means of emphasizing high-impact errors. 
As a preamble, my view on the use of human cognitive biases as a motivation in this paper is that it is a useful and valid approach for coming up with potential error categories, but ultimately the results in the paper can be interpreted and have value independent of the associated human cognitive biases. Because of this, and my relative unfamiliarity with human cognitive bias work, I will focus my comments on evaluating the experiments primarily from the point of view of a machine learning researcher. Strengths: - The paper is exceedingly clear, well-written and easy to follow. The examples of the prompt formats are extremely welcome, and papers that do prompt-based experiments would do well to follow this example. - The experiments are well-structured and well-motivated. While I do have concerns regarding some of the experimental setup (described below), the logical flow from human cognitive biases to a proposed equivalent model experiment to the metrics being measured is generally clear, reasonable, and covers the questions a critical reader might have (e.g. also measuring copying of the irrelevant prompt). Weaknesses: - Experimental setup / evaluation. My major concern with this paper is the experimental design, and whether the experiments correspond to the research question being asked. I will discuss my concerns with each of the experimental setups here. This section includes both what I consider weaknesses as well as questions/additional results I would like to see. 1) Framing. I have an issue with the choice of "irrelevant preceding function" and the interpretation of model behavior, in that I do not share the authors' interpretation that the prepended functions are irrelevant. For instance, given a prompt:
```
def add(x, y):
    """Adds numbers"""
    raise NotImplementedError()

def multiply(x, y):
    """Multiplies numbers"""
```
I *would* expect the model to output `raise NotImplementedError()`. This looks like a file that is work-in-progress, and to my mind the model is correctly extrapolating from the prompt. In the authors' words, I do not believe this is "semantically irrelevant information". A better example (and this overlaps somewhat with anchoring) would be something like:
```
def divide(x, y):
    """Divides numbers"""
    assert y != 0
    return x / y

def multiply(x, y):
    """Multiplies numbers"""
```
In this case, if the model outputs `assert y != 0`, then we can conclude that the model is actually using the prepended function in an erroneous/unhelpful way. Put another way, if the experimenter wants the model to generate functional code, they should prompt *with* functional code. This should be a precondition for all experiments involving functional accuracy. 2) Anchoring. I think the setup and results in this experiment are clearer than for framing. One concern I have is with the confounding factor of the anchor function having the same name as the function to be completed. An experiment with a similar name instead (e.g. common1 vs common2) would clarify this result. 3) Availability. The authors conjecture without evidence that "Programmers tend to apply unary operations first", and use this as the basis for their conclusion that Codex has learned to output "related prompts that occur more frequently in the training set". From the results presented, we can take away that the model appears more biased toward unary-first solutions, but it seems difficult to draw conclusions relating to the training set without further evidence.
I also have a milder concern regarding the formatting of "docstrings" in this and the attribute substitution experiments. Docstrings should go under the function signature and describe the function. The "docstrings" in these two experiments are provided above the function signature, and written in the form of an instruction "Write a function that...", which is not how docstrings are used in practice, and therefore should not be part of the expected behavior of the model. 4) Attribute substitution. My primary concern with this experimental setup is that it introduces contradictory function names and docstrings (see also the discussion above), and asserts that the model ought to take the dosctrings as the ground truth and that the function names are misleading. It is not clear to me that this has to be that case. I could conversely argue that in practice, documentation updates often lag behind code changes, and that the function name is a more reliable source-of-truth than docstrings. Under this interpretation, what the results in Table 2 show is that the model mostly "correctly" adheres to the function name and is only mildly affected by the "incorrect" docstring information. Even in this lens, I think the results are still valuable, because the conflicting information appears to introduce a set of "Other errors". I recommend the authors revisit and rethink the design and interpretation of this experiment. (Admittedly, this is confounded in the "docstring" sub-experiment, since the only source of information is the dosctring.) 6) Deletion This suffers from the same issue as the attribution substitution experiments, though not to the same degree (`delete_all` can be reasonable seen to be context dependent). That said, I wonder if the results would be different if the function were named more helpfully (`delete_all_with_libraries`). In summary, while I think this approach for finding model errors is valid and worthwhile, the current experiments fall short of demonstrating the biases in a convincing manner. Addressing these issues would make a meaningful difference to my review. - Scale of the experiments. The experiments tend to be on a small scale with a small-ish number of examples. Particularly given the potential of models to be sensitive to the exact phrasing of prompts, this is a moderate but not large concern. - Lack of evaluation on open models. As the authors acknowledge, the Codex model is private and many details regarding the architecture and training are not public. This makes it difficult to ascertain, for instance, that the models have no had additionally curated data to avoid some of these errors (i.e. a more naively model may perform even worse than reported in the paper). While I acknowledge that this paper is primarily focused on Codex and duplicating experiments on an additional model with a separate setup is significant additional work for the authors, I think it is fair to say that experiments solely on a set of non-public models with many unknowns does hurt the scientific contribution of the paper and calls into question whether these results generalize. Even a small set of experiments on a similar more public model (e.g. CodeGen) that demonstrate equivalency of results would help assuage the reader. The authors have framed the work in terms of being helpful to catch errors of large language models, which I think is reasonable. 
The work does have experiments testing the propensity for models to indiscriminately delete files, but I believe the experiments are conducted within a reasonable and contained setting. <doc-sep>Authors highlight the fragile nature of the GPT-3 style code generation models. Inspired from cognitive biases research, authors design different manual input transformations (prompt tuning) where they add some irrelevant information to the model. Using experiments with Codex model, they show that model's code generation performance drop significantly when they fed modified input to the model. It's worth highlighting that the proposed transformations are generally non-adversarial in nature. Strengths: - Proposed transformations are very intuitive and are well-motivated by prior research in cognitive science. Weaknesses: - While the transformations described in the paper are intuitive, some of them are hard to scale. In particular, one has to design such transformations manually for a given programming language setup. Further, authors have not provided an easy to use list of transformations (or dataset version) which other researchers can reuse for future experiments. - Prior work [1] show that problem description, context are crucial to execution accuracy. This paper's experiments also underline that execution accuracy highly depends upon the prompt (context) fed to the model and minor changes in the context can lead to big drop in execution accuracy. I am not sure that this paper offers new insights to the community. - Authors crafted these prompts manually and show that all of these transformations lead to significant accuracy drop. It's not clear 1) how many different prompts they tried for a given experiment. 2) Have they reported results for all transformations or only for those transformations which lead to accuracy drop. - Paper is missing crucial details related to how the code was generated. In particular, it's not clear which decoding method (greedy, sampling with temperature, etc.) is used to generate the data. Without such crucial details, it's hard to replicate the results presented in the paper. I don't see any major concerns related to negative societal impact.
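To make the anchoring control suggested earlier in this review thread (an anchor with a similar rather than identical name, e.g. common1 vs common2) more concrete, a hypothetical prompt pair could look like the following; the function bodies are invented for illustration and are not taken from the paper's benchmark.

```python
# Hypothetical anchoring-control prompt: the anchor function "common1" has a body with
# a distinctive quirk (treating None as zero). The function to be completed, "common2",
# has a similar but not identical name, so if the model reproduces the quirk we can
# attribute it to anchoring on the preceding code rather than to name matching.
def common1(values):
    """Return the sum of a list, treating None entries as zero."""
    return sum(v if v is not None else 0 for v in values)

def common2(values):
    """Return the sum of a list."""
```

Comparing how often the None-handling line is copied in this setup versus the identical-name setup would isolate the confound the reviewer describes.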
This paper aims to qualitatively categorize errors by large language models for generation tasks (i.e., summarization, program synthesis), drawing inspiration from 4 human cognitive biases: framing effect, anchoring bias, availability heuristic and attribute substitution. The work includes case studies on Open AI's Codex and GPT-3, and demonstrates that Codex makes predictable mistakes based on the framing of the input prompt, outputs that are closer to anchors and instances that are similar to frequent training examples. The paper uses the proposed framework to elicit high-impact errors. The paper is interesting and is clearly presented, the experiments are well-designed. The rebuttal includes additional experiments suggested by the reviewers with different prompts and additional models.
The paper evaluates two moving average strategies for GAN optimization. Since exact theoretical analysis is difficult for this case, some informal considerations are provided to explain the performance gain. Experiments confirm the high performance of averaging. The basic idea seems to be reasonable. A moving-average-based strategy would stabilize the optimization process. The obvious weakness of the paper is technical novelty. Although the experimental improvement is confirmed, I would have to say that just comparing two known averaging methods would not have strong novelty. Section 3.1 would be the most important part of the paper, but it only mentions a quite general tendency of averaging (which seems not specific to GANs).<doc-sep>This paper tries to adapt the concept of averaging, well known in the game literature, to GAN training. In a simple min-max example the iterates obtained by the gradient method do not converge to the equilibrium of the game but their average does. This work first provides intuitions on the potential benefits of the exponential moving average (EMA) on a simple illustrative example and explores the effect of averaging on GANs. I think that the approach of this paper is interesting. I particularly like the experiments on Celeb-A (Fig 6 and 7) that seem to show that the averaged iterates change more smoothly (with respect to the attributes of the faces) during the training procedure. Nevertheless, I have some concerns about the claims of the paper and the experimental process. I'm surprised by the values of the inception score provided in Table 2, which do not seem to correlate with the sample quality in Fig. 3. Why did you not use the standard implementation of the inception score provided in Salimans et al. [2016]'s paper? I think that the effectiveness of EMA over uniform averaging is a bit overclaimed. - From a theoretical point of view uniform averaging works better (at least in your example in 3.1): If you (uniformly) average the periodic orbit you get a converging iterate. Moreover, concerning this toy example, note that this continuous analysis has already been introduced in [Goodfellow et al., 2016] and the Hamiltonian interpretation has already been provided in [Balduzzi et al. 2018]. However, I think that the intuition on the vanishing magnitude of the oscillation provided by EMA is interesting. - The continuous dynamics is actually different from the discrete one; I think that an analysis of the discrete case that is used in practice might be more insightful. - The comparison with uniform averaging is not fair in the sense that uniform averaging has no hyperparameter to tune: In figure 6 uniform averaging performs better than a poorly tuned EMA. A fair comparison would be for instance to propose a parametrized online averaging $\\theta_{MA}^t = \\frac{t - \\alpha}{t} \\theta_{MA}^{t-1} + \\frac{\\alpha}{t} \\theta_t$ and to tune it the same way $\\beta$ is tuned in EMA. Refs: Salimans, Tim, et al. "Improved techniques for training gans." Advances in Neural Information Processing Systems. 2016. Goodfellow, I. (2016). NIPS 2016 tutorial: Generative adversarial networks. arXiv preprint arXiv:1701.00160. Balduzzi, David, et al. "The Mechanics of n-Player Differentiable Games." ICML (2018). Minor comments: - In the introduction "gradient vector fields of the game may not be conservative (Mescheder et al. 2017)" and the related work "Mescheder et al. (2017) states that a reason for non-convergence is the non-conservative gradient vector of the players.": the notion of conservative vs.
non-conservative vector field is never mentioned in [Mescheder et al. 2017]. I think you are actually referring to the blog post on that paper https://www.inference.vc/my-notes-on-the-numerics-of-gans/. - In the Related work "can not" - "In fact, it has recently been established that the smooth (continuous-time) analogues of first order methods such as online gradient descent (follow-the-regularized leader) in bilinear zero-sum games are recurrent (i.e. effectively periodic) with trajectories cycling back into themselves." Can you provide a citation? - Some published papers are referenced as arXiv papers (for instance (Mescheder et al. 2017) and (Mescheder et al. 2018)); you should cite the published versions. <doc-sep>The submission analyzes parameter averaging in GAN training, positing that using the exponential moving average (EMA) leads to more well-behaved solutions than using moving averages (MA) or no averaging (None). While reading the submission, the intuitively given explanations for using EMA (cycling, mainly) seem reasonable. However, I do not think there is sufficient understanding of the (non-)convergence behavior in real-world GAN settings, and this submission does not contribute much to it. The theoretical underpinnings in Section 3.1 are quite thin, and focus on describing one particular example of a bilinear saddle problem, which is quite far from a typical GAN, as used e.g. in computer vision problems. Although interesting to read, I would not draw any wider-reaching conclusions from this carefully constructed example. Instead, the submission serves mainly as an experimental study on why EMA works better in some of the tested cases than MA/None. Main quantitative measures are the often-used IS and FID. It is clear from both the provided quantitative values as well as the provided qualitative images that either averaging method is likely better than no averaging. Unfortunately, IS and FID contradict each other somewhat for EMA vs. MA in Table 2, which is attributed to IS being [more] flawed [than FID]. Neither measure is flawless, however, which diminishes the usefulness of the numeric results somewhat. Well-designed human studies may be complicated to set up and costly to conduct, but these could provide additional confirmation of the usefulness of the proposed method. EMA introduces an additional hyperparameter, beta, which is only discussed very briefly, and only in the context of qualitative results. I missed a more thorough discussion of the impact of beta. Overall, the submission makes an interesting proposition (usage of EMA during GAN training), but falls short of convincing me that this is a useful thing to do in broader contexts. Overall originality is minor; projected significance is minor to medium. EDIT: After the rebuttal, resulting in several changes and additions to the paper, I am changing my rating from 5 -> 6.
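Since both reviews weigh EMA against (parametrized) uniform averaging, the two update rules are worth seeing side by side. The snippet below is a generic sketch with illustrative values, using the parametrized running average proposed in the review above (alpha = 1 recovers the plain uniform average); it is not tied to any particular GAN implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4                      # stand-in for the generator's parameter vector
theta_ema = np.zeros(d)    # exponential moving average of the iterates
theta_ma = np.zeros(d)     # parametrized running (uniform) average
beta, alpha = 0.999, 1.0   # alpha = 1 gives the plain uniform average

for t in range(1, 1001):
    theta = rng.normal(size=d)                       # stand-in for the t-th GAN iterate
    theta_ema = beta * theta_ema + (1 - beta) * theta
    theta_ma = ((t - alpha) / t) * theta_ma + (alpha / t) * theta

print(theta_ema)
print(theta_ma)
```

The EMA keeps a fixed effective window controlled by beta (oscillations are damped but never fully averaged out), whereas the uniform average weights all past iterates equally, which is exactly the distinction the reviews debate.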
This work analyses the use of parameter averaging in GANs. It can mainly be seen as an empirical study (although a convergence analysis of EMA for a concrete example also provides a minor theoretical result), but the experimental results are very convincing and could promote using parameter averaging in the GAN community. Therefore, even if the technical novelty is limited, the insights brought by the paper are interesting.
The authors proposed a normalization method that learns a multi-modal distribution in the feature space. The number of modes $K$ is set as a hyper-parameter. Each sample $x_{n}$ is distributed (softly assigned) to modes by using a gating network. Each mode keeps its own running statistics. 1) In section 3.2, it is mentioned that MN did not need or use any regularizer to encourage sparsity in the gating network. Is MN motivated to assign each sample to multiple modes evenly or to a distinct single mode? It would be better to show how the gating network outputs sparse assignments, along with a qualitative analysis. 2) Footnote 3 showed that individual affine parameters do not improve the overall performance. How can this be interpreted? If MN assumes a multi-modal distribution, it seems more reasonable to have individual affine parameters. 3) The overall results show that increasing the number of modes $K$ doesn't help that much. The multi-task experiments used 4 different datasets to encourage diversity, but K=2 showed the best results. Did you try to use K=1 where the gating network has a sigmoid activation?<doc-sep>The paper proposes a generalisation of Batch Normalisation (BN) under the assumption that the statistics of the unit activations over the batches and over the spatial dimensions (in the case of convolutional networks) is not unimodal. The main idea is to represent the unit activation statistics as a mixture of modes and to re-parametrise by using mode-specific means and variances. The "posterior" mixture weights for a specific unit are estimated by gating functions with additional affine parameters (followed by softmax). A second, similar variant applies to Group Normalisation, where the statistics is taken over channel groups and spatial dimensions (but not over batches). To demonstrate the approach experimentally, the authors first consider an "artificial" task by joining data from MNIST, Fashion MNIST, CIFAR10 and SVHN and training a classifier (LeNet) for the resulting 40 classes. The achieved error rate improvement is 26.9% -> 23.1%, when comparing with standard BN. In a second experiment the authors apply their method to "single" classification tasks like CIFAR10, CIFAR100 and ILSVRC12 and use large networks such as VGG13 and ResNet20. The achieved improvements when comparing with standard BN are on average 1% or smaller. The paper is well written and technically correct. Further comments and questions to the authors: - The relevance of the assumption and the resulting normalisation approach would need further justification. The proposed experiments seem to indicate that the node statistics in the single-task case are "less multi-modal" as compared to the multi-task case. Otherwise we would expect comparable improvements from mode normalisation in both cases? On the other hand, it should be easy to verify the assumption of multi-modality experimentally, by collecting node statistics in the learned network (or at some specific epoch during learning). It should also be possible to give some quantitative measure for it. - Please explain the parametrisation of the gating units more precisely (paragraph after formula (3)). Is the affine mapping X -> R^k a general one? Assuming that X has dimension CxHxW, this would require a considerable amount of additional parameters and thus increase the VC dimension of the network (even if its primary architecture is not changed). Would this require more training data then? I miss a discussion of this aspect.
- When comparing different numbers of modes (sec. 4.1, table 1), was the batch size kept constant? The authors explain the reduced effectiveness of higher mode numbers as a consequence of finite estimation (a decreasing number of samples per mode). Would it not be reasonable to increase the batch size proportionally, such that the number of samples per mode is kept constant?<doc-sep>Summary: Batch Normalization (BN) suffers from 2 flaws: 1) It performs poorly when the batch size is small, and 2) computing only one mean and one variance per feature might be a poor approximation for multi-modal features. To alleviate 2), this paper introduces Mode Normalization (MN), a new normalization technique based on BN. It uses a gating mechanism, similar to an attention mechanism, to project the examples in the mini-batch onto K different modes and then performs normalization on each of these modes. Clarity: The paper is clearly written, and the proposed normalization is well explained. Novelty: The proposed normalization is somewhat novel. I also found a similar paper on arXiv (submitted for review to IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018): M. M. Kalayeh, M. Shah, Training Faster by Separating Modes of Variation in Batch-normalized Models, arXiv 2018. I didn't take the time to read this paper in detail, but the mixture normalization they propose seems quite close to MN. Could the authors comment on this? Pros and Cons: + Clearly written and motivated + Tries to address BN's weakness, which is an important direction in deep learning - I found a similar paper in the literature - The proposed method aims to make BN perform better, but pushes it toward small-batch settings, which is where BN performs poorly. - Misses comparisons with other techniques (see detailed comments). Detailed Comments: 1. Multi-modality: It is not clear if the features are multimodal when performing classification tasks. Some histograms of a few features in the network would have helped motivate the proposed normalization. However, it does indeed seem to be an issue when training GANs: to make BN work when placed in the discriminator, the real and fake examples must be normalized separately, otherwise the network doesn't train properly. Moreover, when dealing with multimodal datasets (such as the one you created by aggregating different datasets), one can use the FiLM framework (V. Dumoulin et al., Feature-wise transformations, Distill 2018) and compute different means and variances for each dataset. How would the proposed method perform against such a method? 2. Larger scale: It would be nice to see how MN performs on bigger networks (such as ResNet50 or a DenseNet), and maybe on a more interesting fully-connected benchmark, such as the deep autoencoder. 3. Small batch regime: It seems that the proposed method essentially pushes BN towards a regime of smaller mini-batch size, where it is known to perform poorly. For instance, the gain in performance on the ImageNet experiments already drops quite a lot, since the training is divided across several GPUs (and thus the effective mini-batch is already reduced quite a lot). This effect gets worse as the size of the network increases, since the effective mini-batch size gets smaller. This problem also appears when working on big segmentation tasks or videos: the mini-batch size is typically very small for those problems. So I fear that MN will scale poorly on bigger setups.
I also think that this is the reason why you need to use an extremely small K. 4. Validation set: What validation sets are you using in your experiments? In section 4.1, the different datasets and their train / test splits are presented, but what about validation? Conclusion: Given the similarity with another paper already in the literature, I reject the paper. Also, it seems to me that the technique actually pushes BN towards a small-batch regime, where it is known to perform poorly. Finally, it misses comparisons with other techniques. Revision: After the rebuttal, I increased my rating to a 6. I feel this paper could still be improved by better motivating why multi-modality is important for single tasks (for example, by plotting histograms of activations from the network). I also think that the paper by Kalayeh & Shah should be presented in more detail in the related work, and also be compared to in the experimental setup (for example on a small network), especially because the authors say they have experience with GMMs.
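To be concrete about the histogram suggestion: collecting per-channel activation statistics to check for multi-modality only takes a few lines. A rough sketch, assuming a PyTorch model (the toy architecture and names are mine, not the authors'):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
activations = []

def save_activation(module, inputs, output):
    # store the pre-activation outputs of the first layer
    activations.append(output.detach().flatten())

handle = model[0].register_forward_hook(save_activation)
with torch.no_grad():
    for _ in range(10):                      # stand-in for iterating over a data loader
        model(torch.randn(128, 32))
handle.remove()

acts = torch.cat(activations)
hist = torch.histc(acts, bins=50, min=acts.min().item(), max=acts.max().item())
print(hist / hist.sum())                     # look for multiple peaks across the bins
```

Plotting such histograms before and after training, for a few layers, would directly support (or refute) the multi-modality assumption.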
The paper develops an original extension/generalization of standard batchnorm (and group norm) by employing a mixture-of-experts to separate incoming data into several modes and separately normalizing each mode. The paper is well written and technically correct, and the method yields consistent accuracy improvements over basic batchnorm on standard image classification tasks and models. Reviewers and AC noted the following potential weaknesses: a) while large on artificially mixed data, improvements are relatively small on single standard datasets (<1% on CIFAR10 and CIFAR100); b) the paper could better motivate why multi-modality is important, e.g. by showing histograms of node activations; c) the important interplay between the number of modes and the batch size should be more thoroughly discussed; d) the closely related approach of Kalayeh & Shah 2018 should be presented and contrasted with in more detail in the paper. Also, comparing to it in experiments would enrich the work.
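Since several of the comments above hinge on what the mode-wise normalization actually computes, here is a minimal sketch of a mode-normalization forward pass as I understand it: a soft gating over K modes, each normalizing with its own responsibility-weighted statistics. The shapes, the linear gate, and the omission of running statistics and affine parameters are my own simplifications, not the paper's code.

```python
import numpy as np

def mode_norm(x, w_gate, b_gate, eps=1e-5):
    """Mode normalization (training mode) for a batch of feature vectors.

    x: (N, C) activations; w_gate: (C, K), b_gate: (K,) gating parameters.
    Each sample is softly assigned to K modes, and each mode normalizes
    with its own responsibility-weighted mean and variance.
    """
    logits = x @ w_gate + b_gate                       # (N, K)
    g = np.exp(logits - logits.max(axis=1, keepdims=True))
    g /= g.sum(axis=1, keepdims=True)                  # soft assignments, rows sum to 1

    out = np.zeros_like(x)
    for j in range(w_gate.shape[1]):
        w = g[:, j:j + 1]                              # (N, 1) responsibilities for mode j
        n_j = w.sum() + eps
        mu = (w * x).sum(axis=0, keepdims=True) / n_j
        var = (w * (x - mu) ** 2).sum(axis=0, keepdims=True) / n_j
        out += w * (x - mu) / np.sqrt(var + eps)       # weighted mix of per-mode outputs
    return out

x = np.random.randn(32, 8)
y = mode_norm(x, 0.1 * np.random.randn(8, 2), np.zeros(2))
```

In this form, the extra cost over standard BN is the (C, K) gating matrix and the per-mode statistics; the running statistics and the affine output transform are omitted to keep the sketch short.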
### Summary The authors propose late-phase weights, a method of updating the weights near the end of training via a splitting and ensembling mechanism. They analyze the benefits in the noisy quadratic setting. The method improves validation performance on a range of image recognition tasks and on enwiki8. ### Comments * The weight interaction functions $h$ should be more explicitly defined rather than just described in the text. * The paper is overall well written and flows smoothly. * I think there should be more discussion of the choice of $T_0$. For example, in Table 1, why does SGD perform worse when $T_0=0$? It would be good to get a sense of robustness to this hyperparameter. * Good results on CIFAR. Late-phase weights are shown to boost performance over SGD and to be complementary with SWA. There are some benefits in the OOD setting as well. ### Recommendation / Justification I vote to accept the paper. The idea is interesting, well-motivated, and seems straightforward to incorporate into existing pipelines. However, the improvements seem modest in some settings (e.g. ImageNet), and for the best performance it seems like we should still stick to Deep Ensembles. ### Questions * On the ImageNet experiments, what is the validation accuracy of the pre-trained model? * Can you comment on the computational and memory complexity of your algorithm versus vanilla SGD? * In the comparisons between late-phase weights and SGD, do both algorithms consume the same amount of data? If so, this would be good to mention. * Could the entire network be treated as "late-phase weights"? Would this help performance? ### Minor comments * I would consider alluding to possible choices of the weight interaction functions $h$ when they are first introduced at the start of 2.1. * In Algorithm 1: How does the loss function consume three inputs? This is different from when it is initially described. * It's a bit unclear what is being compared in Figure 2. (increased score from 6 to 7) <doc-sep>Summary: The paper proposes a method to improve solutions found by SGD by ensembling subsets of weights in the late phase of training. A family of low-dimensional late-phase methods is analyzed and shown to improve generalization on CIFAR-10/100, ImageNet and enwik8. The authors also analyze the method in more tractable noisy quadratic settings. The authors' contribution is that, rather than keeping an ensemble, they utilize an efficient ensemble to guide SGD training and ultimately obtain a single model. Reason for score: While the paper discusses efficient ways of utilizing a late-phase weight ensemble to improve SGD training, the demonstrated benefit is not significant enough for practitioners to pursue the method. Without strong practical application potential, the merit of the proposed method is weak, since it does not obviously elucidate some aspect of neural network training. Pros: The paper is clearly written, and the proposed method is easy to understand. It is well structured, which helps to improve the clarity. The proposed method tackles a significant problem of the standard ensemble method, in which both training and inference computation can be quite costly. The paper's method only ensembles a subset of weights, so the added training cost is minimal, and since inference is done on the averaged weights, it is essentially a single model. Among the various late-phase schemes, the BatchNorm late phase seems to work well; BatchNorm is widely used in vision models, so this is easily applicable.
Also, since the late phase can be applied post-pretraining, it can be used to improve pre-trained models. As far as I can tell, the various experimental conditions are very well controlled and thoughtfully designed. Cons: The idea of weight averaging is not so novel, as duly noted by the authors. The main question for the paper is whether the proposed method is worth the effort. While all experiments show that the proposed method improves the baseline somewhat, deep ensemble baselines remain strong. Also, the quoted differences between methods do not imply a statistically significant effect (see Vincent Vanhoucke's article on reporting significant figures https://towardsdatascience.com/digit-significance-in-machine-learning-dea05dd6b85b). According to this article, for the results reported in Table 1 (CIFAR-10 with WRN), a significant figure with a 10k test set should be around 0.2%, and the differences between methods are at best marginal. This applies to most tables: except for Deep Ensemble's improvement, the other differences are not very significant. I wonder whether, as discussed by the authors, this is mostly because the benefit of ensembles comes from incorporating different modes, as argued in [Fort et al., 2020], rather than from a single mode. I imagine a single-mode ensemble could be beneficial when the variance within the mode is large; however, the models considered by the authors seem to have small model variance, which minimizes the effect of a technique that utilizes a single mode. While \\sigma_0 and T_0 are hyperparameters of the algorithm, no good way to determine them is explained. The role of section 3.1 is not clear. For one thing, the legend in Figure 1 is confusing, and the role of non-integer K is mysterious to me. I would suggest clarifying what the message of the section is in the context of understanding late-phase weight models. Nits and additional feedback: The anonymized link is neither in the main paper nor included as supplementary material. If the authors intended to include the code, this is a note that the code cannot be found by the reviewers. For models that do not use BatchNorm, I believe the most interest to practitioners would be in Transformer-based models. I wonder if rank-1 late-phase or LayerNorm late-phase would show improvements in this case. Was “Late-phase classification layers” ever evaluated or discussed in the main paper? I find some discussion in the appendix, but it seems to be missing from the main text. --- I thank the authors for their hard work addressing the issues raised by the reviewers. The authors have answered many of the issues pointed out (through improved performance and by showing robustness to hyperparameters), and I've increased my score from 5 to 6 and support accepting the paper.
There is also a theory section included, though I am generally unconvinced by results in such simple toy examples (such settings can usually be contrived to exhibit any desired behavior). Weaknesses: - The experimental section would be greatly strengthened by additional experiments for different models and settings. There are only 2 architectures tested on CIFAR-10, for example. It would also be informative to see the performance of these methods in "harder" settings -- for example, CIFAR-10 with fewer train samples. - The OOD uncertainty results could be expanded. Uncertainty estimation and robustness are some of the most relevant practical uses of ensemble methods, so it is especially important to evaluate ensembles in this context. Currently aggregate results are shown in Table 4, but it would be good to explicitly see, for example, how the performance of this method degrades with increasing CIFAR-10C corruption severity, as opposed to Deep Ensembles. Also, reporting the Mean Corruption Error (mCE) for each dataset individually would allow a standard comparison to prior methods. Comments which do not affect the score: It seems that starting the ensembling at a "late phase" in training is the main contribution of this work. This could be applied to any ensemble method, and you propose several explicit instantiations. It could help to focus the writing in terms of this contribution, and also to further investigate the role of T0 (the time at which ensembling starts). --- Edit after rebuttal: Increased score from 6 to 7. <doc-sep>To improve the generalization performance of SGD methods, this paper proposes an efficient ensemble-like approach which computes an average of an ensemble of SGD weights retrained from some late phase of the SGD dynamics. This idea differs from most recent ensemble-based approaches, which aim to average the predictions of the models. The paper focuses on some specific layers of neural networks in order to apply the late-phase training. The batch normalization layers are shown to be simple and effective. Some other layers are also analyzed, including a recently introduced rank-1 multiplicative matrix weights idea for fully-connected layers. Section 3 presents the numerical results and shows that the generalization of SGD is more or less improved on various benchmarks. An explanation of why the generalization improves, in relation to the flatness of the energy landscape, is also discussed. I find that this approach is quite sensitive to the choice of the hyper-parameters, such as the beginning of the late phase T0 and the noise perturbation sigma0. It is written in Section 2.1 that in practice … sigma0>0 yields a set of models … this results in improved final generalization. However, in the ImageNet result in Section 3.3, sigma0 equals 0. Thus, it is not conclusive that sigma0>0 is better. As the improvement in Section 3.3 seems marginal compared to the baseline and the standard deviation, it does not fully support the effectiveness of the batch normalization layers. I would recommend using some other datasets or models, but with a more consistent set of hyper-parameters. In terms of writing, I would recommend writing out the full algorithm of Alg. 1, at least in the Appendix, including the variants for SGD with momentum and Adam. SWA is also worth writing out clearly, as it is not clear to the reader. Is the DeepEnsemble result in Table 1 from SGD or SWA? This is not clear from the text.
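To illustrate the kind of explicit write-up I am asking for, here is roughly how I picture the late-phase scheme on a toy problem: fork K copies of a weight subset at step T0, perturb them with noise of scale sigma0, train them on shared mini-batches, and average them into a single model at the end. This is my own sketch under those assumptions, not the authors' Algorithm 1.

```python
import numpy as np

rng = np.random.default_rng(0)
X, y = rng.normal(size=(256, 10)), rng.normal(size=256)

def grad(w, idx):
    """Mini-batch gradient of the squared loss of a linear model."""
    Xb, yb = X[idx], y[idx]
    return 2.0 * Xb.T @ (Xb @ w - yb) / len(idx)

K, T0, T, lr, sigma0 = 4, 300, 600, 0.01, 0.05
w = np.zeros(10)                                  # base weights, trained normally until T0
members = None
for t in range(T):
    idx = rng.choice(len(X), size=32, replace=False)
    if t == T0:                                   # fork K late-phase copies with small noise
        members = [w + sigma0 * rng.normal(size=w.shape) for _ in range(K)]
    if members is None:
        w -= lr * grad(w, idx)
    else:                                         # every member sees the same mini-batch
        members = [wk - lr * grad(wk, idx) for wk in members]

w_final = np.mean(members, axis=0)                # average back into a single model
print(np.mean((X @ w_final - y) ** 2))
```

Spelling out the momentum/Adam variants and the interaction with SWA at this level of detail would remove most of the ambiguity I noted above.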
Overall, I think both the methodology and the writing need to be improved. ## The revisions made by the authors have addressed all my concerns.
This paper proposes to learn an ensemble of weights given a set of base weights from some point late in normal training. The authors apply this approach to a number of configurations and find modest performance improvements for normal test settings and larger improvements for out of distribution settings. While reviewers had some concerns about the size of the improvement relative to baselines, all reviewers agreed that the proposed method is interesting and will likely impact future work, especially given the new experiments provided by the authors. I recommend that the paper be accepted.
The paper presents a latent-planning RL model that learns control policies directly from pixels. To learn a better representation, it uses a recurrent-model contrastive learning approach, which enhances the representation learning performance of single-frame contrastive methods. This was tested on robotic control suites with challenging distracting backgrounds. The main contribution of the work is the addition of recurrence to the model and the extensive testing and explanation of why it tends to work better. ## Strong points The paper is well written and easy to follow. I particularly liked the straight-to-the-point and honest approach to the writing. The authors could easily have tried to mask their similarity with some other methods in the literature, but they were upfront about it. This made the paper, and its main contributions, much easier to understand. There is a substantial comparison with relevant models in the literature that already deal with the DCS. The results obtained are really impressive and set a new state of the art on interactive environments with distracting backgrounds. The ablations, specifically the ones concerning the usability of the recurrent contrastive method, are very useful. Especially the intuition about the negative samples per mini-batch. Further, the appendix provides even more ablations, making it very clear that a recurrent state has a major impact on the general performance. The visualizations also set a good standard for this field of control directly from pixels. They also show that the model is capable of directly detecting the object of interest. ## Weak points The proposed model is indeed very similar to Dreamer [1]. The authors specifically address one point from Dreamer: the contrastive learning strategy, which failed to produce better results as reported by [1]. However, I do believe that this closer look into this specific point can be useful for the community and insightful in general. ## Questions I would be curious to find more references on the idea that hard negatives need to be present in the mini-batch for contrastive learning to perform well. I wonder how this method would perform on a more open task than MuJoCo-style robotic control. Tasks with a different initial state, or with other elements in the scene that the agent needs to interact with, might have a negative impact on this method's results. [1] Hafner, Danijar, et al. "Dream to control: Learning behaviors by latent imagination." arXiv preprint arXiv:1912.01603 (2019). I think this is a very useful and well written paper. Even though the scope is small, the results are convincing, and it shows very clearly how to effectively use contrastive representation learning methods while learning to control directly from pixels. ## After Rebuttal After reading the other reviewers' comments and the rebuttal, I see that I missed some literature that needed further comparison. I think it is a good and well written paper, but I would lean towards rejection given this new information. <doc-sep>The paper proposes a recurrent state-space model that learns robust representations for robotic control. The proposed method builds on top of prior work on world models, which learn a latent dynamics model of the agent that can be used for planning and action selection.
Different from prior work such as Dreamer and SLAC which rely on pixel-based observation reconstruction, this paper highlights that a simpler contrastive loss for the next-observation prediction achieves better results **if** a recurrent state-space model is used for the latent space. Results are presented on the Distracting Control Suite benchmark and show strong improvements over prior approaches. # Strengths * The clarity of the paper is good and the approach has been described well. * The experiments have been well designed and indicate strong improvements in the model robustness (particularly Figure 2 and Table 3). Table 2 was helpful to understand the key design choices and their impact on final performance. * The mask visualizations in Figure 5 provide good clarity on how the robustness is achieved by the proposed method. * The ablation studies in the main paper and supplementary clearly highlight the benefits of individual components of the method. # Weaknesses ## Novelty * The novelty of the approach isn't quite clear. In Sec. 1 (page 2), the authors highlight that one of the key findings is that "contrastive learning can in fact lead to surprisingly strong robustness to severe distractions, provided that a recurrent state-space model is used". This finding itself is valuable in my opinion, but the method "CoRe" does not seem novel to me. Relative to Dreamer and SLAC, the key novelty appears to be the contrastive loss term from Eqn (1) which predicts future observations instead of auto-encoding. However, the idea of using a recurrent model to predict encodings of future observations (i.e., not an auto-encoder) has been studied in CPC [1], action-conditioned CPC (aka CPC|A) [2], Predictions of Bootstrapped Latents (aka PBL) [3], etc. CPC does not use action conditioning, but uses contrastive learning. CPC|A uses action conditioning + contrastive learning (very similar to CoRE). PBL uses action conditioning + reconstruction (similar to "Recon" baseline, but has an additional loss to predict state conditioned on observation representation). * Given the above, the methodological novelty of CoRe is not clear to me (particularly, relative to CPC|A). Is there something different about the contrastive loss in CoRe (relative to CPC|A) without which the performance degrades severely? Or is the novelty only in the observation that recurrent models are needed for contrastive loss to work well? Note, while some recent work like DBC does not use recurrent models, prior approaches already use them with contrastive loss. ## Intuition behind why recurrent state models are needed for contrastive learning * [End of section 1] The authors suggest that when recurrent models are used along with contrastive learning, the smoothness of the state-space ensures the presence of informative hard negatives in the same mini-batch of training. This isn't clear to me. Positives and negatives are obtained from the next-step real observation's encodings (just a single frame) and does not use the recurrent model. How is the smoothness of the state-space related to hard negatives? While the performance degrades without the recurrent model in Figure 3 (top), isn't this more likely due to a poorer state representation caused by the lower capacity (no RNN) and lack of information aggregation over time? This would affect all models, not just the CoRe. ## Missing baselines / related works A few related methods have not been compared with or discussed. 
* Augmented Temporal Contrast (aka ATC) [4] does not use recurrence, but uses future observation prediction and has been shown to achieve good improvements over CURL. * CPC|A [2] uses recurrence, future observation prediction (contrastive), and action conditioning. This is very similar to the proposed method and should be compared against. * PBL [3] uses recurrence, future observation prediction (reconstruction), and action conditioning. This method introduces an additional P(state | observation) term that results in bootstrapped learning (it might improve over the Recon baseline). ## Other concerns * Why is CoRe worse than PSE on 4 / 6 tasks for DAVIS (2 videos) in Table 1? Why is CoRe better with 60 videos? * Why are only 2 baselines used in Figure 4? Can the authors include the complete set? Do we observe similar trends as in Figure 2? * Are the findings from Figure 5 specific to CoRe? Or do other methods (like PSE) also learn similar masking functions? [1] CPC - Oord, Aaron van den, Yazhe Li, and Oriol Vinyals. "Representation learning with contrastive predictive coding." arXiv preprint arXiv:1807.03748 (2018). [2] CPC|A - Guo, Zhaohan Daniel, et al. "Neural predictive belief representations." arXiv preprint arXiv:1811.06407 (2018). [3] PBL - Guo, Zhaohan Daniel, et al. "Bootstrap latent-predictive representations for multitask reinforcement learning." International Conference on Machine Learning. PMLR, 2020. [4] ATC - Stooke, Adam, et al. "Decoupling representation learning from reinforcement learning." International Conference on Machine Learning. PMLR, 2021. I am concerned about the lack of clear novelty in the paper, and about other experimental issues highlighted in "Weaknesses". I will update my rating based on the authors' responses. <doc-sep>This paper presents CoRe, a Contrastive Recurrent state-space model, for robust model-based reinforcement learning for robotic control. Standard reconstruction-based state-space models are less robust in unstructured real-world scenarios because of high-frequency details. Instead, CoRe learns the state-space model with contrastive learning, which greatly improves robustness. In addition to this, a policy is learned with SAC. Experiments on the distracting control suite and several robotic control tasks demonstrate the better robustness of CoRe. Weaknesses: The major issue is that the proposed idea and the experimental setup are not novel. They highly overlap with a prior CoRL 2020 paper, Contrastive Variational Reinforcement Learning for Complex Observations (CVRL) [1], which, however, has not been cited in the submitted manuscript. They overlap in the following aspects: 1/ The idea is the same. CVRL also extends RSSM using contrastive learning and aims to improve the robustness of the learned model against real-world observations with high-frequency noise. Both of them use InfoNCE for contrastive learning and the same RSSM structure. In addition, both CVRL and the proposed method use the policy loss from Dreamer for learning the policy network. The resulting equations and loss functions of the two algorithms are almost the same. 2/ CVRL also experimented on natural MuJoCo games, which introduce moving backgrounds into the standard dm-control suites. This is exactly the same as the distracting dm-control suites used in the submitted manuscript.
Beyond what has been discussed in the paper, CVRL has mathematically shown that by replacing the generative observation likelihood with a contrastive objective, we can lower bound the original ELBO. There are some other weaknesses, but I believe the issues discussed above are sufficient to make it a clear rejection. The paper is highly similar to a prior work as mentioned above, and as a result, the contribution of the paper is very limited. I would vote for a rejection.
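For readers comparing the two papers, the contrastive term that both CoRe and CVRL rely on is essentially an InfoNCE objective between predicted latents and encodings of the observed next frames, with the rest of the mini-batch acting as negatives. The following is my own minimal NumPy sketch of that loss; the shapes and names are assumptions, not code from either paper.

```python
import numpy as np

def info_nce(pred, enc, temperature=0.1):
    """InfoNCE over a mini-batch: pred[i] should score highest against enc[i].

    pred: (B, D) latent predictions from the (recurrent) model.
    enc:  (B, D) encodings of the actually observed next frames.
    The other rows of `enc` in the same mini-batch serve as negatives.
    """
    pred = pred / np.linalg.norm(pred, axis=1, keepdims=True)
    enc = enc / np.linalg.norm(enc, axis=1, keepdims=True)
    logits = pred @ enc.T / temperature               # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                # positives sit on the diagonal

loss = info_nce(np.random.randn(64, 32), np.random.randn(64, 32))
print(loss)
```

The differences between the two methods would then have to lie in how the predictions are produced and in the training of the policy, which is exactly what a detailed comparison should spell out.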
Meta Review of Robust Robotic Control from Pixels using Contrastive Recurrent State-Space Models This work investigates a recurrent latent-space planning model for robotic control from pixels, but unlike some previous work such as Dreamer and RNN+VAE-based World Models, it uses a simpler contrastive loss for next-observation prediction. The authors presented results on the DM-control suite (from pixels) with distracting background settings. All reviewers (including myself) agree that this is a well-written paper, with a clear explanation of the approach. The main weaknesses of the approach are on the experimental side (see review responses to the authors' rebuttal by skrV and cjX3). Another recommendation from me is to strengthen the related work section to clearly position the work relative to previous work - there is clear novelty in this work, but this should be done to avoid confusion. The positive sign is that in the discussion phase even the very critical cjX3 increased their score and acknowledged the novelty over previous related work. In the current state, I cannot recommend acceptance, but I am confident that with the more compelling experiments recommended by the reviewers, and better positioning of the paper relative to previous work, this paper will surely be accepted at a future ML conference or journal. I'm looking forward to seeing a revised version of this paper published in the future.
This paper focuses on understanding the tail behavior of normalizing flows in a mathematical and statistical way. Motivated by Jaini et al. (2020)'s work on learning long-tailed distributions via triangular flows, this work proves that the marginal tailedness can be controlled by the tailedness of the marginals of the base distribution in flow-based models. Based on this theoretical insight, the authors propose a new algorithm that leverages a data-driven permutation scheme to enable correct tail behavior of the target distribution. Strength: 1. Solid and rigorous mathematical foundation. The theoretical proofs are very helpful and provide a clear insight, which explains the motivation and intuition. 2. Careful discussion of the related work and current limitations. The authors provide a good review and comparison of the existing works. Also, the limitations and future directions are helpful and insightful. 3. Well-written and easy to follow. The paper is well-structured and easy to follow. The theoretical proofs connect strongly with the experiments, which makes the paper much easier to understand. Weakness 1. The novel contribution is marginal. This work is mainly inspired by Jaini et al. (2020), who proposed to model long-tailed distributions via normalizing flows. Although the theoretical contribution is strong, the newly proposed mTAFs do not show a significant improvement, as shown in Table 1, compared with vanilla flows and TAF. 2. Lack of baseline methods and comparison. The SOTA flow models and architectures are not included in the baselines. Although the authors discuss this as potential future work, I still believe the comparison is necessary. Many papers have shown that affine coupling layers, such as in RealNVP, have limited expressivity in handling complex distributions. Either a light-tailed or a heavy-tailed distribution would be more challenging. So the worse performance might be due to the limited representation capability of the vanilla flows. 3. Experiments need to be improved with large-scale and high-dimensional datasets. Currently, only synthetic toy examples are provided to demonstrate the performance. The dimensionality is also low. If the proposed algorithm is able to scale to high-dimensional problems, that would greatly increase the impact. The paper provides a strong theoretical insight, but the experiments and baselines are weak. The contribution seems limited, with a marginal improvement compared with prior work. <doc-sep>The paper proposes an extension to Tail-adaptive flows for learning the tail behavior of target distributions using normalizing flows. The authors propose to learn the tail behavior by learning flows that match the tail properties of the marginal distributions. They achieve this by using a source distribution consisting of marginal distributions with tail properties matching the target distribution. The tail coefficient of the source distribution is set in a data-driven manner using estimators that can estimate this tail coefficient. **Pros**: Modelling tail phenomena (or rare events) is in general a challenging problem, made even more difficult in higher dimensions due to the lack of any definition of heavy/light tails in higher dimensions. The problem considered by the authors, namely capturing tail behavior with normalizing flows given the limitations that the chosen flow-layer architecture imposes, is an interesting and valid problem.
The proposed solution, which uses estimators of the tail coefficient to choose the base distribution, is interesting and a nice addition: it starts the optimization of $\\nu$ at a favorable location, closer to what is needed. Overall, the paper is very well written and easy to follow. The paper develops the idea in a natural and easy-to-understand manner. **Cons**: 1) **Motivation**: I found that the problem under consideration was not properly motivated, and this issue lingers throughout the paper, essentially making the paper come across as just an extension of Jaini et al. (2020). For example, it is not clear from the paper why capturing tails in variational inference paradigms is important. It can be shown that the error in modelling a probability density can be bounded arbitrarily well by learning the density properly on a bounded subset of its support. Thus, what are the drawbacks for the model if it is unable to capture the tail phenomena present in the problem? The authors do make statements regarding the limitations of the push-forward density given the choice of base density and the transformation map. However, I believe a more thorough discussion of the implications of these results (both in general and particularly for normalizing flows), and some recipe or ideas to alleviate these problems, would help the paper tremendously. 2) **Significance**: Another weakness, I believe, is the significance of the paper itself. The main result of the paper, i.e., specifying and ensuring that the marginal distributions have the correct tail coefficient, is a direct extension of the work of Jaini et al. (2020). In some ways, the optimization problem presented in Jaini et al. can already encompass the correct marginal tails by optimizing over the $\\nu$ vector. This weakness in significance is further amplified by the lack of strong empirical results. In the present form, the experimental results come across more as proofs of concept than as strong empirical support. Furthermore, I'd be interested to see the gain from starting the optimization with tail coefficients estimated using the various estimators versus random initialization and letting the process figure these out. Will the first step of using estimators lead to any significant gains? **Other comments:** - Definition 4, it seems, is a bit restrictive as well, since it completely side-steps any issues with differences in tail coefficients. For example, different marginals can be heavy-tailed but have different degrees of heaviness. In that case, is it correct to say the two distributions have the same tail behavior? - It seems in the experiments that if the tail estimator estimates that some marginals are light-tailed, a normal distribution is used. However, again there can be degrees of light-tailedness (see Jaini et al. 2020), e.g. uniform vs. normal. In these scenarios too, a lighter-tailed distribution cannot be pushed forward with Lipschitz maps to another light-tailed distribution with a higher tail coefficient. Thus, the problem of mismatched tails may still persist. Overall, the paper studies a pertinent and difficult problem. However, in its current form, the present manuscript provides only initial proofs-of-concept for potentially interesting ideas. These ideas need to be demonstrated and explored in more detail, both in theory and empirically, to make the manuscript stronger.
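For what it is worth, the data-driven step discussed above (estimate a tail coefficient per marginal, then pick a light- or heavy-tailed base component accordingly) is cheap to prototype. The sketch below uses a Hill-type estimator and a hard threshold; both the estimator choice and the threshold are my own assumptions for illustration, not the paper's exact recipe.

```python
import numpy as np

def hill_tail_index(x, k=100):
    """Hill estimator of the tail index from the k largest absolute values."""
    order = np.sort(np.abs(x))[::-1][: k + 1]
    return 1.0 / np.mean(np.log(order[:k] / order[k]))

def choose_base_marginals(data, k=100, heavy_threshold=4.0):
    """Per-dimension base choice: Gaussian for light tails, Student-t otherwise.

    The threshold is an arbitrary cut-off chosen only for this illustration.
    """
    spec = []
    for d in range(data.shape[1]):
        alpha = hill_tail_index(data[:, d], k)
        spec.append(("normal", None) if alpha > heavy_threshold else ("student_t", alpha))
    return spec

rng = np.random.default_rng(0)
data = np.column_stack([rng.normal(size=5000), rng.standard_t(df=2, size=5000)])
print(choose_base_marginals(data))   # the Student-t column should be flagged as heavy-tailed
```

Comparing such a pre-estimation step against simply learning the degrees of freedom from a random initialization would answer the question raised above about whether the estimator actually buys anything.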
<doc-sep>This paper introduces Marginally Tail-Adaptive Flows (mTAFs), which extend existing work on TAFs to better learn a generative model of heavy-tailed distributions. In particular, they propose a new type of normalizing flow (NF) that can learn marginals with mixed-tail behavior. **Strengths:** I thought the paper was interesting. NFs definitely do have their limitations despite their expressivity, and I don’t think the problem of generating distributions with a mixture of both heavy- and light-tailed marginals has been considered before. The paper provides a more general definition of heavy tailedness that extends existing work and uses it to construct their mTAF method. **Weaknesses:** That being said, I think the paper still requires a significant amount of work in order to demonstrate the efficacy of mTAF. - First, it’s not clear to me when you would run into situations where you want to generate distributions with mixed-tail behavior in the marginals. I understand that it would be desirable to generate distributions with heavy tails, but when do we encounter cases where we would like to do both? I think making this clear would definitely strengthen the paper, and could also guide some downstream evaluation tasks. - The experiments were probably the weakest aspect of the paper. mTAF was only evaluated on a synthetic dataset of 16 dimensions, which seems too small (even for tabular datasets commonly used for evaluating NFs). Also, the evaluations conducted in the experiments did not clearly demonstrate the advantage of mTAF over existing methods. For example in Table 1, does mTAF capture both the light-tailed and heavy-tailed components better than TAF/the base method? (this is hard to tell with just a simple average). Additionally, it’s hard to tell the difference between mTAF and TAF in Figures 2 and 3. I think the paper would be much stronger if the authors could find some compelling use cases of the method beyond synthetic Gaussians, and demonstrate that mTAF both captures all marginals more faithfully (via likelihoods) and can generate samples properly in the tails. - I also think a big limitation of the method is that mTAF essentially requires separating out the light-tailed marginals from the heavy-tailed marginals (the permutation step where such marginals are grouped into 2 categories). This seems particularly problematic as the real advantage of using NFs is to learn complicated dependencies between all dimensions of the data to best capture the overall density. This is also why I was asking whether there are real-world examples where such mixtures occur, and whether this kind of ``independence assumption’’ makes sense in these scenarios. It seems like mTAF is very restrictive, and I am wondering if maybe that is why it doesn’t significantly outperform TAF and the vanilla baseline. **Questions:** - I’m also curious if the method performs worse relative to conventional flows (e.g. MAF) when the distribution in question is only light-tailed or heavy-tailed. It seems like if the tail index estimator is correct, mTAF should return the correct “tail behavior” of each marginal and generate either a light-tailed or heavy-tailed distribution only. Is this the case? Or does mTAF do a worse job at modeling, say, the light-tailed components, etc? - Additionally, the vanilla baseline has pretty high variance and sometimes seems to perform on par with TAF -- would the authors elaborate upon this point? 
**Miscellaneous/minor typos:** - "Allow [us] to" in Section 2.1 - "By out theory" in Section 5 Although the paper extends an existing approach to learn generative models of distributions exhibiting mixed-tail behavior, the paper has a number of weaknesses: (1) it's not clear when such mixed-tail behavior arises in the real world; (2) the class of flows considered is quite restrictive (affine, coupled with a permutation that requires light-tailed and heavy-tailed marginals to be split into two consecutive blocks); and (3) the empirical results are lacking: they only provide experiments on datasets of dim=16. <doc-sep>The paper is composed of two main parts: 1) a theoretical section where the authors prove that (Lipschitz) triangular normalizing flows cannot map either heavy- or light-tailed base distributions into target distributions with different tail indices for different marginals; 2) an algorithmic section where the authors introduce a new method for modeling distributions with different tail indices for different marginals. Both the theoretical and the algorithmic parts are a straightforward extension of the analysis and methods introduced in the paper "Tail-adaptive flows" (Jaini, 2020). Strengths: - The addressed problem is important. Vanilla normalizing flows (like other deep density estimators) are poor at tail estimation, which limits their applicability to many problems in science and engineering. - The paper is very well written and serves as a good introduction to both normalizing flows and heavy-tailed distributions. - The offered solution is technically sound. - The theoretical analysis is sound and convincing. Weaknesses: - The main weakness of the paper is its very limited novelty. All theoretical analysis and methodological improvements are relatively minor modifications of the work in (Jaini, 2020). The present paper does not contain major new ideas. - The proposed method is somewhat inelegant, as it requires a separate off-the-shelf tail estimator prior to the flow training. I do agree that it could be the right approach in many applications; however, it is a rather obvious idea, not really worthy of a top-conference publication. I highly appreciate the clarity and technical soundness of the paper. However, I cannot recommend acceptance given the very limited novelty.
This paper addresses the performance of normalizing flows in the tail of the distribution. It does this by controlling tail properties in the marginals of the high-dimensional distribution. The paper is well motivated, and the key theoretical insight has merit. However, the general perspective and methodology appear to be incremental relative to past results. Furthermore, some concerns over correctness remain after discussion with the authors. Also, clear baselines and more realistic settings are lacking in the experimental results. Thus, while the paper generally has promising ideas on a pertinent topic, it does not appear to be developed enough to merit dissemination.
This is a very interesting paper, and it suggests a novel way to think about "implicit regularization". The power of this paper lies in its simplicity, and it is inspiring that such almost-easy arguments can be made to yield so much insight. It suggests that minimizers of the Bregman divergence are an alternative characterization of the asymptotic end-points of "Stochastic Mirror Descent" (SMD) when it converges. So the choice of the strongly convex potential function in SMD is itself a regularizer! It's a very timely paper given the increasing consensus that "implicit regularization" is what drives a lot of deep-learning heuristics. At its technical core, this paper suggests a modified notion of Bregman-like divergence (equation 15), which on its own does not need a strongly convex potential. Then the paper goes on to show that there is an invariant along the iterations of SMD which involves a certain relationship (equation 18) between the usual Bregman divergence and their modified divergence. I am eager to see if such relationships can be shown to hold for more complicated iterative algorithms! But there are a few points in the paper which are not clear and probably need more explanation, and let me list them here (and these are the issues that prevent me from giving this paper a very high rating despite my initial enthusiasm). 1. Can the authors explain how the minimax optimality result of Theorem 6 (and Corollary 7) is related to the main result of the paper, which is probably Propositions 8 and 9? Is that minimax optimality a different insight, separate from the main line of argument (which I believe is Propositions 8 and 9)? 2. Is the gain in Proposition 9 over Proposition 8 all about using loss convexity to ensure that SMD converges and w_\\infty exists? 3. The paper has highly insufficient comparisons to many other recent papers on the idea of "implicit bias", like https://arxiv.org/abs/1802.08246, https://arxiv.org/abs/1806.00468 and https://arxiv.org/abs/1710.10345. It seems pretty necessary that there be a section making a detailed comparison with these recent papers on similar themes. <doc-sep>The authors look at SGD and SMD updates applied to various models and loss functions. They derive a fundamental identity (Lemma 2) for the case of a linear model with squared loss + SGD, and in general for non-linear models + SMD + non-squared loss functions. The main results shown are: 1. SGD is optimal in a certain sense for squared loss and a linear model. 2. SGD always converges to the solution closest to the starting point. 3. SMD, when it converges, converges to the point closest to the starting point in the Bregman divergence. The convergence of the SMD iterates is shown for certain learning scenarios. Pros: Shows implicit regularization properties for models beyond the linear case. Cons: 1. The notion of optimality is w.r.t. a metric that is pretty non-standard, and it was not clear to me why the metric is important to study (the ratio metric in eq 9). 2. The result is not very surprising, since SMD is pretty much gradient descent w.r.t. a different distance metric. <doc-sep>Optimization algorithms such as stochastic gradient descent (SGD) and stochastic mirror descent (SMD) have found wide applications in training deep neural networks. In this paper the authors provide some theoretical studies to understand why SGD/SMD can produce a solution with good generalization performance when applied to highly over-parameterized models.
The authors developed a fundamental identity for SGD with the least-squares loss function, based on which the minimax optimality of SGD is established, meaning that SGD chooses the best estimator that safeguards against the worst-case disturbance. Implicit regularization of SGD is also established in the interpolating case, meaning that the SGD iterates converge to the model with minimal distance to the starting point among the set of models with no errors. Results are then extended to SMD with general loss functions. Comments: (1) Several results are extended from the existing literature. For example, Lemma 1 and Theorem 3 have analogues in (Hassibi et al. 1996). Proposition 8 was recently derived in (Gunasekar et al., 2018). Therefore, it seems that this paper has a somewhat incremental nature. I am not sure whether the contribution is sufficient. (2) The authors say that they show the convergence of SMD in Proposition 9, while (Gunasekar et al., 2018) does not. It seems that the convergence may not be surprising, since the interpolating case is considered there. (3) Implicit regularization is only studied in the over-parameterized case. Is it possible to say something in the general setting with noise? (4) The discussion of the implicit regularization in the over-parameterized case is a bit intuitive and based on strong assumptions, e.g., that the first iterate is close to the solution set. It would be more interesting to present a more rigorous analysis with relaxed assumptions.
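For readers less familiar with SMD, the update analyzed in these papers is compact: maintain a dual variable z = ∇ψ(w), take a stochastic gradient step in the dual space, and map back through the inverse mirror map. Below is a minimal sketch with the potential ψ(w) = (1/q) Σ_i |w_i|^q on an over-parameterized least-squares problem; the choice of potential, problem, and step size are mine, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
A, b = rng.normal(size=(20, 50)), rng.normal(size=20)   # over-parameterized least squares
q = 3.0                                                  # potential psi(w) = (1/q) * sum_i |w_i|^q

def mirror(w):                  # nabla psi
    return np.sign(w) * np.abs(w) ** (q - 1)

def mirror_inv(z):              # inverse of nabla psi
    return np.sign(z) * np.abs(z) ** (1.0 / (q - 1))

w = 0.01 * rng.normal(size=50)
z = mirror(w)
lr = 1e-3
for t in range(20000):
    i = rng.integers(len(b))                             # pick one data point at random
    g = (A[i] @ w - b[i]) * A[i]                         # stochastic gradient of the squared loss
    z = z - lr * g                                       # SMD: step in the dual (mirror) space
    w = mirror_inv(z)                                    # map back to the primal variable
print(np.abs(A @ w - b).max())                           # residual; should shrink toward zero
```

The papers' claim is then that, among all interpolating solutions, the w this procedure converges to is the one closest to the initialization in the Bregman divergence of ψ.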
The authors give a characterization of stochastic mirror descent (SMD) as a conservation law (17) in terms of the Bregman divergence of the loss. The identity allows the authors to show that SMD converges to the optimal solution of a particular minimax filtering problem. In the special overparametrized linear case, when SMD is simply SGD, the result recovers a recent theorem due to Gunasekar et al. (2018). The consequences for the overparametrized nonlinear case are more speculative. The main criticisms are around impact; however, I'm inclined to think that any new insight on this problem, especially one that imports results from other areas like control, is useful to incorporate into the literature. I will comment that the discussion of previous work is wholly inadequate. The authors essentially do not engage with previous work, and mostly make throwaway citations. This is a real pity. It would be nice to see better scholarship.
In this paper, the authors develop a probabilistic programming framework for Stein variational gradient descent and its variants using different kinds of kernels, i.e. nonlinear kernels or matrix kernels. Simple experiments are included to show that the repository is effective and scalable for various problems. The following are a few of my questions and comments: 1. How does the new implementation compare with other frameworks using black-box variational inference? For example, what is the training speed compared with previous frameworks such as Edward on large-scale dataset tasks? Also, the report does not give us a thorough guide to the performance of each kernel on different tasks. 2. The authors mention that the framework can be extended to use other objective functions such as the Rényi ELBO, tail-adaptive f-divergence, or Wasserstein pseudo-divergence. I am extremely confused about this part: since there is actually no objective function for SVGD-based methods (unless you design a new loss based on KSD or related quantities), how is it possible to combine other objective functions with SVGD? It would be great if the authors wrote down the derivations and provided a detailed discussion. 3. Does the current framework implement amortized SVGD and other related Stein papers that can be utilized to train neural-network-based applications such as Stein-VAE, Stein-GAN or kernel Stein generative modeling [1, 2, 3]? This could be important, since it would be quite helpful for many other applications such as meta-learning. Also, the authors give the public code link of their implementation in the paper, which may expose their identity, but I am not sure if this violates the anonymity requirement of ICLR submissions. [1] Feng, Yihao, Dilin Wang, and Qiang Liu. "Learning to draw samples with amortized stein variational gradient descent." arXiv preprint arXiv:1707.06626 (2017). [2] Wang, Dilin, and Qiang Liu. "Learning to draw samples: With application to amortized mle for generative adversarial learning." arXiv preprint arXiv:1611.01722 (2016). [3] Chang, Wei-Cheng, et al. "Kernel Stein Generative Modeling." arXiv preprint arXiv:2007.03074 (2020).<doc-sep>Summary ======== The paper shows how a particle-based nonparametric variational inference methodology known as Stein Variational Inference is integrated into a full-featured Probabilistic Programming Language, NumPyro. The paper goes into a fair amount of detail describing a number of enhancements that have been made to NumPyro using the general technique of particle-based representation of non-parametric approximating distributions. They describe how geometric transforms of the parameter space can fit into their scheme and how matrix-valued kernels can be integrated. Also, they describe a new variant of Stein VI which they call ELBO-within-Stein. This introduces a new line of research for Stein VI. They also describe a Stein Mixture extension to Deep Markov Models (SM-DMM) and demonstrate the latter method on a very large dataset. Strengths ========= - Integrating a more powerful variational approximation has clear benefits for probabilistic inference. And integrating this into a full-featured PPL allows users of Bayesian modeling to get access to a cutting-edge technique with minimal programmatic effort. - The integration of Stein VI into NumPyro seems to have been very well designed, given the very large number of ideas that have become easy to add, including some innovative approaches.
- Showing state-of-the-art results on the high-dimensional JSB-Chorales dataset is a very impressive achievement for any PPL, and it certainly lends credence to this work. Weaknesses ========== - The only claim in the paper that is well supported is that the authors have extended NumPyro with Stein VI. - The presentation style in the paper sometimes fails to draw a clear distinction between implementations of prior work in NumPyro versus new innovations. It is somewhat unclear whether the authors are making claims about the following points in their paper: * Non-linear Stein * Matrix-valued kernels * Parameter Transforms * Enumeration-based integration of discrete random variables - The objective function of ELBO-within-Stein is not well motivated (see discussion below), and there is no direct comparison to the previous Stein Variational Gradient Descent, which this method seeks to improve. - There is no way to objectively evaluate the results on the first three experiments. Recommendation =============== Reject Rationale ======== The experiments don't directly validate the main innovations of the paper. Supporting Arguments ==================== - The main innovation in this paper appears to be the ELBO-within-Stein method. This appears to be different from SVGD (Stein Variational Gradient Descent). The difference appears to be that in the current paper both the entropy term and the Stein repulsion term are in the general objective (page 5, first equation), unlike in SVGD where the entropy term is not there. Philosophically, it doesn't look right to include both of these terms, as they serve the same purpose (preventing the collapse of the variational approximation onto the mode). I could be mis-reading these equations, but if there are other differences the authors should clearly state and motivate them. Most importantly, the authors should show an experiment directly comparing to SVGD. [SVGD reference: Qiang Liu and Dilin Wang. Stein variational gradient descent: A general purpose Bayesian inference algorithm. Neural Information Processing Systems (NIPS), 2016.] - The Neal's funnel experiment should show the posterior marginal, which is well known, so that the reader can judge whether the samples are of good quality. - It is not clear how to interpret the dual moons plot. What are we looking at in this plot (right plot, figure 2)? The posterior density or the true density? How do we know if this is a good posterior? - For the LDA example there don't seem to be any results. Questions for authors =================== - Please provide motivation for the modification to the SVGD objective. - Please clearly state which of the many enhancements to NumPyro are being claimed as novel extensions. - Any results worth sharing for the LDA? Additional Suggestions Not Part of the Review Rating ============================================= - The abstract mentions that this work is better than Stochastic VI, but this claim is not actually supported explicitly. I had to read many of the referenced papers to realize that Jankowiak and Karaletsos (2019) had implemented a version of SVI. I'm assuming that this is what the abstract was referencing. Please do make such connections explicit! - In Figure 1b, variables X and y are not actually used in the guide. Space permitting, you could make a note as to why they are there in the guide. - The first paragraph of the introduction mentions nuisance variables in Fig 4a.
Not clear which variables in 4a were nuisance variables.<doc-sep>### Summary This paper introduces EinStein VI: a lightweight composable library for Stein Variational Inference (Stein VI). The library is built on top of NumPyro and can take advantage of many of NumPyro's capabilities. It supports recent techniques associated with Stein VI, as well as novel features. The paper provides examples of using the EinStein VI library on different probabilistic models. ### Strengths I'm not aware of other PPLs that support Stein Variational Inference. EinStein VI can provide an easier way to compare different Stein VI algorithms, and make research in the area easily reproducible.   ### Concerns The paper states that it provides examples that demonstrate that EinStein VI's interface is easy-to-use and performs well on pathological examples and realistic models. While it is true that there are several examples described, in my opinion there are not enough details to support the claims that EinStein VI is easy to use and performs well. A concrete comparison between EinStein VI and other methods is missing. It would have been helpful to have, for example, some concrete numbers (e.g. time taken to do inference, posterior predictive checks, posterior mean convergence plots, etc) that showcase why it is useful to use Stein VI for those examples, as opposed to other, already existing methods. Another concern is that it is difficult to judge from the paper what the difference to (standard) NumPyro is. There is only a high-level explanation of the examples in the paper, so it's hard to imagine what the actual code looks like. Most importantly, I would have liked to see a comparison between EinStein VI code and what the code would have looked like without EinStein VI. ### Reasons for score Unfortunately, there is not enough to go on in this paper, which is why I recommend reject. There is no strong evidence to support either the usability of the system (through elaborate examples and contrasting EinStein VI to other systems) or its performance (through experiments). This paper will be much stronger, and will have a better chance of reaching more people, if it includes either 1) more elaborate code examples that demonstrate that using EinStein is indeed better and easier than vanilla NumPyro, or 2) experiments comparing different Stein VI techniques to other inference algorithms, as evidence that a dedicated Stein VI library is indeed empowering our inference toolkit. However, I do appreciate that writing a paper about tools / libraries is difficult, as the contribution of tools is typically a longer-term improvement in the workflow of developing new methods and techniques. I am open to increasing my score during rebuttal, depending on the answers of the questions listed below. ### Questions for the authors Why has Stein VI not been implemented in PPL systems previously? Is it a matter of timing, or is there something particularly challenging about integrating Stein VI into a PPL? The paper mentions "compositionality" several times. I was a little confused about what you mean by that: can you explain, perhaps with an example? The paper mentions novel features (second to last paragraph page 8): can you elaborate? The paper shows an example of using NeuTra in combination with Stein VI. Can you elaborate on the kind of problems that NeuTra won't be able to handle on its own? 
What about more lightweight approaches that can be applied in the context of probabilistic programming, such as "Automatic Reparameterisation of Probabilistic Programs" (Gorinova, Maria I., Dave Moore, and Matthew D. Hoffman. ICML 2020)? When will we see benefits of *both* applying a reparameterization that improves the posterior geometry, *and* using a more sophisticated inference algorithm like Stein VI? ### Suggestions for improvement and typos that have not affected the score I gave to the paper Perhaps the most important change that would improve the paper is adding more concrete examples that would showcase the importance of using EinStein VI as opposed to simply NumPyro / other libraries. It would be nice to see a model where Stein VI gives us better inference results than a range of other algorithms / techniques and compare the code to what the user would have to write otherwise to achieve the same results. The examples of composing Stein VI with reparameterization / marginalization in NumPyro can be improved by comparing the results to Stein VI without reparameterization / marginalization and to other inference algorithms with reparameterization / marginalization. Typos: * last line of the abstract should be 500 000 as opposed to 500.000. * URL in footnote 3 does not lead to the correct page <doc-sep>Unfortunately the authors link directly to the code, and the code is not anonymous. This might be a desk-reject as this is not a double blind review. This work is a description of a library for developing variational inference algorithms using the ELBO-within-Stein framework developed in Nalisnick et al. (2017). The library is evaluated on on Neal's funnel and two moons, and on a polyphonic music dataset. Comments - Nalisnick et al was published in 2017. I assume this was a typo on the authors' part. - Table A in the Appendix, describing different kernels, should include a column with computational and memory requirements for each kernel if they differ. This can affect the scalability. - The work describes LDA but does not evaluate it. It would be helpful to include held-out log likelihood numbers on a standard topic modeling dataset such as 20 newsgroups. This would help people compare to prior work. - Similarly, the library is evaluated by fitting to a standard polyphonic music dataset. Please report these numbers in a table, alongside a reasonable approach using standard variational inference and Stein VI (using the library) side-by-side. For example, the numbers here are much better, and use standard variational inference with the KL divergence: https://papers.nips.cc/paper/6039-sequential-neural-models-with-stochastic-layers.pdf (Stein Variational Inference can be difficult to understand, as can be Pyro, which is built on jax/pytorch, and the library developed here is built on top of all of these moving parts. Before embarking on using the library, a machine learning researcher should be very convinced that all this additional effort is worth it. Benchmarking this new library against existing work is important and will go a long way toward justifying its existence.) - The references are very poorly formatted. Please clean up.
All reviewers have carefully reviewed and discussed this paper. They are in consensus that this manuscript requires a substantial revision. I encourage the authors to take these experts' thoughts into consideration when revising their manuscript.
The authors aim to reduce the gap between clean accuracy without adversarial training and with adversarial training. To improve the robustness-accuracy trade-off, the authors introduce Helper-based Adversarial Training. The main idea is to use adversarial examples $\\mathbf{x}_{\\text{adv}} = \\mathbf{x} + 2 \\mathbf{r}$, where $\\mathbf{r}$ is standard PGD adversarial perturbation, as helper adversarial examples. The model is trained to classify these helper adversarial examples as the adversarial label predicted by the model trained without adversarial training. In the experiments, the authors show that HAT improves clean accuracy and robust accuracy on CIFAR-10 and CIFAR-100 datasets when compared with TRADES defense. ### Strengths: - Extremely simple method, which can be useful for practitioners. - A slight improvement over baseline defences on CIFAR-10 and CIFAR-100 datasets. ### Weaknesses: - The method is based on intuition and the authors didn't provide any theoretical justifications for the proposed defence. Based on my intuition, I believe the method is fundamentally flawed as its assumptions are incorrect. For example, it is incorrect to assume that all adversarial examples with perturbations $2 \\epsilon$ should be labelled with its adversarial label. - The authors should compare the robustness of their method for moderate size perturbations as well, e.g. $\\epsilon = 12/255$ and $\\epsilon = 16/255$ on CIFAR-10 and CIFAR-100. It is quite likely that their method will be less robust for moderate size perturbations. - The overall procedure is ad-hoc and requires training and storing the model trained without any regularization first. The model is then finetuned with the proposed training procedure. - Some references are missing and the comparison is outdated. The method should also be compared with [1], [2] and [3] defenses, which improve upon Trades defense. - The experimental comparison can be improved. The authors evaluated the models with AutoAttack. The authors can also compare their method against GAMA [4] attack. The authors should also include the gradient masking checks in the experimental results or at least discuss gradient masking. [1] Amirreza Shaeiri, Rozhin Nobahari, and Mohammad Hossein Rohban. Towards deep learning models resistant to large perturbations. arXiv preprint arXiv:2003.13370, 2020. [2] Jingfeng Zhang, Xilie Xu, Bo Han, Gang Niu, Lizhen Cui, Masashi Sugiyama, and Mohan Kankanhalli. Attacks which do not kill training make adversarial learning stronger. In International Conference on Machine Learning, pp. 11278–11287. PMLR, 2020. [3] Dongxian Wu, Shu-Tao Xia, and Yisen Wang. Adversarial weight perturbation helps robust generalization. Advances in Neural Information Processing Systems (NeurIPS), 2020. [4] Gaurang Sriramanan, Sravanti Addepalli, Arya Baburaj, and R Venkatesh Babu. Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses. In Advances in Neural Information Processing Systems (NeurIPS), 2020. ### Update after the author's response The authors addressed all my concerns. In particular, the authors: - Added adversarial robustness results with $\\epsilon = 12$. - Added adversarial robustness results with other attacks. Overall, based on the new results for the larger perturbations and the author's comments to other reviewers, I am discarding my doubts about the paper's approach, that it is somewhat ad-hoc. I believe the empirical contributions of this work are significant and novel. 
Therefore, I recommend accepting the revised paper. The authors proposed a simple technique to improve clean accuracy. However, the method is based on intuition, which in my opinion, is flawed: not all large perturbations should be labelled with its adversarial label. The authors should provide a theoretical justification for their intuition. Besides that, the experimental comparison is outdated with few recent defenses missing, which improve upon TRADES defense. ### Update after the author's response I sincerely thank the authors for addressing the majority of my comments and concerns. The experimental results are undeniable and clearly demonstrate the advantages of the proposed technique. Based on the new results for the medium perturbation $\\epsilon = 12$ and additional experiments with other attacks, I tend to overlook my doubts about the paper's approach. I recommend accepting the revised version of the manuscript. <doc-sep>The paper highlights the presence of excessive invariance in the prediction of robust models along the initial adversarial directions. Initial adversarial directions refers to the directions in which adversarial images generated using a standard trained model are present. Based on this hypothesis the authors propose a training method where the excessive invariance is minimized using the cross entropy loss between the prediction(made using the standard trained model) of larger epsilon adversarial image and the prediction of the adversarial image, in addition to the TRADES loss formulation. This additional loss term indeed improves the accuracy-robustness trade-off by giving a significant boost in clean accuracy along with a slight boost in adversarial robustness as compared to the existing methods. Overall the paper is well written and easy to follow. Strengths: * The paper is well written and easy to understand. The motivation behind the design choices is clear. All the related works are properly addressed and the baselines are also strong. * The paper achieves a significant boost as compared to existing methods on strong attacks like Auto-Attack. The approach shows consistent gains across multiple datasets. Weaknesses: * I think the results shown in Figure-4 are quite expected as initially when the perturbations are generated from a standard trained model, they will be non-smooth similar to random noise. Thus the final model will have high invariance to the directions of these random noise as compared to the perturbations which are smooth in nature and have features. These smoother perturbations would be generated by the adversarially trained models and thus the model would be easily fooled as we go in the direction of these perturbations. This is addressed by the Final Margin of figure 4-c. Although the proposed approach reduces the invariance in the directions of the initial perturbations which are similar to random noise, (as shown in table 1, 5 and stated in section 3), I think ideally the model should focus on reducing the invariance in the direction of smooth perturbations which have semantic features. Could the authors clarify a bit on this. I dont think it would matter much if the model will reduce the invariance in the directions of initial perturbations(similar to random noise) since they won't change the semantics of the image to some other class image. 
In contrast, the smooth perturbations generated using an adversarially trained model do carry some semantics and have the potential to change the semantics, and hence the true class, of an image, as shown in [1]; it is therefore in these directions that reducing invariance is desirable. Some minor concerns: * Could the authors clarify how they plotted the class boundaries in figure 3? I think this is plotted by examining the predictions of all the points possible in the 3D space? * In table 5 it is shown that the models are trained so that they have the same robustness. I think this is not a good idea for a fair comparison. Could the authors show the same table with the median margin in R-init, R-5 and R-15 where the models do not have any constraint on having the same robust accuracy? If possible, could the authors share the results of Table 1 for R-5 and R-15 as well? * I think the training budget for the results reported in Table 6 is only 50 epochs. If possible, could the authors share the results of HAT for all three datasets for a 200-epoch training budget? This will help in better understanding the proposed approach. I think the activation used without additional data is ReLU. If this is true, could the authors also share the CIFAR10 200-epoch results without additional data for the SiLU activation? In case the authors have used SiLU, can they share the results with ReLU? * If possible, could the authors share the PRN18 and WRN-28-10 results for CIFAR10 and CIFAR100 (if possible) as shown in table 4 using the ReLU activation? This would help in understanding the influence of the SiLU activation. * An ablation study using different perturbation bounds for getting the helper label in Algorithm-1 can also help a lot in better understanding the proposed approach. [1] Tramèr, F., Behrmann, J., Carlini, N., Papernot, N., & Jacobsen, J. (2020). Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations. ArXiv, abs/2002.04599. Overall I think the paper is well written. It shows a significant boost compared to existing art and has some minor issues at present. If the concerns are properly addressed, I am willing to increase my score. <doc-sep>**Few sentences summary**: the paper proposes a new training loss for adversarial training in the Lp-norm setting. Based on the observation that adversarial training increases the classification margin in a disproportionate manner compared to the nominal training setting, the authors introduce additional samples called "helpers" to reduce the classification margin. The helper samples, which are samples translated further away in the worst adversarial direction, can change labels compared to the adversarial samples. Helper samples get assigned labels from a standardly trained model, thus acting as a constraint coming from this standardly trained network. Regarding the **results** and **contributions**: * Novel method using helpers to define a new training loss for adversarial training. * On par or better results in robust accuracy compared to the SOTA training loss TRADES on CIFAR-10/100, SVHN, TinyImageNet and a subset of ImageNet. * Much improved results in clean accuracy compared to TRADES, thus reducing the gap between clean and robust accuracy, which is essential for the practical application of Lp-norm models. * Clear analytical tools based on the margin analysis to investigate the proposed method and how/why it works. **Strengths**: * Clean, original and novel idea leading to good experimental results.
* Very well written paper with a clear story, and with clear arguments and experiments to support it. * Very extensive experiments in the main paper and in the appendix. They give a lot of intuition about the problem and Lp-norm robustness in general. * The proposed analytic tools are useful beyond the analysis of the proposed algorithm. Big plus for the toy problem giving interesting intuitions, the margin analysis in Figure 4 and the per-epsilon analysis in Figure 6. * The code is attached in the supplementary materials, and in any case the experimental details and code are very well described in the paper. Hence, the paper seems reproducible. **Weaknesses/Suggestions/Questions**: 1) In the bullet points on page 2 and other parts of the paper, please specify whether the accuracy is the "clean" or "robust" accuracy. Otherwise, there is an ambiguity. 2) It would be great to see how the proposed method performs compared to TRADES on larger models such as WRN-70-16. Maybe by fine-tuning an already pre-trained model to avoid expensive computations. 3) In Figure 6, maybe specify that the variable $\\epsilon$ on the x-axis is used for the test-time robust accuracy and not the training procedure. 4) In Figure 13 in the appendix, why do the TRADES and HAT ($\\gamma=0$) curves not match while they are the same method? Is the difference due to the variance in the results? 5) (Very optional but curious to check) I would be curious to see the performance of an alternative helper: $x' = x + r + r'$ where $r'$ is the adversarial perturbation computed at $x + r$. In this way, helper samples could possibly look more "natural" than when using $x + 2r$, thus possibly improving the final results. It would require twice as much computation but would be interesting to check. Paper enjoyable to read, with extensive experiments supporting a clear and novel idea leading to improved results. The authors also propose great analytical tools to investigate their hypothesis. Hence, I vouch for acceptance. <doc-sep>This paper proposes a helper-based adversarial training (HAT) method to alleviate the trade-off between robustness and accuracy. Empirical evaluations are done on several datasets, and under AutoAttack and common corruptions. Strengths: - The writing is easy to follow, and the illustration of the idea of HAT is clear and reasonable. - I especially admire the empirical evaluations in this paper, which involve large-scale experiments using DDPM-generated data and 80M TI extra data. The improvements are significant, and the sanity check for, e.g., gradient masking is also presented. Weaknesses: - The modifications introduced in HAT are simple (which is good), but they depend on an assumption that ``the model should not be robust beyond the threat model``. Namely, under an 8/255 $\\ell_{\\infty}$-norm threat model, an adversarial example with 16/255 perturbation is encouraged by HAT to fool the model, while the label of the adversarial example may not change. For me, this assumption is quite ad-hoc, and introducing another standard model $f_{\\theta_{\\textrm{std}}}$ does not seem like an elegant solution. In conclusion, I think the pros and cons of this paper are quite clear. Strong empirical evaluations and promising improvements, but the method itself is somewhat ad-hoc and not very principled. So I would like to recommend acceptance, but the method could be further polished.
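For concreteness about the loss being debated in the reviews above, here is a minimal sketch of the helper-based objective as I understand it from the description (helper example $x + 2r$ labelled by a standardly trained model); the function names, the simplified robust term, and the weight gamma are my own placeholders, not the authors' code:

```python
import torch
import torch.nn.functional as F

def hat_loss(model, std_model, x, y, r_pgd, beta=1.0, gamma=0.5):
    """Sketch of the helper-based training loss (my reconstruction).

    x:      clean batch, y: true labels
    r_pgd:  PGD perturbation found within the eps threat model
    beta:   weight on the robust term, gamma: weight on the helper term
    """
    x_adv = x + r_pgd          # standard adversarial example (perturbation within eps)
    x_helper = x + 2 * r_pgd   # helper example, outside the threat model

    # Helper label: the class predicted by a standardly trained (non-robust) model.
    with torch.no_grad():
        y_helper = std_model(x_helper).argmax(dim=1)

    loss_clean = F.cross_entropy(model(x), y)
    loss_robust = F.cross_entropy(model(x_adv), y)            # simplified; TRADES uses a KL term
    loss_helper = F.cross_entropy(model(x_helper), y_helper)  # encourage non-robustness beyond eps

    return loss_clean + beta * loss_robust + gamma * loss_helper
```

Seeing it spelled out this way also makes the alternative helper $x' = x + r + r'$ from point 5) easy to try: only the helper construction line would change.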
The authors propose a simple addition to adversarial training methods that improves model performance without significantly changing the complexity of training. The initial reviews raised some questions about whether experiments were sufficiently extensive, but these issues were resolved during the rebuttal and discussion period, resulting in a strong consensus that the paper should be published.
The paper proposes a new algorithm for sentence segmentation which can be applied to various sequential tagging problems. The motivation and the description of the algorithm are clearly given, and the proposed method achieved state-of-the-art results for most of the problems and datasets. The proposed method tries to find all possible segments in the given input sequence, to estimate the scores of the segments using pre-trained BERT representations, and to find the best sequence of segments using a dynamic programming algorithm. The proposed method is general enough to apply to various sequential tagging problems and natural language sentence analysis. While applying the proposed method to sequential tagging problems in natural language processing appears new, the dynamic programming approach to sequential analysis is a well-known method in the speech recognition community, where a sequence of phonemes is segmented into a word sequence. Also, a similar method has been applied to the segmentation of character sequences into word sequences for languages that have no delimiters between words, such as Chinese and Japanese. From this point of view, the novelty of the paper is not high. On the other hand, the application of BERT-based representations to sequence segmentation tasks such as sentence segmentation and sequential labelling may be new, and the finding that this method can attain state-of-the-art performance on those problems could be worth reporting. <doc-sep>The paper is well-written, easy to follow and clear. However, the novelty and main contribution of the paper are not clear. The authors used a scoring model to score the composition of each segment, as well as the probability of having a specific label for the segment. The BERT language model is used in the paper to encode the input sequence. The training part is more like supervised training, and a dynamic programming (DP) approach is used for inference. It is not clear how DP contributes to the success of the model, as the scores for segments are derived during training (it seems most of the success comes from the labeled data (i.e., supervised training) and the BERT encoding). One other thing about formatting and citing references: some of the references are published in conference proceedings, so it is not clear why the authors cited the arXiv versions.<doc-sep>This paper presents a method called LUA (Lexical Unit Analysis) for general segmentation tasks. LUA scores all the valid segmentations of a sequence and uses Dynamic Programming to find the segmentation with the highest score. In addition, LUA can incorporate labeling of the segments as an additional component for span labeling tasks. Pros: 1. LUA overcomes the shortcomings of sequence labeling as a token-based tagging method, and of span-based models as well, by treating them separately. 2. The decomposition into label scoring and span scoring allows the pre-computation of the maximum label score for each span, reducing the complexity. 3. This method empirically achieves state-of-the-art performance on 13 out of 15 datasets. Cons: 1. The novelty is incremental, as the idea of calculating span-based and label-based scores with DP has been used widely in constituent parsing, which applies interval DP in a similar way. Also see the semi-CRF model (Sunita Sarawagi and William W. Cohen, 2004). 2. The way a neural model is used to calculate the span-based score seems very arbitrary (Eq. 3), without any explanation of why it is designed this way. 3.
Label correlations are used to mimic correlation scoring; however, the transitions between spans are not explicitly modeled. Questions: 1. LUA is only used in the inference stage. Do you think that by using LUA in training as well, though slower, the performance could be further improved? 2. Do you have any intuition for why the scoring function (Eq. 3) is designed that way?
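To make the comparison with semi-CRFs and interval DP concrete, here is a minimal sketch of the segment-level Viterbi-style decoding that, as I understand it, LUA performs at inference time; `score(i, j)` is a placeholder standing in for the BERT-based span-plus-label scorer (Eq. 3), not the authors' implementation:

```python
def best_segmentation(n, score, max_len=None):
    """DP over segmentations of a length-n sequence.

    score(i, j) should return the best (span score + label score) of the
    segment covering positions i..j-1.
    """
    NEG_INF = float("-inf")
    max_len = max_len or n
    best = [NEG_INF] * (n + 1)   # best[j]: best total score of a segmentation of prefix [0, j)
    back = [0] * (n + 1)
    best[0] = 0.0
    for j in range(1, n + 1):
        for i in range(max(0, j - max_len), j):
            cand = best[i] + score(i, j)
            if cand > best[j]:
                best[j], back[j] = cand, i
    segments, j = [], n          # recover the argmax segmentation from the backpointers
    while j > 0:
        i = back[j]
        segments.append((i, j))
        j = i
    return best[n], segments[::-1]
```

With the per-span maximum label score pre-computed, this decoding needs O(n * max_len) score lookups, which is where the complexity reduction mentioned in Pro 2 comes from.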
This paper is concerned with sequence segmentation. The authors introduce a framework which they call 'lexical unit analysis': a neural network is used to score spans, and then dynamic programming is used to find the best-scoring overall segmentation. The authors present extensive experiments on various Chinese NLP tasks, obtaining better results than the systems they compare to. Reviewers raised concerns, including about novelty. In my view, beyond beating the state-of-the-art baselines on the chosen tasks, it is hard to extract an actionable insight or novel conceptual understanding. Therefore, the paper is not recommended for acceptance in its current form.
The authors define the task of solving a family of differential equations as a task of gradient-based meta-learning generalizing the gradient-based model agnostic meta-learning to problems with differentiable solvers. The problem is well formulated. Numerical integrators of differential equations can be sensitive to choices of parameters or initial conditions. At the same time numerical integration is a computationally complex task that can benefit from meta-learning. Iterative solvers and their surrogate meta-solvers can both be implemented by a neural networks which makes the problem approachable by gradient based meta learning. The authors demonstrate their approach on a family of 1D Poisson equations and incompressible flow simulations. The authors successfully demonstrate the advantages of using gradient based meta solving with a neural network architecture for the meta-solver over a baseline learner and over regular supervised learning on this task. The presented work is solid. The main concern is with its limited audience in the scope of the conference and potential applications beyond those presented in the paper. <doc-sep>This paper introduces a framework for learning the parameters of computational algorithms in order to improve runtime and/or error. They study the specific case of linear systems that arise in PDE solvers, showing an objective whose solution is an initialization that decreases the number of Jacobi iterations required to solve the Poissson equation as well as empirical results for both Jacobi and SOR on several PDE systems. Strengths: 1. The paper applies gradient-based methods to important problems of learning initializers for iterative methods in scientific computing. 2. The authors provide a guarantee for initializing the Jacobi iteration, albeit under what seems like a restrictive assumption on the model capacity. 3. The authors demonstrate improvement over simply learning one mapping (“supervised learning”) rather than back propagating through iterations. Weaknesses: 1. While the improvement over the “supervised learning” setting is interesting, the evaluation largely seems to be in regimes where the error is far too high for practical applications. For example, in Table 1 the MSE of even the best approach seems quite high, although it is difficult for me to get a sense of what a good scale is. Would the advantage continue to hold and be significant-enough to be interesting if the methods were given sufficient number of iterations for practical purposes? 2. There is no demonstration of practical application utility, i.e. whether going through the trouble learning this initialization is actually useful. Is it more useful for me to spend the (likely substantial) amount of effort of back propagating through a lot of classical solves in order to get a better initialization, or just to use the classical solver to begin with. As an example, in the field of neural PDE solvers there is often a demonstration of end-to-end computational savings provided (c.f. Li et al., (2021)). 3. While the claimed framework is very general, it is only studied for linear system solving. The authors also do not compare their overall framework to the substantial work on data-driven algorithm design, which has been studying these problems both theoretically and empirically for quite some time (e.g. Hutter et al. (2011), Balcan (2020), Mitzenmacher & Vassilvitskii (2020)). References: Balcan. *Data-driven Algorithm Design*. 
In Roughgarden, *Beyond the Worst-Case Analysis of Algorithms,* 2020. Hutter et al. *Sequential model-based optimization for general algorithm configuration*. ICLLO 2011. Li et al. *Fourier Neural Operator for Parametric Partial Differential Equations*. ICLR 2021. Mitzenmacher & Vassilvitskii. *Algorithms with Prediction*. In Roughgarden, *Beyond the Worst-Case Analysis of Algorithms,* 2020. While the problem setup is reasonably well-motivated and some of the empirical results are interesting, it is not clear to me how practically relevant the empirical results are for the problems being studied. The very general framework is also only discussed in the restricted case of linear system solving for PDEs. As a result I tend to lean against acceptance. <doc-sep>This paper proposes leveraging data from previous problem instances to improve efficiency of solving similar ones in the future. A general gradient-based method is proposed, which is applied to generating initial guesses to differential equation solutions. This problem is formulated as a meta-learning problem. Strengths: - The paper proposes a very general formulation of “meta-solving” numerical problems. Thorough theoretical foundation and justifications are provided. Weaknesses: - The only use-case that is thoroughly empirical validated is solving PDEs. As the paper mentions, other applications, such as root-finding, are applicable. Only evaluating the framework on one application does not showcase its general applicability. - Data augmentations are required for the incompressible flow simulation experiment. Why isn’t it possible for the meta-solver to learn without these augmentations? - The formulation of the dataset for the experiment in 3.2 seems arbitrary. Why are the two previous timesteps required? Are there stronger baselines that can be compared against? For example, are there problem-specific heuristic initial guesses that can be used that leverage domain knowledge about the particular problem? Typos: Section 2.2 - “is a algorithm” - “to find a good initial weights” - “for the meta-solving problems” - “may not be an initial weights” Section 2.3 - “\\theta does not depend on task \\tau” (paragraph 2) - “\\theta is weights of another” (paragraph 3) - “In this work, meta-learning approach” (paragraph 4) - “tested with multi steps of \\Phi” (paragraph 4) Algorithm 2 - “differntiable solver” Section 3.1.2 - “tends to ignore high frequencies and more focus on low frequencies” (paragraph 4) The paper proposes a general framework for efficiently finding solutions to numerical problems, but only evaluates the framework on PDE problems. Furthermore, additional tricks, such as data augmentations and using the previous two timesteps of the solution, are required to make the method work well empirically. I’m not very familiar with meta-learning or PDE solvers, so I’m not very confident in my assessment. <doc-sep>The paper proposed a gradient-based algorithm GBMS to solve PDEs based on the solutions of other similar problems. In GBMS, a network is trained to produce good initial guess for the iterative solver of the PDE. Numerical experiments are performed to show the effectiveness of the method. Strengths: - The paper proposed to predict a good initial guess for traditional PDE solvers, so the PDE solver would converge fast. Also, by using traditional PDE solver, the obtained solution is usually more accuracy than other purely data-driven ML methods. 
Weaknesses: - The authors spent a lot of effort to create a new terminology “meta-solving”, which has a broad meaning, and many other algorithms can be formulated in this way. However, this is only a new terminology; it is not a new idea or a new algorithm. From the paper, there is no clear evidence of why we would need this new terminology, or for what problems we would have to use it. - In fact, the paper only tested the problem of generating good initial guesses, which is not really a new idea. - The paper only tested the algorithm on a 1D Poisson equation and a 2D incompressible flow. Other challenging problems should be tested. - There is no comparison between the proposed method and other methods in terms of inference speed and accuracy. - The paper didn’t provide the details of the networks used. The paper introduced a new terminology and is more like a perspective paper than a comprehensive research article.
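To ground what "generating a good initial guess" means in practice, here is a toy sketch of training an initializer network by backpropagating through a few Jacobi iterations on a 1D Poisson system; the network, problem size, number of inner iterations, and loss are my own illustrative choices and are not taken from the paper:

```python
import torch
import torch.nn as nn

n = 32                                   # interior grid points of a 1D Poisson problem
A = 2 * torch.eye(n) - torch.diag(torch.ones(n - 1), 1) - torch.diag(torch.ones(n - 1), -1)
D_inv = torch.diag(1.0 / torch.diag(A))  # inverse of the diagonal, used by Jacobi

init_net = nn.Sequential(nn.Linear(n, 64), nn.Tanh(), nn.Linear(64, n))
opt = torch.optim.Adam(init_net.parameters(), lr=1e-3)

def jacobi(x, b, k=5):
    # k differentiable Jacobi sweeps: x <- x + D^{-1} (b - A x)
    for _ in range(k):
        x = x + (b - x @ A.T) @ D_inv.T
    return x

for step in range(1000):
    b = torch.randn(16, n)               # a "task" here is just a sampled right-hand side
    x0 = init_net(b)                     # the meta-solver proposes the initial guess
    xk = jacobi(x0, b, k=5)
    loss = ((xk @ A.T - b) ** 2).mean()  # residual after k iterations
    opt.zero_grad(); loss.backward(); opt.step()
```

Reporting the same residual metric for a zero (or previous-timestep) initial guess would be exactly the kind of baseline and end-to-end cost comparison the reviews ask for.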
The authors define the task of solving a family of differential equations as a gradient-based meta-learning task, generalizing gradient-based model-agnostic meta-learning to problems with differentiable solvers. According to the reviews, there were some concerns regarding the practical value of the paper, for example: (1) the proposed method is restricted to linear systems and relatively easy problems; (2) there is no demonstration of practical application utility; (3) it lacks a systematic comparison with other methods; and (4) some technical details are missing. There was quite a lot of discussion of the paper among the reviewers, and the consensus is that the paper is not solid enough for publication at ICLR in its current form (the reviewer who gave the highest score is less confident and does not want to champion the paper).
The authors propose a recourse methodology to deal with model biases/fairness issues in producing equitable outcomes for all users (classes). The paper addresses an important issue of dealing with model bias in producing fair user outcomes. My main concern with the paper is the lack of clear supporting arguments on why the choice of cost models (including knowing the distribution over the cost functions) is the right one? I am not clear about the assumptions made in the proposed cost function that each user adopts. First, how is the cost function effectively computed even with the use of a recourse set? Next, how is the recourse set itself guaranteed to always produce at least one reasonable counterfactual in the set? More specifically, even as the authors acknowledge knowing the exact cost function by each user is difficult, their explanation of using the recourse set to get around this problem with high confidence is unclear to me. Intuitively, it seems a measure like diversity will be more effective when the cost functions are unknown/private to the user. I am not completely convinced the proposed model of computing a recourse set to minimize the expected cost for the user is always effective in the absence of knowing the cost function even approximately. The experiments to show the effectiveness with respect to the proposed baseline are inconclusive with respect to natural measures like diversity. <doc-sep>In this paper, the problem of algorithmic recourse is studied where the goal is to find best recourse (counterfactual set) that is optimized for user cost. The author proposed new user-incurred cost evaluation method, Expected Minimum Cost (EMC), which approximate user satisfaction without assuming a fixed global user cost function, and instead consider user cost functions as hidden and user-specific. Specifically, the authors define cost function for each user as a set of feature-specific functions of user-incurred cost when transitioning between feature states, and define MinCost as the minimum transition cost across possible recourses. To cover diverse user cost functions, they propose to model user cost distribution with a hierarchical sampling procedure and estimated expected minimum cost by drawing samples from it. Next, they formulate a discrete optimization problem using EMC as objective, and propose a search algorithm (COLS) for best recourse generation. They introduce three new metrics for user satisfaction (all related to MinCost): FS@k, Coverage and PAC. Finally, they test with two real-world datasets and show that COLS achieves significant outperformance against baselines (that optimize for distance-based metrics) on the newly proposed metrics and show that their method is doing better in fairness as well. Pros - This paper proposes a new way of evaluating user satisfaction which differs from existing methods that measures on heuristics such as distance/diversity or assume a fixed global user cost function. It is more flexible and realistic, and thus could be an interesting direction to follow. The proposed formulation is quite novel and technically non-trivial, with some theoretical grounding. - The experimental results are very strong on the 3 newly proposed metrics. The authors also conduct extensive ablation studies on different aspects of the problem, although many of them are deferred to the supplementary. - The discussion is pretty comprehensive, and they also included a fairness analysis. Concerns - One major issue is on the readability of the paper. 
It is certainly good that the paper contains a lot of information; however, the main text currently seems a bit too packed, such that very limited detail about the main methodology is provided. In fact, both the core sampling and optimization algorithms are described in the appendix, and it is very hard for this reviewer to understand them solely based on the descriptions in section 4. Perhaps the authors could reorganize the content such that less space is spent on repeating the contributions/motivations. - As algorithmic recourse is a relatively new domain and may not be well known to a general audience, it might be better to translate the domain-specific terminology into plain language, or more general ML language, in the introduction. - It seems the newly introduced evaluation metrics are generated using the same sampling distribution used for computing EMC; wouldn’t it be a bit circular to evaluate something whose ground truth is closely related to the objective used for optimization? Is there any way to evaluate on more realistic user costs, rather than simulating them with the same distribution as the one used in EMC? The authors discuss a distributional-shift regime in the appendix, but there the ground-truth distribution is still from the same family as the EMC distribution (a mixture of percentile shift and linear cost). It would be more convincing if it came from a totally independent distribution. Questions - What is used as the initial starting set for COLS? - In the problem formulation in (2), does it mean that the best recourse set would contain at least one solution with the desired outcome, but that this solution may not be the one with the lowest cost? If so, how does one balance outcome and satisfaction? - What is the computational complexity of the algorithm? - Is there any downside to the underperformance on distance-based metrics? In this paper, the authors propose a new way of evaluating and optimizing user satisfaction. The technical contributions are solid and the results are rather promising, despite the potential bias toward EMC. The paper contains fruitful discussion and ablation studies, although it can be further improved in terms of clarity. Therefore, I would like to give a weak accept. <doc-sep>This work introduces a new method for identifying actionable recourses for users with user-specific cost functions. Users’ cost functions are hidden from the recourse method. The paper proposes a discrete optimization algorithm, COLS, to optimize the EMC objective. It further uses a popular real-world dataset to illustrate the performance. I enjoyed reading this paper in general. My major comments are: 1. In Section 4.1, it is assumed that there is a distribution D_c over all the cost functions for the population. Is the distribution D_c known or unknown? A more practical setting is that D_c is unknown. In that case, how can Monte Carlo estimation be used to approximate the expectation of the MinCost? For different users u, it is assumed that C_u follows the distribution D_c. However, in the introduction, it claims that “we propose a method for identifying a user-specific recourse set that contain at least one good solution for the user”. There seems to be an inconsistency between the motivation and the assumption: why do all users share the same distribution over cost functions? Is the framework generalizable to settings with different distributions? 2. Theorem 4.1 proves the monotonicity of the Cost-Optimized Local Search algorithm. But how does ExpMinCost(s_u, S_t^best; {C_i}_{i=1}^M) converge?
Theorem 4.1 does not imply that, but it is a very important question. 3. Why were Equation (3) and Equation (4) chosen as metrics to measure recourse quality? What is the advantage of choosing a threshold function? How should k be chosen in real cases? 4. In the numerical experiments, could you compare with other functions that measure recourse quality in previous recourse papers? I think my main concern is with FS@k. Is using FS@k equivalent to the following: assume there exists a black-box algorithm that can output an indicator of whether the total cost is smaller than k; then the distance function can be used to measure the recourse quality? Could you use numerical experiments to emphasize the advantage of using FS@k compared to other measure functions, such as a weighted sum of costs? 5. What is the computational complexity of your algorithm? How does it compare to other benchmarks? 6. Why is fairness an important issue in this work? Could you comment more on this part to motivate it? This paper studied an interesting problem. To improve the paper, the authors may want to illustrate the advantage of using FS@k and how it is very different from state-of-the-art measure functions, both conceptually and numerically. <doc-sep>This paper aims to find algorithmic recourse that has low cost for the users. Unlike previous work, the authors do not assume that there is a known global cost function that is shared by all users. I do not think this paper achieves what it sets out to do. It is mentioned in the related work that the closest literature to this paper is other cost-based approaches to finding recourse and that, different from those approaches, this paper drops the assumption that there is a known global cost function shared by all users. First, the paper formulates $\\mathrm{MinCost}(\\cdot;\\mathcal{C}_u)$ as the cost function of user $u$, which is characterized by the unknown transition matrices $\\mathcal{C}_u$. However, the paper then assumes a distribution $\\mathcal{D}$ over the $\\mathcal{C}_u$'s of different users is known and proposes to optimize $\\mathbb{E}_{\\mathcal{C} \\sim \\mathcal{D}}[\\mathrm{MinCost}(\\cdot;\\mathcal{C})]$, which is effectively a known global cost function (with respect to user state $\\mathbf{s}_u$ and recourse set $\\mathcal{S}$). Is this not the case? Having said that, the proposed cost function has a certain structure and it is still novel: (i) the authors propose a hierarchical cost distribution as the particular $\\mathcal{D}$ they consider, and (ii) by only considering the element with the minimum cost for each sample from $\\mathcal{D}$, they exploit the fact that each user only really requires one recourse that they are happy with in order to be satisfied. Proposing a new cost-based objective like this could still be a valuable contribution. But then, the paper needs to be positioned accordingly and highlight the merits of optimizing a cost-based objective structured in this new way. Note that the current experiments are not helpful in comparing against other cost-based objectives proposed in previous work: the cost functions of the users are simulated according to the proposed cost function, so of course a method that optimizes it will perform better than methods optimizing other cost functions.
For instance, at the end of "Q2," the authors conclude that high diversity is not necessary to satisfy individual users; this is of course true for the simulated users since their cost function is designed to ignore diversity in the first place. I believe the claim that the paper relaxes the assumption of knowing a global cost function is not true. However, it still introduces an interesting new objective to optimize for when finding algorithmic recourse.
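For concreteness about the objective under discussion, here is a minimal Monte Carlo sketch of the Expected Minimum Cost computation; the cost-function sampler and the linear per-feature costs below are toy stand-ins for the paper's hierarchical sampling procedure and feature-wise transition costs:

```python
import random

def expected_min_cost(user_state, recourse_set, sample_cost_fn, n_samples=100):
    """EMC(s_u, S) ~= (1/M) * sum_m  min_{r in S} Cost_m(s_u -> r)."""
    total = 0.0
    for _ in range(n_samples):
        cost = sample_cost_fn()  # one user cost function C ~ D
        total += min(cost(user_state, r) for r in recourse_set)  # MinCost for this sampled user
    return total / n_samples

# Toy illustration with random linear per-feature transition costs.
def sample_cost_fn(dim=3):
    w = [random.random() for _ in range(dim)]
    return lambda s, r: sum(wi * abs(ri - si) for wi, si, ri in zip(w, s, r))

emc = expected_min_cost(
    user_state=[0.2, 0.5, 0.1],
    recourse_set=[[0.6, 0.5, 0.1], [0.2, 0.9, 0.4]],
    sample_cost_fn=sample_cost_fn,
)
```

Written this way, the circularity concern is easy to state: if evaluation draws cost functions from the same sampler, a method that optimizes this estimate is favored by construction.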
This paper makes an interesting contribution to the literature on algorithmic recourse. More specifically, while existing literature assumes that there is a global cost function that is applicable to all the users, this work addresses this limitation and models user-specific cost functions. While the premise of this paper is interesting and novel, there are several concerns raised by the reviewers in their reviews and during the discussion: 1) While the authors allow flexibility to model user-specific cost functions, they still make assumptions about the kind of cost functions. E.g., they consider three hierarchical cost sampling distributions, which model percentile shift, linear shift, and a mixture of these two shifts, respectively. The authors do not clearly justify why these shifts and a mixture of these shifts are reasonable. Prior work already considers far more flexible ways of modeling cost functions (in a global fashion). For example, Rawal et al. 2020 actually learns costs by asking users for pairwise feature comparisons. Doesn't this kind of modeling allow more flexibility than sticking to percentile/linear shifts and their mixture? 2) Several reviewers pointed out that the main paper does not clearly explain all the key contributions. While the authors have updated their draft to address some part of this concern, reviewers opine that the methods section of the paper still does not clearly discuss the approach and the motivation for the various design choices (e.g., why a mixture of percentile and linear shifts?). 3) Reviewers also opine that some of the evaluation metrics need more justification. For instance, why is fraction satisfied measured at k = 1, i.e., FS@1, and why not FS@2 or FS@3? Will the results look different for other values of k here? 4) Given that Rawal et al. 2020 is a close predecessor of this work, it would be important to compare with that baseline to demonstrate the efficacy of the proposed approach. This comparison is missing. Given all the above, we are unable to recommend acceptance at this time. We hope the authors find the reviewer feedback useful.
This paper proposes an effective Deliberated Domain Bridging (DDB) approach for domain adaptive semantic segmentation (DASS). To this end, it takes advantage of two data mixing techniques, region-level mix and class-level mix, to train two corresponding teacher models, which eventually guide one student model on the target domain. It has been tested on several benchmarks (GTA5 to Cityscapes, GTA5 + Synscapes to Cityscapes, GTA5 to Cityscapes + Mapillary). - Strengths: 1. It is a well-written paper that addresses the limitations of previous methods (e.g., global interpolation -> pixel-wise ambiguity), presents a toy game to justify using both coarse-grained and fine-grained DB, and proposes a novel learning architecture with a multi-teacher, single-source distillation method. 2. The proposed method is convincingly shown to be effective on several benchmarks (e.g., single-source, multi-source, and multi-target settings), widening the gap with the previous state of the art on each benchmark. - Weaknesses: 1. As the authors illustrated in the limitations, I am also a bit concerned with the training efficiency and complexity, since the proposed method requires alternating optimization processes. One simple end-to-end alternative is conducting EMA training on one model while combining the region-level and class-level mixing techniques. It would be better to show a brief study of how the authors could extend this to a simple end-to-end learning architecture. 2. Lack of ablation study on some hyperparameters: 1) alpha in Equation 5: the authors suggested updating the teacher models with EMA to obtain denoised pixel-wise pseudo labels. To prove this, an ablation study on alpha needs to be explored. 2) x_aug in Equation 9: the justification for using augmented inputs for the student model needs to be shown empirically. Yes, the authors addressed the limitation. <doc-sep>To address UDA in semantic segmentation, this work uses two types of data mixing strategies to artificially create intermediate bridging domains between source and target. The paper starts with a detailed analysis comparing different data mixing strategies, either done globally (mixup [61]) or locally (CowMix [13], FMix [17], CutMix [60] and ClassMix [42]). The analysis demonstrates favorable results when using local data mixing strategies for UDA in segmentation, in particular CutMix (coarse region-wise mixing) and ClassMix (fine class-wise mixing). Based on the results of the analysis, this work proposes a simple way to combine the two mixing strategies CutMix and ClassMix. In the course of training, there are five models: two teacher models trained with CutMix and ClassMix, two EMA models of the two teachers, and one student model trained using the teachers' pseudo-labels. Training is done in multiple rounds (fixed at 4 in the experiments). In each round: - The two teachers are first trained separately with CutMix and ClassMix. - The student is then trained with pseudo-labels of the two EMA models of the two teachers. The pseudo-label of a given target sample is determined as a weighted combination of the softmax scores of the two EMA models (Eqn. 12). The weights have size $H \\times W \\times K$ with $K$ classes; at each spatial position, the weight vector over $K$ classes is the softmax over the feature distance to class centroids (Eqn. 11). Color jittering and Gaussian blur are applied to the target sample when training the student. - The two teachers are initialized by the student.
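For my own understanding of the cross-path knowledge distillation step, here is a minimal sketch of how I read the prototype-weighted fusion of the two teachers' predictions (Eqns. 11-12); tensor shapes, names, and the normalization across the two paths are my reconstruction, not the authors' code:

```python
import torch

def fuse_pseudo_labels(prob_cu, prob_cl, feat_cu, feat_cl, cent_cu, cent_cl):
    """Combine the CutMix- and ClassMix-trained teachers' soft predictions.

    prob_*: (K, H, W) softmax outputs of the two EMA teachers
    feat_*: (D, H, W) pixel features from each teacher
    cent_*: (K, D)    per-class feature centroids computed on the target set
    """
    def proto_logits(feat, cent):
        D, H, W = feat.shape
        f = feat.reshape(D, -1).T              # (H*W, D)
        dist = torch.cdist(f, cent)            # (H*W, K) distance to each class centroid
        return (-dist).T.reshape(-1, H, W)     # closer centroid -> larger logit, shape (K, H, W)

    # Per-pixel, per-class weights over the two paths (my reading of Eqn. 11).
    w = torch.softmax(torch.stack([proto_logits(feat_cu, cent_cu),
                                   proto_logits(feat_cl, cent_cl)]), dim=0)  # (2, K, H, W)
    fused = w[0] * prob_cu + w[1] * prob_cl    # weighted soft prediction (Eqn. 12)
    return fused.argmax(dim=0)                 # (H, W) hard pseudo-label for the student
```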
** Strengths ** Overall this is an interesting technical paper that combines multiple existing strategies, namely CutMix [60], ClassMix [42], mean teacher [42], prototypical weighting [62] and pseudo-labelling [30]. Empirical results demonstrate better performance than previous SOTAs on comparable backbone (resnet101) and segmentation framework (deeplab-v2). Experiments are extensive. The paper is well-written and easy to follow. ** Weaknesses ** - My main concern is with the technical novelties of this work. The analysis comparing different mixing techniques, claimed as the first contribution, is somewhat interesting. However the main proposed approach is merely a mix of previous works. Actually there are no new insights that I could get from this work. - It's not clear to me how is the intermediate model selected at each stage. Is the target's validation set used to select the best model? If true, is there a risk of supervision leak from target validation set? - Missing details for the multi-source and multi-target experiments. I'm currently on the borderline, slightly leaning toward the positive side, thanks to the good results. My final decision will be adjusted based on the feedback from the authors and the discussion with other reviewers. ** Typos ** - L185: Eqn. 3 instead of Eqn. 2 - SuppMat: Algo.1 - L7 & L12: Eqn. 3 instead of Eqn. 2 ========== Post-rebuttal I thank the authors for being active during the rebuttal and addressing all of my concerns. I'm happy to increase my score. Limitation on training complexity is discussed in the supplementary material. No discussion on potential negative societal impacts was given. <doc-sep>This paper proposes a deliberated domain bridging (DDB) method for domain adaptative semantic segmentation, where the target labels are not available during the training. In DDB, there are two parts: 1) a dual-path domain bridging step to train two teacher models with two intermediated domains using the coarse-wise and fine-wise, i.e., region-level and semantic-level, data mixing techniques. 2) a cross-path knowledge distillation step to adaptively transfer the knowledge from the two teacher models to a student model. The two steps are repeated for several rounds for a good performance. Extensive experiments on both single-source domain and multi-source multi-target domain settings are conducted to validate DDB’s superiority. Pros: 1. This paper proposes an effective method to significantly boost the UDA segmentation performance in various settings. 2. The comprehensive ablations are done to clearly show 1) the complementarity between the two teacher models and 2) the effectiveness of the distillation step. Cons: 1. Since GTA5 to Cityscapes and GTA5 + Synscapes to Cityscapes are done, what is the performance in Synscapes to Cityscapes? This experiment shows which dataset contributes more to adapt to the real dataset. 2. There are too many symbols, which makes the paper hard to follow. What do the numbers righter after the approach name in Tables 2, 3, and 4 mean? For example, ADVENT(19), BDL (19), FADA(20), etc. 3. The authors claim that soft distillation and hard distillation are compared in Table 5. However, the ‘soft distillation’ choice and the explanation are missing in that table, which is a bit confusing. 4. DDB requires two rounds for a good convergence. In each round, it needs to train three individual models and calculate two groups of category centroids by scanning the target training set for two teacher models respectively. 
This makes the approach cumbersome and may require more training time than others. The authors are encouraged to discuss the above issue with detailed analysis. Besides, this cumbersome training process seems in conflict with the stated ‘elegant’ method. 5. The following paper can be included for comparison since it also studies the data mixing technique in UDA semantic segmentation. Besides, the difference between DACS which also utilizes the data mixing technique in UDA is not well stated in the paper. Dsp: Dual soft-paste for unsupervised domain adaptive semantic segmentation. Proceedings of the 29th ACM International Conference on Multimedia. 2021: 2825-2833. The authors discuss the limitations of this paper in the supplementary material. No negative social impact has been discussed. <doc-sep>This paper is about unsupervised domain adaptation for the task of semantic segmentation. The paper argues for the importance of gradually bridging the domain gap, instead of a direct attempt to transfer a model from the source to the target domain. Motivated by an empirical analysis about different data mixing technologies, the paper explores two region-based mixing strategies (coarse regions and finer class-wise/mask regions) as domain bridges. Specifically, two models are trained on the two different domain bridges (Dual-Path-Domain-Bridge), which act as ensembled supervision for a single student model (Cross-path knowledge distillation). This student model can then initialize the teacher models for another round of these two steps. Experimental results in three different settings confirm the effectiveness of the proposed approach with state-of-the-art results on standard benchmarks. ### Strengths - The ablation studies in Tables 5-7 are great and showcase the impact of individual components - The results are impressive, with a clear improvement on the standard benchmark (GTA5->CityScapes), as well as other settings (multi-source and multi-target) - The experiments in Table 1 are a good motivation for the choice of data mixing strategies (CutMix and ClassMix) ### Weaknesses - The writing, specifically the motivation and positioning with respect to prior work, needs improvement. - There may be alternative domain bridges than data mixing, like methods that rely on self-training and choose confident target pseudo labels as intermediate source domain. - The justification for exploring new domain bridges in lines 37-41 is vague and unclear: What are "unexpected artifacts in the global input space"? What "optimization constraints" are referred to? - I do not see why the paper only evaluates two domain bridging strategies in the ensemble. One could also include more. Relating to ensemble methods, one could expect improvements if an additional data mixing strategy is "orthogonal". One recent successful example for a global mixing strategy is [A] and could be easily integrated. - There is a related work on domain bridges for semantic segmentation that was not included: [B] - In line 185, shouldn't the reference go to Eq. 3? - I do not quite understand why the mixing weights in Eq. 11/12 help. Aren't the softmax values (i.e., scores) already an indication how far away a sample is from the decision boundary? - It would be good to point the reader to the supplemental material for a detailed description of the training strategy. - It's hard to understand and see details in Figure 1. **References:** - [A] FDA: Fourier Domain Adaptation for Semantic Segmentation. Yang and Soatto. 
CVPR'20 - [B] Domain Bridge for Unpaired Image-to-Image Translation and Unsupervised Domain Adaptation. Pizzati et al. WACV'20 Potential societal impacts are not discussed. I think a paper on domain adaptation should include a discussion of potential biases that are carried over from source domains (specifically because these are often synthetic datasets, which frequently contain hand-crafted components, like object sampling distributions, etc.).
**Summary**: This paper proposes an effective Deliberated Domain Bridging (DDB) approach for domain adaptive semantic segmentation (DASS). It leverages two data mixing techniques, region-level mix and class-level mix, to train two corresponding teacher models, which then guide one student model on the target domain. It is evaluated on multiple benchmarks. **Strength**: The paper is well written. It is well motivated based on the limitations of previous methods. The proposed approach is novel, interesting, and effective. The experiments (with the toy game) are solid. **Weakness**: Training efficiency and complexity. Lack of ablation study on some hyperparameters and design choices. Some missing references/comparisons; unclear positioning of the work w.r.t. prior work. **Recommendation**: The paper receives consistently positive ratings. After the rebuttal, most of the reviewers’ concerns are addressed and the paper clearly has strengths. The AC thus suggests acceptance. The AC strongly suggests that the authors incorporate their rebuttal (e.g., additional results) into their camera-ready version.
This paper investigates reinforcement learning with a perturbed reward signal. In particular, the paper proposes a particular model for adding noise to the reward function via a confusion matrix, which offers a nuanced notion of reward-noise that is not too complicated so-as to make learning impossible. I take this learning setting to be both novel and interesting for opening up areas for future work. The central contributions of the work are to 1) leverage a simple estimator to prove the convergence of Q-Learning under the reward-perturbed setting along with the sample-complexity of a variant of (Phased) Q-Learning which they call "Phrased" Q-Learning, and 2) An algorithmic scheme for learning in the reward-perturbed setting (Algorithm 1), and 3) An expansive set of experiments that explore the impact of various reward models on learning across different environment-algorithm combinations. The sample complexity term extends Phased Q-Learning to incorporate aspects of the reward confusion matrix, and to my knowledge is novel. Further, even though Theorem 1 is unsurprising (as the paper suggests), I take the collection of Theorem 1, 2, and 3 to be collectively novel. Indeed, the paper focuses on an interesting and relatively unexplored direction for RL. Apart from the work cited by the paper (and perhaps work like Krueger et al. (2016), in which agents must pay some cost to observe true rewards), there is little work on learning settings of this kind. This paper represents a first step in gaining clarity on how to formalize and study this problem. I did, however, find the analysis and the experiments to be relatively disjointed -- the main sample complexity result presented by the paper (Theorem 2) was given for Phased Q-Learning, yet no experiments actually evaluate the performance of Phased Q-Learning. I think the paper could benefit from experiments focused on simple domains that showcase how traditional algorithms do in cases where it is easier to understand (and visualize) the impact of the reward perturbations (simple chain MDPs, grid worlds, etc.) -- and specifically experiments including Phased Q-Learning. Pros: - General, interesting new learning setting to study. - Initial convergence and sample complexity results for this new setting. - Depth and breadth of experimentation (in terms of diversity of algorithms and environments), includes lots of detail about the experimental setup. Cons: - Clarity of writing: lots of typos and bits of math that could be more clear (see detailed comments below) [Fixed] - The plots in Section 4 are all extremely jagged. More trials seem to be required. Moreover, I do think simpler domains might help offer insights into the reward perturbed setting. [Fixed] - The reward perturbation model is relatively simple. Some high level questions/comments: - Why was Phrased Q-Learning not experimented with? - Why use majority voting as the rule? When this was introduced it sounded like any rule might be used. Have you tried/thought about others? - Your citation to Kakade's thesis needs fixing; it should read: "Kakade, Sham Machandranath. On the sample complexity of reinforcement learning. Ph.D Thesis. University of London, 2003." (right now it is cited as "(Gatsby 2003)" throughout the paper) - You might consider picking a new name for Phrased Q-Learning -- right now the name is too similar to Phased Q-Learning from [Kearns and Singh NIPS 1999]. - As mentioned in the "cons" section, the confusion matrix is still a somewhat simple model of reward noise. 
I was left wondering: what might be the next most complicated form of adding reward noise? How might the proposed algorithm(s) respond to this slightly more complex model? That is, it's unclear how general the results are, or if they are honed too tightly to the specific proposed reward noise model. I was hoping the authors could respond to this point. Section 0) Abstract: - Not immediately clear what is meant by "vulnerability" or "noisy settings". Might be better to pick a more clear initial sentence (same can be said of the "sources of noise..."") Section 1) Introduction: - "adversaries in real-world" --> "adversaries in the real-world" - You might consider citing Loftin et al. (2014) regarding the bulleted point about "Application-Specific Noise". - "unbiased reward estimator aided reward robust reinforcement learning framework" --> this was a bit hard to parse. Consider making more concise, like: "unbiased reward estimator for use in reinforcement learning with perturbed rewards". - "Our solution framework builds on existing reinforcement learning algorithms, including the recently developed DRL ones" --> cite these up front So, cite: Q-Learning, CEM, SARSA, DQN, Dueling DQN, DDPG, NAF, and PPO, and spell out the acronym for each the first time you introduce them. - "layer of explorations" --> "layer of exploration" Section 2) Problem Formulation - "as each shot of our" --> what is 'shot' in this context? - "In what follow," --> "In what follows," - "where 0 < \\gamma \\leq 1" --> Usually, $\\gamma \\in [0,1)$, or $[0,1]$. Why can't $\\gamma = 0$? - The transition notation changes between $\\mathbb{P}_a(s_{t+1} | s_t)$ and $\\mathbb{P}(s_{t+1} | s_t, a_t)$. I'd suggest picking one and sticking with it to improve clarity. - "to learn a state-action value function, for example the Q-function" --> Why is the Q-function just an example? Isn't is *the* state-action value function? That is, I'd suggest replacing "to learn a state-action value function, for example the Q-function" with "to learn a state-action value function, also called the Q-function" - "Q-function calculates" --> "The Q-function denotes" - "the reward feedbacks perfectly" --> "the reward feedback perfectly" - I prefer that the exposition of the perturbed reward MDP be done with C in the tuple. So: $\\tilde{M} = \\langle \\mathcal{S}, \\mathcal{A}, \\mathcal{R}, C, \\mathcal{P}, \\gamma \\rangle$. This seems the most appropriate definition, since the observed rewards will be generated by $C$. - The setup of the confusion matrix for reward noise over is very clean. It might be worth pointing out that $C$ need not be Markovian. There are cases where C is not just a function of $\\mathcal{S}$ and $\\mathcal{R}$, like the adversarial case you describe early on. Section 3) Learning w/ Perturbed Rewards - Theorem 1 builds straightforwardly on Q-Learning convergence guarantee (it might be worth phrasing the result in those terms? That is: the addition of the perturbed reward does not destroy the convergence guarantees of Q-Learning.) - "we firstly" --> "we first" - "value iteration (using Q function)" --> "value iteration" - "Definition 2. Phased Q-Learning" --> "Definition 2. Phrased Q-Learning". I think? Unless you're talking about Phased Q from the Kearns and Singh '99 work. - "It uses collected m samples" --> "It uses the collected m samples" - Theorem 2: it would be helpful to define $T$ since it appears in the sample complexity term. Also, I would suggest specifying the domain of $\\epsilon$, as you do with $\\delta$. 
- "convergence to optimal policy" --> "convergence to the optimal policy" - "The idea of constructing MDP is similar to" --> this seems out of place. The idea of constructing which MDP? Similar to Kakade (2003) in what sense? - "the unbiasedness" --> "the use of unbiased estimators" - "number of state-action pair, which satisfies" --> "number of state-action pairs that satisfy" - "The above procedure continues with more observations arriving." --> "The above procedure continues indefinitely as more observation arrives." Also, which procedure? Updating $\\tilde{c}_{i,j}$? If so, I would specify. - "is nothing different from Eqn. (2) but with replacing a known reward confusion" --> "replaces a known reward confusion" 4) Experiments: - Diverse experiments! That's great. Lots of algorithms, lots of environment types. - I expected to see Phrased Q-Learning in the experiments. Why was it not included? - The plots are pretty jagged, so I'm left feeling a bit skeptical about some of the results. The results would be strengthened if the experiments were repeated for more trials. 5) Conclusion: - "despite of the fact" --> "despite the fact" - "finite sample complexity of Q-Learning with estimated surrogate rewards are given" --> It's not really Q-Learning, though. It's a variant of Q-Learning. I'd suggest being explicit about that. Appendix: - "It is easy to validate the unbiasedness of proposed estimator directly." --> "It is easy to verify that the proposed estimator is unbiased directly." - "For the simplicity of notations" --> "For simplicity" - "the Phrased Q-Learning could converge to near optimal policy" --> ""the algorithm Phrased Q-Learning can converge to the near optimal policy"" - "Using union bound" --> "Using a union bound" - Same comment regarding $\\gamma$: it's typically $0 \\leq \\gamma < 1$. - Bottom of page 16, the second equation from the bottom, far right term: $c.j$ --> $c,j$. - "Using CauchySchwarz Inequality" --> "Using the Cauchy-Schwarz Inequality" References: Loftin, Robert, et al. "Learning something from nothing: Leveraging implicit human feedback strategies." Robot and Human Interactive Communication, 2014 RO-MAN: The 23rd IEEE International Symposium on. IEEE, 2014. Krueger, D., Leike, J., Evans, O., & Salvatier, J. (2016). Active reinforcement learning: Observing rewards at a cost. In Future of Interactive Learning Machines, NIPS Workshop.<doc-sep>## Summary The authors present work that shows how to deal with noise in reward signals by creating a surrogate reward signal. The work develops a number of results including: showing how the surrogate reward is equal in expectation to the true reward signal, how this doesn't affect the fixed point of the Bellman equation, how to deal with finite and continuous rewards and how the convergence time is affected for different levels of noise. They demonstrate the value of this approach with a variety of early and state-of-the-art algorithms on a variety of domains,, and the results are consistent with the claims. It would be useful to outline how prior work approached this same problem and also to evaluate the proposed method with existin approaches to the same problem. I realise that this is the first method that estimates the confusion matrix rather than assuming it is known a priori but there are obvious ways around this, e.g. the authors first experiment assumes the confusion matrix is known, so this would be a good place to compare with other competing techniques. 
Also, the authors have a way of estimating this, so they could plug it into the other algorithms too. I also have some concerns about the clarity and precision of the proofs, although I do not have any reason to doubt the Lemma/Theorem correctness (see below). The weakest part of the approach is in how the true reward is estimated in order to estimate the confusion matrix. It uses majority vote (which is only really possible in the case of finite rewards with noise sufficiently low that this will be a robust estimate). Perhaps some other approaches could be explored here too. Finally, there is discussion about adversarial noise in rewards at the beginning, but I am not sure the theory or the evaluations really address it. Nonetheless, I do not know whether the claim of originality is true (in terms of the estimation of the confusion matrix). If it is, then the work is a significant and interesting advance, and is clearly widely applicable in domains with noisy rewards. It would be interesting to see a more tractable approach for continuous noise too, but this would probably involve assumptions (smoothness? Gaussianity?), and doesn't impact the value of this work. ## Detailed notes There is a slight sloppiness in notation in equation (1). This uses $\\tilde{r}$ as a subscript of e, but r is +1 or -1 and the error variables are e_+ and e_- (not e_{+1} and e_{-1}). The noise levels in Atari (Figure 3) show something quite interesting which could be commented upon. For noise below 0.5 the surrogate reward works roughly similarly to the noisy reward, but when the noise level goes above this, the surrogate reward clearly exploits the increased information content (similar to a noisy binary channel with over 0.5 noise). This may have implications for adversarial noise. There are also some issues with the proofs which I spotted, outlined below: ### Lemma 1 proof The proof of Lemma 1, I think, fails to achieve its objective. The first pair of equations is not a rewrite of equation (1). I believe that the authors intend for this to be a consequence of Equation (1) but do not really demonstrate this clearly. Also, the authors seem to switch between binary rewards -1 and +1 and two levels of reward r- and r+, leading to some confusion. I would suggest the latter throughout as it is more general but involves no more terms. I suggest the following as an outline for the proof. It would help for them to define what they mean by the different surrogate values (as they currently do) and explain that these values are therefore $$\\hat{r}_- = \\frac{(1 - e_+) r_- - e_- r_+}{1 - e_+ - e_-}, \\qquad \\hat{r}_+ = \\frac{(1 - e_-) r_+ - e_+ r_-}{1 - e_+ - e_-}$$ from equation (1). What is left is for them to actually prove the Lemma, namely that the expected value of $\\hat{r}$ is $$\\mathbb{E}(\\hat{r}) = p(\\hat{r} = \\hat{r}_-)\\,\\hat{r}_- + p(\\hat{r} = \\hat{r}_+)\\,\\hat{r}_+ = \\mathbb{E}(r),$$ where the probabilities relate to the surrogate reward taking their respective values. And just stylistically, I would avoid writing "we could obtain" and simply write "we obtain". Lemma 2 achieves this more clearly with greater generality. ### Theorem 1 proof At the end of p13, the proof of the expected value loses track of the chosen action a. I would suggest the authors replace $$\\mathbb{P}'(s,s',\\hat{r})$$ with $$\\mathbb{P}'(s,a,s',\\hat{r})$$ and then define it. Likewise $$\\mathbb{P}(s,s')$$ should be $$\\mathbb{P}(s,a,s')$$ (and also defined).
I am also a little uncomfortable with the switch from: $$max_{b \\in \\mathcal{A}} | Q(s',b) - Q*(s',b)|$$ in the second to last line of p13, which refers to the maximum Q value associated with some state s', to $$||Q-Q*||_{\\infty}$$ in the next line which is the maximum over all states and actions. The equality should probably be an inequality there too. Throughout this the notation could be much better defined, including how to interpret the curly F and how it acts in the conditional part of an expectation and variance. Finally, there is a bit too free a use of the word "easily" here. If it were easy, then the authors could do it more clearly I think. Otherwise, please refer to the appropriate result in the literature. <doc-sep>The paper aims at studying the setting of perturbed rewards in a deep RL setting. Studying the effect of noise in the reward function is interesting. The paper is quite well-written. However the paper studies a rather simple setting, the limitations could be discussed more clearly and there are one or two elements unclear (see below). The paper assumes first the interesting case where the generation of the perturbed reward is a function of S*R into the perturbed reward space. But then the confusion matrix does *not* take into account the state, which is justified by "to let our presentation stay focused (...)". I believe these elements should at least be clearly discussed. Indeed, in that setting, the theorems given seem to be variations of existing results and it is difficult to understand what is the message behind the theorems. In addition, it is assumed that the confusion matrix C is known or estimated from data but it's not clear to me how this can be done in practice. In equation 4, how do you have access to the predicted true rewards? Additional comments: - The discount factor can be 0 but can not, in general, be equal to 1. So the equation in paragraph 2.1 "0 < γ ≤ 1" is wrong. - The paper mention that "an underwhelming amount of reinforcement learning studies have focused on the settings with perturbed and noisy rewards" but there are some works on the subject (e.g., https://arxiv.org/abs/1805.03359) and a discussion about the differences with the related work would be interesting.
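For concreteness, the binary-reward surrogate discussed in the reviews above can be written down and checked numerically in a few lines. This is an illustrative sketch only: the variable names and the flip-probability convention (e_+ is taken as the probability that r_+ is reported as r_-, and symmetrically for e_-) are assumptions inferred from the $\\hat{r}_\\pm$ expressions quoted in the Lemma 1 discussion, not the authors' code.

```python
import numpy as np

def surrogate_values(r_minus, r_plus, e_minus, e_plus):
    # Surrogate rewards for the two-level case; requires 1 - e_plus - e_minus > 0.
    denom = 1.0 - e_plus - e_minus
    r_hat_minus = ((1.0 - e_plus) * r_minus - e_minus * r_plus) / denom
    r_hat_plus = ((1.0 - e_minus) * r_plus - e_plus * r_minus) / denom
    return r_hat_minus, r_hat_plus

def check_unbiasedness(r_minus=-1.0, r_plus=1.0, e_minus=0.2, e_plus=0.3, n=200_000, seed=0):
    rng = np.random.default_rng(seed)
    r_hat_minus, r_hat_plus = surrogate_values(r_minus, r_plus, e_minus, e_plus)
    true_r = rng.choice([r_minus, r_plus], size=n)
    # Assumed convention: e_plus = P(observe r_- | true r_+), e_minus = P(observe r_+ | true r_-).
    flip = np.where(true_r == r_plus, rng.random(n) < e_plus, rng.random(n) < e_minus)
    observed = np.where(flip, np.where(true_r == r_plus, r_minus, r_plus), true_r)
    surrogate = np.where(observed == r_plus, r_hat_plus, r_hat_minus)
    for r in (r_minus, r_plus):
        # The conditional mean of the surrogate should match the true reward up to Monte Carlo error.
        print(r, surrogate[true_r == r].mean())

check_unbiasedness()
```

With, e.g., e_- = 0.2 and e_+ = 0.3, the two printed conditional means come out close to -1 and +1, which is exactly the unbiasedness property the estimator is designed to provide.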
This paper studies RL with perturbed rewards, where a technical challenge is to revert the perturbation process so that the right policy is learned. Some experiments are used to support the algorithm, which involves learning the reward perturbation process (the confusion matrix) using existing techniques from the supervised learning (and crowdsourcing) literature. Reviewers found the problem setting new and worth investigating, but had concerns over the scope/significance of this work, mostly about how the confusion matrix is learned. If this matrix is known, correcting reward perturbation is easy, and standard RL can be applied to the corrected rewards. Specifically, the work seems to be limited in two substantial ways, both related to how the confusion matrix is learned. * The reward function needs to be deterministic. * Majority voting requires the number of states to be finite. The significance of this work is therefore mostly limited to finite-state problems with deterministic rewards, which is quite restricted. As the authors pointed out, the paper uses discretization to turn a continuous state space into a finite one, which is how the experiment was done. But discretization is likely not robust or efficient in many high-dimensional problems. It should be noted that the setting studied here, together with a thorough treatment of an (even restricted) case, could make an interesting paper that inspires future work. However, the exact problem setting is not completely clear in the paper, and the limitations of the technical contributions are also somewhat unclear. The authors are strongly advised to revise the paper accordingly to make their contributions clearer. Minor questions: - In Lemma 2, what if C is not invertible? - The sampling oracle assumed in Def. 1 is not very practical, as opposed to what the paper claims. - There is more recent work at NIPS and STOC on attacking RL (including bandit) algorithms by manipulating the reward signals. The authors may want to cite and discuss.
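To make the two limitations listed above concrete, here is a rough reconstruction (mine, not the authors') of how a majority-vote estimate of the confusion matrix can be formed. It only makes sense when each state carries a single deterministic true reward and both the state space and the set of reward levels are finite, which is exactly the restriction pointed out above.

```python
from collections import Counter, defaultdict
import numpy as np

def estimate_confusion_matrix(observations, reward_levels):
    """observations: list of (state, observed_reward) pairs collected while acting.
    reward_levels: the finite set of possible true reward values.
    Assumes a deterministic reward per state and a finite state space."""
    per_state = defaultdict(list)
    for s, r_obs in observations:
        per_state[s].append(r_obs)

    # Majority vote: the most frequent observed reward in a state is taken as its true reward.
    inferred_true = {s: Counter(rs).most_common(1)[0][0] for s, rs in per_state.items()}

    k = len(reward_levels)
    idx = {r: i for i, r in enumerate(reward_levels)}
    counts = np.zeros((k, k))
    for s, r_obs in observations:
        counts[idx[inferred_true[s]], idx[r_obs]] += 1

    # Row-normalise counts into estimated flip probabilities C[i, j] ~ P(observe r_j | true r_i).
    row_sums = counts.sum(axis=1, keepdims=True)
    return counts / np.maximum(row_sums, 1)
```

If the reward were stochastic, the majority vote would no longer recover the true reward, and with continuous states the per-state vote is undefined without discretization.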
This paper studies structural fairness on graph contrastive learning (GCL). The study is motivated by the finding that GCL is fairer to low degree nodes than GCN. Based on that, the authors first present theoretical analysis on such structural fairness for GCL through intra-community concentration theorem and inter-community scatter theorem. Guided by the theoretical analysis, the authors propose GRADE by enriching the neighborhood of tail nodes and purifying the neighborhood of head nodes. Experimental results on real-world datasets demonstrate the effectiveness of GRADE. Strengths: - The paper is well-motivated through empirical experiments. - The design of the proposed method is inspired from theoretical analysis. - The proposed method is effective in mitigating structural unfairness as shown by experimental results. Weaknesses: Please see below. - More understanding on the performance for graphs with heterophily is needed. - There are too many notations in the paper. The authors may want to find a way to organize them so the readers won't need to check notations back and forth. - The authors should make sure the theorems are self-contained. - Significance test on the experimental results would be helpful to showcase the effectiveness of GRADE. <doc-sep>GCN is primarily beneficial to high-degree nodes but biased against low-degree nodes, which causes a performance bottleneck. As a promising paradigm in the graph domain, GCL integrates the power of GCN and contrastive learning, displaying SOTA performance in a variety of tasks. This paper investigates the question of whether will GCL present the same degree of bias as GCN. They surprisingly find out that a smaller performance gap exists between tail nodes and head nodes in GCL methods than that of GCN. They intuitively and theoretically analyze the reason for this interesting finding. Particularly, Intra-community Concentration Theorem and Inter-community Scatter Theorem prove that node representations learned by GCL conform to a clearer community structure, and establish the relation between graph augmentation and representation concentration. These analyses yield profound insights into solutions to this important degree-bias problem and imply that GCL is a promising direction. Therefore, they further propose a GRAph contrastive learning for DEgree bias (GRADE) to concentrate augmented representations. Specifically, they enlarge limited neighbors of tail nodes to contain more nodes within the same community and purify head nodes by removing neighbors from a different community. Extensive experiments on various benchmark datasets and several evaluation protocols validate the effectiveness of GRADE. The paper is well-written in general and their finding is exciting. Weaknesses: 1. How to get a small degree of bias from a clear community structure needs more explanations. Theorem 1 and 2 prove that GCL conforms to a clearer community structure via intra-community concentration and inter-community scatter, but its relationship with degree bias is not intuitive enough. 2. There is some confusion in the theoretical analysis. Why is the supremum in Definition 1 \\gamma(\\frac{B}{\\hat{d}_{\\min}^k})^{\\frac{1}{2}}? Based on this definition, how to prove that the proposed GRADE reduces this supremum? 3. There is a lack of significance test in Table 1. Despite the weaknesses mentioned above, I believe that this paper is worth publishing. 
They consider an important degree-bias problem in the graph domain, given that node degrees of real-world graphs often follow a long-tailed power-law distribution. They show an exciting finding that GCL is more stable w.r.t. degree bias, and give a preliminary explanation for the underlying mechanism. Although the improvement does not seem significant in Table 1, they may inspire more future research on this promising solution. A learnable augmentation is an interesting direction for improvement. <doc-sep>This paper investigates the great potential of graph contrastive learning to solve the degree-bias problem, and proposes a new graph augmentation for further improvement. The motivation of this paper is clear. They discover that node representations obtained by GCL methods are fairer with respect to degree bias than those learned by GCN, and explore the underlying cause of this phenomenon. Based on the theoretical analysis, they further propose a novel GCL model targeting the degree bias. Experimental results clearly show the merit of the proposed model, and the source code is attached (I have not run it though). Concerns: 1. This paper focuses on semi-supervised GCN to motivate its investigation. Do other semi-supervised GNNs suffer from the same severe degree-bias problem? In other words, I suspect the finding may be limited. 2. Some simplifications have been made in the theoretical analysis. For example, they only consider the topology augmentation. Does this affect the analysis results? 3. There is only one train-test split in Section 2, but two in Section 5. 4. The complexity of the proposed model. The authors have pointed out the limitations of their work. <doc-sep>Node degrees of real-world graphs follow a long-tailed distribution, but GCN exhibits a performance disparity between high-degree nodes and low-degree nodes, i.e., degree bias. This paper discovers an interesting phenomenon that graph contrastive learning methods already have a smaller degree bias. Based on this discovery, this paper theoretically analyzes the reason and proposes a tailored contrastive learning method, GRADE. Experiments validate the effectiveness of the proposed method. Strength: 1. The finding is interesting and may inspire a new paradigm for alleviating degree bias. 2. The paper is well-written and the conclusion is clear. Weakness: 1. The conclusion seems to be only for GCN. I wonder whether GAT [1] may exhibit a smaller degree bias, even smaller than graph contrastive learning methods. 2. From Figure 6 in Appendix A, the advantage of graph contrastive learning methods over GCN on the Photo dataset is not obvious. The numerical values of their slopes are close. 3. There is a small gap between the degree-bias problem and the theoretical analysis of a clear community structure. 4. The improvement of the proposed method in Table 1 does not seem statistically significant because of high variance. 5. There are some related works designed for degree bias, such as SL-DSGCN [2]. But these methods are not set as baselines in the experimental comparison. [1] Veličković P, Cucurull G, Casanova A, et al. Graph Attention Networks[C]//International Conference on Learning Representations. 2018. [2] Tang X, Yao H, Sun Y, et al. Investigating and mitigating degree-related biases in graph convolutional networks[C]//Proceedings of the 29th ACM International Conference on Information & Knowledge Management. 2020: 1435-1444. In addition to the limitations mentioned in the paper, the generalization of the conclusion should be taken into consideration.
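As a side note for readers, the degree-bias gap discussed throughout these reviews is typically quantified by bucketing test nodes by degree and comparing per-bucket accuracy; the snippet below is my own illustration of that evaluation, not code from the submission.

```python
import numpy as np

def accuracy_by_degree(pred, labels, degrees, threshold):
    """Split test nodes into tail (degree <= threshold) and head (degree > threshold)
    groups and report accuracy in each; a large gap indicates degree bias."""
    pred, labels, degrees = map(np.asarray, (pred, labels, degrees))
    tail = degrees <= threshold
    head = ~tail
    acc = lambda mask: float((pred[mask] == labels[mask]).mean()) if mask.any() else float("nan")
    return {"tail_acc": acc(tail), "head_acc": acc(head), "gap": acc(head) - acc(tail)}
```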
This paper identifies a fairness problem in graph contrastive learning (GCL), i.e., GCN often performs badly for low-degree nodes. The key to solving this problem is the observation that GCL can offer fairer representations for both low- and high-degree nodes. The authors also support their claims with theoretical analysis. All reviewers appreciate the contributions made by this submission. It is suggested that the authors simplify the notation and make the theorems self-contained in the final version.
This paper claims that in the context of multi-class image classification using neural networks, in the final classification layer, if we use randomly initialized parameters (with normalization) without any training, we can achieve better performance than if we train those parameters. This is an intriguing claim that can potentially have a very broad impact. The authors provide some motivations based on the error-similarity plots, but no theoretical backing. Without convincing theoretical support, such a claim can only be established through extensive and rigorous experimentation, and I find the experimental description in this paper falls short of delivering strong evidence. For example, how many runs were used to achieve the results in Tables 1-3? What are the confidence intervals on the results? Was any statistical significance test done? How were hyperparameters selected? What about the performance on the ImageNet dataset, which has more classes than the datasets reported in the paper? What distribution was used to initialize the random weights in the classification layer? Is the performance sensitive to the distribution? Is the performance sensitive to the complexity of the model used to learn the representation? How does this compare to other ways of improving multi-class classification such as softmax temperature annealing, label smoothing, adding regularization, etc.? Or, as a stretch, does this claim generalize to problems with categorical features? Details: 1. page 2, line 9: do you mean "maximizing the cosine-similarity"?<doc-sep>The paper explores in depth the specific classification layer of a standard supervised learning system. The core idea of the paper is to randomly initialize and then fix the classification layer weights and train the network, leading to improved discrimination. The writing is satisfactory and the paper develops the ideas sufficiently well to help any reader who is a beginner in this area. One of the major concerns regarding the work is the relatively limited amount of contribution given the context of the current venue. This is no doubt an interesting phenomenon; however, previous works investigating cosine-similarity losses have tested their approaches on much larger problems such as large-scale face recognition and full ImageNet. The paper currently derives its intuitions from object recognition problems, which have very different behavior than problems like face recognition, where the number of classes is large yet the number of samples per class is much lower. That said, given the limited scale of the experiments, the paper does offer a wider variety of results supporting its claims. Nonetheless, given the simplicity of the idea, the paper fails to push the envelope of results on any of these datasets. Lastly, the performance gains in Table 3 seem limited, given that only one run was performed for each dataset. <doc-sep>**Summary:** This paper introduces a new approach to learning a multi-class image classification model by fixing the weights of the classification layer. The authors propose to draw the class vectors randomly and keep them fixed during training instead of training them. They analyze this approach when a model is trained with a categorical cross-entropy or softmax-cosine loss. The proposed approach is tested on 4 datasets: STL, CIFAR-10, CIFAR-100, and TinyImageNet. **Reasons for score:** I do not think the technical contribution is strong enough for ICLR.
The idea is interesting but the empirical validation of the idea should be improved and some claims should be proved. **Pros:** - The idea of using fixed representations is interesting. It can help to reduce the training time. - The authors explain why cosine-similarity maximization models cannot converge to 0. **Cons:** The title looks very interesting: “Redesigning the Classification Layer by Randomizing the Class Representation Vectors”. But after reading the paper, it is only about multi-class image classification. There is no study about other types of data or the multi-label setting. The authors should use a title that is more accurate about the content of their paper. Overall, the structure of the paper should be improved. It is quite difficult to read because several sections are a mix of model contributions and experimental results. Maybe using subsections can help to separate the model contributions and experimental results. Also, some information is not in the right place and some sections should be reorganized. For example, the datasets and models are presented in section 4.1 but some results are presented in section 2. The authors should also add a related work section to clearly state the motivations and explain the differences from other approaches. The authors propose to randomly initialize the weights of the classification layer but they do not clearly explain how the weights are initialized. There are several standard approaches to initialize weights, like uniform, normal, Xavier uniform, Xavier normal, Kaiming uniform, and Kaiming normal. It would improve the paper if the authors compared these initialization mechanisms. Similarly, the authors should analyze the results for several runs to see how sensitive the fixed-weights approach is to the random initialization. I have a conceptual problem with fixing the bias. The bias is sampled, so it can have a large or small value. Let’s take an example with 2 classes. Class A can have a large bias (e.g. 0.5) but the other class B can have a small value (e.g. -0.5). It means that class B has a negative bias and will usually have lower scores than A just because there is a difference of 1 between these biases. I am not sure that it is a good idea and there is no motivation for that in the paper. The authors should analyze the bias initialization because it is important. It is important to show the variance when the model is evaluated on several runs (section 4). It can help to understand how sensitive the model is to the initialization. It is well known that SGD is sensitive to its hyper-parameters, in particular the learning rate. The model will not converge if the learning rate is too large or too small. The authors should explain how they chose the hyper-parameters. I also wonder how specific the results are to the optimizer. Are the conclusions of the analysis the same for other popular optimizers like Adam? “These observations can provide an explanation as to why non-fixed models with S = 1 fail to converge.” (page 6): For me it explains why the model cannot converge to 0, but it does not explain why the model fails to converge. These are two different problems. In the abstract and in some other parts of the paper the authors claim they improve the compactness of the model. But they never show it. They did not define how they measure the compactness of a model. They should clearly present the definition of compactness, and what approach they used to compute it.
Based on my knowledge, measuring the compactness of a model is not easy. The authors only show results on low-resolution datasets (less than 100×100). I wonder if the results can be generalized to larger-resolution datasets. For example, does it also work on ImageNet, which has more images, larger-resolution images, and more classes (1,000)? I also wonder if it works on other types of datasets, like fine-grained datasets (e.g. CUB-200, Stanford Cars, FGVC Aircraft). Also, how does it adapt to new domains like medical images and natural scenes? I am not convinced that ignoring the visual similarities between classes is a good idea. I think it is important to build spaces that encode some semantic structure. For example, I think it is important to encode that two bird species are more semantically similar than a bird and a car. It is not clear why the authors decided to focus on the cosine-similarity maximization models. They should motivate this decision more because these models are not so popular. The authors claimed that “the low range of the logits vector is not the cause preventing from cosine-similarity maximization models from converging” (page 5) but they did not show results to prove it. The authors should analyze the range of the logits. The current analysis does not allow us to understand if it is because of the range of values, the normalization of the weights, or a bad tuning of some hyper-parameters. **Minor comments:** The authors should give more information on how they generated Figures 3 and 5. <doc-sep>This paper proposes a classification layer obtained by randomizing the class representation vectors. The paper first analyses the class vector distributions under different training strategies, and then proposes randomized class vectors to improve the representation learning performance. The proposed model is further extended and analyzed for the fixed cosine-similarity maximization setting. The experiments demonstrate the effectiveness of the proposed method compared with the basic/vanilla baselines. Pros: The motivation of this paper is comprehensive. Some quantitative and visual experimental results introduce the motivation of the proposed model. The randomized weights also provide a novel view for solving more machine learning problems. This is a good point. Cons: My main concern is the experimental results. The experiments are mainly done for evaluating fixed and non-fixed models without any other state-of-the-art methods, and the exact performance is considerably low compared with other state-of-the-art methods. As a result, it is hard to confirm the effectiveness of the model. For example, 1) even though the visualization results are good (Figure 3), it does not mean the final performance is better; 2) the original model could be over-fitting, and a random and fixed weight layer could be considered a regularizer. There are some experiments that should be done: 1) Compare this method with relevant methods such as NormFace and ArcFace to prove the effectiveness of this approach. 2) Compare the exact performance on face-related datasets with other SOTA methods.
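For concreteness, the setup debated in these reviews can be summarized in a short PyTorch sketch of a fixed, randomly initialized cosine-similarity classification head. This is my own minimal illustration, not the authors' implementation; the initialization distribution and the scale S are precisely the under-specified choices questioned above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FixedCosineClassifier(nn.Module):
    """Classification head whose class vectors are drawn once at random and never trained."""
    def __init__(self, feat_dim, num_classes, scale=1.0):
        super().__init__()
        w = F.normalize(torch.randn(num_classes, feat_dim), dim=1)  # random unit class vectors
        self.register_buffer("weight", w)   # stored as a buffer, so it is excluded from the optimizer
        self.scale = scale                   # the temperature-like factor S

    def forward(self, features):
        features = F.normalize(features, dim=1)
        return self.scale * features @ self.weight.t()  # scaled cosine similarities as logits

# Usage: logits = FixedCosineClassifier(512, 100, scale=10.0)(backbone(x));
# only the backbone parameters are passed to the optimizer.
```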
The reviewers are in consensus that this paper is not ready for publication: cited concerns include simple (though interesting) ideas that need to be carefully analyzed empirically and contextualized (other similar studies exist), and a lack of convincing empirical evidence. The AC recommends Reject.
This paper deals with how to estimate the 'noise transition matrix' under a multi-label setup. As in the single-label setup, the authors attempt to develop an idea that does not use 'anchor points' for matrix estimation. They leverage the 'sample selection' method (i.e., GMM) for extra clean supervision and provide a detailed mathematical derivation for the matrix estimation. The experimental section includes the VOC and MS-COCO datasets; they inject synthetic label noise with two factors, controlling the flip ratio of '1'->'0' and '0'->'1'. Under the proposed synthetic noise, the method works well and outperforms other simple extensions of similar works (but developed under the single-label setup). ### Strengths 1. This paper is the first approach to estimating the noise transition matrix for handling noisy labels with multi-labels. 2. Leveraging mismatched label correlation is useful. 3. The method shows higher performance than other simple baselines. ### Weaknesses I agree that estimating the noise transition matrix helps make a statistically consistent classifier. However, I see some major weaknesses in this paper, as below: 1. **Assumption of Instance-independent Noise.** As the authors said, many previous studies make the same assumption for matrix estimation. But there have been numerous recent studies to overcome this in the same direction [1-3]. I agree that dealing with 'instance-dependent' label noise in a multi-label setup is very difficult. At least, the authors should mention what is more challenging in this setup compared to the single-label case, and why making this assumption is reasonable, with a more specific reason. Just saying 'the vast majority of current algorithms mainly focus' is not enough for a research paper. 2. **Noise Injection Protocol.** According to the paper, the authors assume class-dependent label noise and aim to estimate an accurate noise transition matrix under this assumption. However, the noise injection protocol in Line 217 looks like 'class-independent' label noise. Specifically, $\\rho$ just means the probability of '0' (or '1') being flipped to '1' (or '0') without any consideration of class pairs like <i, j>; only class j is associated with the flipping, and there is no connection between classes. Therefore, this is not very realistic and not class-dependent label noise. Did I get it wrong? 3. **Unrealistic Evaluation.** Related to the second weakness, the authors should have included at least one realistic noisy dataset. There is a benchmark dataset with noisy multi-label instances, the OpenImages database. The authors can find other, better datasets if possible. Without a realistic evaluation, I am not convinced by the robustness results in the paper since the injection protocol is very unrealistic. 4. **Class Imbalance Problem & Low mAP.** Unlike single-label classification, the biggest difference in the multi-label setup is 'class imbalance'. The authors also mentioned that this problem is very severe in Lines 44 & 102. However, there is no detailed mention of how to resolve this issue with the proposed idea throughout the paper. Next, when I look at the results on MS-COCO, the mAP value is too low compared with other multi-label papers using the same ResNet-50 pre-trained models. As an example, [4] shows that ResNet-50 with a GAP head achieves around 80 mAP on the MS-COCO dataset (refer to Figure 6). However, in this paper, the mAP was less than 70 with Standard at the easiest noise setup (0.0, 0.2); this is a 20% missing label scenario.
There are 20% missing labels, but I do not think 20% missing labels should cause a 10 mAP drop. Could you report the performance of your method and others under the (0, 0) setup? This is a very good reference to check whether your implementation is correct and whether your method is still comparable in the zero-noise setup. **[1]** Approximating instance-dependent noise via instance-confidence embedding, arXiv 2021 **[2]** A Second-Order Approach to Learning With Instance-Dependent Label Noise, CVPR 2021 **[3]** An Information Fusion Approach to Learning with Instance-Dependent Label Noise, ICLR 2022 **[4]** ML-Decoder: Scalable and Versatile Classification Head, arXiv 2021 Please see the weakness section above. In addition, the paper writing should be improved, particularly the theoretical section. Too many results are omitted and borrowed from [16, 24, 25, 31]. <doc-sep>The paper proposes a new estimator for the transition matrix in noisy multi-label learning. The main idea of the estimator is to utilize the clean label co-occurrence as a constraint to help the estimation of the transition matrix. Specifically, the authors derive a two-stage method to estimate the transition probabilities. At the first stage, the paper utilizes an existing technique to select clean labels for estimating the label occurrence. Although the estimated label occurrence is biased due to the selection bias, the authors claim that the selection bias cannot lead to a large estimation error. At the second stage, the paper obtains the label co-occurrence with frequency counting and uses the mismatch of label correlations to estimate the transition matrix. Strength 1. The motivation of the proposed method is reasonable. The information of label co-occurrence is useful for the transition matrix estimation. 2. The paper reports extensive experiments to validate the effectiveness of the proposed method. 3. The paper conducts comprehensive analyses for noisy multi-label learning. Weakness 1. In section 3.2, the authors claim that the label co-occurrence estimated by using selected clean labels is unbiased. This is based on the assumption that “given Y^j”, the features about class j are biased, while the features about another class i are unbiased. The assumption is unreasonable, since the features are also biased with respect to class i when class i and class j co-occur in an image. 2. The authors select a small number of clean labels for estimating the label co-occurrence. However, the estimated label co-occurrence may be imprecise due to the insufficient labels used for frequency counting, especially for positive labels. 3. There are many typos, such as page 4 line 168 Ys; Algorithm 1 line 4 Dt; page 2 line 48 “while “bird” and “sky” are always co-occurrence”. 4. The references are not cited properly. In the experiments, the citations for multi-label learning with missing labels and partial multi-label learning are missing. None <doc-sep> This paper presents a way of estimating the noise transition matrix for noisy multi-label learning, which makes use of label correlations through a bilinear decomposition based on frequency counting. It first theoretically studies the identifiability problem of estimating the noise transition matrix under the multi-label setting, which then leads to the development of the bilinear decomposition-based method for estimating the noise transition matrix. Experimental results on several image datasets show that the proposed method can better estimate the transition matrix.
Strengths Label-noise learning is an important and challenging problem not just in multi-class classification but also in the multi-label setting. This paper presents a bilinear decomposition-based approach based on simple frequency counting, with a clearly described algorithm. The proposed estimation method is well motivated with a set of theorems that study the identifiability problem in the multi-label learning setting, which seems to be thorough with proper proofs in the Appendix. The proofs are also easy to follow. Overall, the paper is written well, and most of the notation is clear and well-organized to guide the readers through the text. Weaknesses My major concern is around the performance of the proposed estimation method. In terms of estimation of the transition matrices, it outperforms the competitors in most of the settings, which is good. However, when it comes to classification performance, the proposed method is not a clear winner, particularly compared with methods that do not consider noisy labels, across the datasets. For instance, for varying levels of noise, it is hard to draw conclusions about the behaviour of the proposed method. Ablation studies showing how the proposed model benefits from label correlations and deals with the imbalance issue are missing. For example, there is a running parameter $\\tau$ used in Stage 1 to select the sample set $\\mathcal{D}^j_t$, which I believe will have an impact on the performance of the proposed method. The authors have addressed the limitations and broader impact in the appendix. <doc-sep>This paper discusses the estimation problem of the transition matrices in the noisy multi-label setting. The authors study the identifiability problem of the class-dependent transition matrix in noisy multi-label learning and propose a new estimator that exploits label correlations without requiring either anchor points or accurate fitting of the noisy class posterior, inspired by the identifiability results. Experimentally, the effectiveness of the proposed method is also illustrated. **Strengths:** 1. They utilize the mismatch of label correlations to identify the transition matrix without either anchor points or accurate fitting of the noisy class posterior in noisy multi-label learning. 2. The method is effective for the problem, and both theoretical analyses and empirical results support it. 3. Clear writing and structure. **Weaknesses:** 1. Some formulas, such as the one in line 192, should be written in a more simplified way. 2. The experimental results should be presented in different forms, not just tables. They address it adequately in the Appendix.
Estimating the noise transition matrix for handling noisy labels with multi-labels. Good experimental work illustrating the estimation of transition matrices. Reviewers liked the theory and the write-up. The paper has improved citations and writing. There was some discussion about the assumptions; the nuances of this should be addressed in the revised paper, for instance your comments about class imbalance. Regarding Reviewer KMFh's Q5: note that retrieval metrics (e.g., R@K) have been widely used in multi-label classification, although versions of F1 are probably more common. They give alternative looks at the errors. Regarding Reviewer n5cR's weakness 2: it would be nice to provide summary plots and/or win/loss tables and put some of the big tables in appendices.
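As an aside on the noise-injection protocol debated in the first review (the discussion around Line 217): the description there amounts to flipping every positive entry with one rate and every negative entry with another, independently of the class pair involved. The sketch below is only my reading of that description, with hypothetical rate names, not the paper's actual protocol.

```python
import numpy as np

def inject_label_noise(Y, rho_pos_to_neg, rho_neg_to_pos, seed=0):
    """Y: (n_samples, n_classes) binary multi-label matrix.
    Each 1 is flipped to 0 with prob rho_pos_to_neg and each 0 to 1 with prob
    rho_neg_to_pos, regardless of which class pair is involved, i.e. the same
    two scalars govern every class (hence 'class-independent' in the review)."""
    rng = np.random.default_rng(seed)
    flip_prob = np.where(Y == 1, rho_pos_to_neg, rho_neg_to_pos)
    flips = rng.random(Y.shape) < flip_prob
    return np.where(flips, 1 - Y, Y)
```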
This paper presents a novel algorithm for approximating posterior distributions via weighted subsets of the data. The proposed algorithm samples points uniformly at random and then computes the weights using a quasi-Newton method; crucially, it scales well with respect to the number of data points, both in terms of time and storage. The authors provide a rigorous theoretical analysis of their proposed algorithm and empirically evaluate it against commonly-used methods for posterior approximation. Overall, I found this paper to be an excellent read and believe it to be a strong candidate for acceptance. 1. The proposed method is not particularly creative but frankly, I think the simplicity of the ideas being presented here is a strength of this work. The sum of the contributions in my mind is sufficient to get this paper above the originality bar for acceptance. 2. The theoretical and empirical analyses are compelling, although I do have a few suggestions/concerns regarding the experiments that, if addressed, could raise my score of this submission even higher: - I would have liked to see some comparison against SVI, even if that meant scaling down the experiments to settings where SVI is computationally tractable. It would be very telling if QNC was competitive with or even outperformed SVI in small-data regimes; if not, then finding some sort of cost-accuracy trade-off between the two methods could be useful for practitioners. - I would have liked to see some sort of sensitivity analysis to the model hyperparameters, specifically S, K_tune (about which very little is said) and tau. 3. The manuscript is very well-written; I genuinely found this paper to be an enjoyable read and applaud the authors' effort to include intuition wherever possible. 4. I believe this work will have significant impact on a non-trivial percentage of the NeurIPS population: the proposed algorithm is simple and easy-to-implement and thus accessible to people who might want to use/expand upon it. Furthermore, the scalability makes this algorithm broadly applicable, in settings where previously-proposed methods might not be. Yes, I believe the authors' discussion of their work's limitations is sufficient; related to my question above, I would have found it helpful if the authors had provided some intuition about settings in which their theoretical results would not hold, much like they did for settings in which the assumptions do hold. <doc-sep>In the area of Bayesian coresets, this paper presents an algorithm that proposes: 1. to uniformly select $M$ data points that will be used to approximate the posterior distribution, and 2. to use a quasi-Newton method to refine the weights that will be assigned to each of the selected data points. For this quasi-Newton method, the authors propose using an estimated covariance matrix to approximate the objective’s Hessian. Finally, the authors provide theoretical justification for their algorithm via three different theorems. I think this is a very solid submission. The authors propose an algorithm, prove that their approximation may behave like the full posterior, and provide a convergence rate. A notable feature is that assumptions are deeply discussed in the body of the paper (Section 4) or the appendix. In general, I would say that it is a very precise paper. On the weak side, I found the proofs in the appendix quite hard to follow. Many inequalities were not evident at all and limited the possibility of doing a complete verification of the paper’s contributions [cf.
Questions below]. It is important to highlight that when I was able to derive the bounds, the proofs are correct, as in Thm. 4.3. The authors mention some of the paper’s limitations. Above I'm suggesting mentioning a few more. <doc-sep>This paper addresses the problem of approximating a Bayesian posterior with a large number of datapoints. They do so by selecting a sparse, weighted subset of datapoints (a coreset). Previous methods have addressed this problem, but the authors note that previous methods suffer from requiring some subset of: (1) potentially expensive loops over the whole dataset, (2) user input or parameter tuning, (3) a large number of potentially expensive MCMC simulations. The authors address (1) by noting that a coreset can be formed on a random subset of the data, (2) by proposing a quasi-Newton method that requires one MCMC simulation per iteration, and (3) just by virtue of providing a different algorithm without the same tuning / user input requirements. The authors prove conditions under which (1) still allows for exact or approximate recovery of the posterior. And they prove conditions under which their optimization algorithm (2) does not require many iterations. In experiments, the authors show that their method is both faster and more accurate than other coreset methods. **Strengths** - The writing in the paper is good; the material is pretty technical, but I found it relatively easy to follow. - The method proposed by the authors intuitively fixes clearly explained problems with previous work and has desirable theoretical properties (mostly -- see weaknesses below) and this intuition and theory actually seem to hold up in practice in non-exhaustive, but at least detailed and careful experiments. That's a pretty solid combination. Based on this, I vote for the paper to be accepted. **Weaknesses** 1. The method is claimed to be "black box" with "far less ... user input required" than other algorithms. I think this is overselling things a little bit. I see five different hyperparameters (at least five, since one of them is a sequence): the number of coreset points $M$, the step size sequence $\\gamma_k$, the Hessian regularization $\\tau$ (or, equivalently, the condition number threshold mentioned on line 142), the number of iterations to tune the stepsize for $K_{tune}$, and the size of the subsampling set $T$. Since there are no sensitivity studies for these hyperparameters, I don't think it's fair to suggest that little user input is required. Relatedly, some competing methods are criticized for needing an initial, user-specified posterior approximation $\\hat\\pi$ as an input, such as the Laplace approximation. I don't think this requires much user input; given access to an optimization library and automatic differentiation tools, a Laplace approximation can be constructed without user input. 2. Theorem 4.3, which studies the convergence rate of the authors' proposed algorithm (an important selling point of the work), essentially assumes the smallest eigenvalue of the covariance $G(w)$ is not too small. For example if $\\xi$, is zero, and the optimal coreset perfectly approximates the posterior, then Theorem 4.3 predicts no convergence of their algorithm. But it seems -- and maybe the authors can correct me on this -- that as $N \\to \\infty$ we should expect $\\xi \\to 0$. For example, say we expect the posterior to concentrate to a point mass as $N \\to \\infty$ and some $w^* \\in W$ perfectly approximates the posterior. 
If the posterior is contracting to a point mass, won't $G(w^*)$ approach the matrix of 0's? (I'm writing this as if the inf over $W$ is actually the inf over $W_N$; if it's the inf over $W$, then the inf contains arbitrarily large $w$'s even for $N = 1$). **Smaller things** - There are a lot of derivatives with respect to $w$ taken throughout the paper. But $w$ sits in a constrained set $W$. How are these derivatives defined when $w$ is on the boundary of $W$? - Should the left hand side of equations 6 and 7 be indexed $1:M$ and $1:M, 1:M$, respectively? - Line 113: it's not intuitively clear to me why the first term should dominate the expression here. - Algorithm 1: is input $K$ supposed to be $K_{tune}$? - "This is sublinear in $N$". I usually take "sublinear" to mean $o(N)$, whereas this runtime is $O(N)$. - In the equation right after line 188, I don't think $A$, $K$, $X$, and $X_1$ were ever defined. - In Theorem 4.3: could the inf over $w \\in W$ be replaced by the inf over $\\{ w \\in W : \\|w - w^*\\| \\leq \\|w_0 - w^*\\| \\}$? Since there's monotone improvement towards $w^*$, it seems like you'll never leave this ball, and so you can ignore any bad behavior outside of it. - In the experiments, it would be good to include the runtime of FULL to see how marginal / non-marginal the gains here are. - In the proof of Theorem 4.3, there are a couple places where $G(w) + \\lambda I$ is written. Should this $\\lambda$ be a $\\tau$? - It would be helpful if the equations in the appendix were contiguously numbered with those in the main text. Otherwise it's not immediately obvious which equation "Eq. (1)" refers to. - The letter $t$ shows up in a few places in the proofs. It seems like this is a number between 0 and 1, but I don't think this is ever stated. I think the authors have adequately discussed the social impact of their work. <doc-sep>This paper proposes a method for approximating a posterior distribution in the framework of Bayesian statistics. Specifically, this posterior distribution involves a sum over $N$ potentials $\\sum_{n=1}^N f_n(\\theta)$ which has to be evaluated each time an expectation wrt the posterior has to be calculated. It is proposed to replace this sum with another, weighted sum over $M \\ll N$ terms. This corresponds to selecting a subset of data, called a coreset. These points are sampled uniformly at random and their weights are computed by minimizing the KL divergence between the full posterior and the approximated posterior. A quasi-Newton optimization method is proposed to calculate these weights. Statistical guarantees are given on the size of the coreset needed to reach a small KL divergence. The convergence of the quasi-Newton method is also discussed. Numerical simulations illustrate the accuracy of the posterior approximation. Originality: This paper discusses interesting ideas. Being familiar with coresets in general but less with this specific application, I cannot completely assess the originality of this paper. Quality: From a global perspective, the main paper is well-written and the results are well presented. However, from the technical viewpoint, the interpretation of some mathematical claims is not straightforward, for instance, in the case of Thm 4.1. The numerical simulations are convincing. Clarity: The main paper is easy to follow. Concerning the technical details, I have several questions about the proofs of the theoretical results in the Supplementary Material.
These proofs are long (e.g., the proof of Thm 4.2 takes more than 8 pages). In principle, this is not an issue. However, these proofs deserve extra clarification in several places and are not easy to read; see the questions below. To guide the reader, it would also be beneficial if a brief description of the proof strategy were given before each proof. Significance: This method is certainly interesting and probably useful in the context of Bayesian inference. The limitations of this work are summarized in the checklist and are briefly discussed in section 5.4. I do not see any potential negative societal impact.
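To make the optimization step under discussion easier to picture, here is a schematic reconstruction of a single quasi-Newton weight update as I understand it from the reviews: the gradient of the KL objective is estimated from MCMC draws, and the Hessian is replaced by the estimated covariance of the coreset potentials plus the regularization $\\tau I$. The names, sign conventions, and the subsampled full-data potential are my assumptions, not the paper's notation.

```python
import numpy as np

def qn_coreset_step(w, F, h, gamma, tau):
    """One schematic weight update. Assumed notation:
       w     : (M,) current nonnegative coreset weights
       F     : (S, M) potentials f_m evaluated at S MCMC draws from the coreset posterior
       h     : (S,) estimate of the full-data potential sum at the same draws
               (e.g. from a size-T subsample scaled by N/T)
       gamma : step size; tau : Hessian regularisation."""
    Fc = F - F.mean(axis=0)                  # centre the potentials across draws
    r = F @ w - h                            # coreset-minus-full potential at each draw
    rc = r - r.mean()
    grad = Fc.T @ rc / (F.shape[0] - 1)      # ~ Cov[f_m, f_w - f_full], a KL gradient estimate
    G = Fc.T @ Fc / (F.shape[0] - 1)         # ~ Cov[f], the covariance Hessian surrogate
    step = np.linalg.solve(G + tau * np.eye(len(w)), grad)
    return np.maximum(w - gamma * step, 0.0) # project back onto the nonnegative orthant
```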
The paper has generated unanimous enthusiasm and we are happy to recommend acceptance. Please make sure that all comments in the reviews/discussion threads are taken into account in the final version of the manuscript.
This paper studies self-supervised learning. A semantic shift problem in the aggressive augmentations of self-supervised learning is considered. This paper borrows the memorization effect from the literature on tackling noisy labels, and gradually reduces the weights of aggressively augmented pairs. Extensive experiments verify the effectiveness of the proposed method. Strengths: 1. Self-supervised learning is a practical and very important research topic in the community. 2. The motivation of this paper is clear. The semantic shift problem is common in multiple self-supervised learning methods and needs to be addressed. 3. Experimental results on benchmark datasets and real-world datasets show the effectiveness of the proposed method. Besides, ablation studies are provided to better understand the proposed method. Weaknesses: 1. The writing and organization need to be improved to enhance this paper. 2. The intuition behind the advantages of the proposed method should be supplemented. The weaknesses of this paper are detailed below. See above weaknesses. <doc-sep>The data augmentation transformations used in some self-supervised learning models for vision can generate pairs that are not semantically consistent. For example, cropping or blurring too aggressively can produce an image that is not identifiable as its class label. This work addresses this shortcoming by decreasing the degree to which a model relies on aggressive augmentations later on in training. The authors show doing so improves Top-1 linear classification accuracy for ImageNet as well as object detection/segmentation performance on COCO over other self-supervised baselines. The authors highlight a problem arising when data augmentations are too aggressive, producing samples that are not semantically meaningful ("noisy samples"). The authors propose a solution to address this problem by reducing the weight given to aggressive augmentations later on in training. The authors motivate doing so by citing recent work illustrating that deep neural networks overfit to noisier samples later on in training. The authors define “weak” augmentation to be random crop and horizontal flip, while “aggressive” augmentations additionally include other color-based augmentations such as blurring and color jitter. This delineation is 1) at odds with the original motivation, since even “weak” augmentations can produce crops that are not semantically meaningful, thus violating the premise that only “aggressive” augmentations produce noisy samples, and 2) at odds with more recent SSL methods (Masked Autoencoders, MAE) that only use “weak” augmentations. If we instead want “aggressive” to capture the extent to which the augmented sample is noisy, then a more precise approach is to control the magnitude of the cropping, the degree to which Gaussian noise is added, etc. Such a definition of an augmentation’s “aggressive” extent would more directly validate the authors’ claims. I find the method an unnecessarily complicated (and memory-intensive) approach to achieve the stated goal of controlling the extent to which a model relies on aggressively augmented samples. Rather than introduce two asymmetric networks with twice the memory footprint, did the authors attempt to adjust the extent to which the samples fed into the original models were “aggressively” or “weakly” augmented? Or other simpler variants before settling on this two-asymmetric-network approach? At the very least, I'd like to see stronger motivation for the proposed method's implementation.
Overall the paper is well-organized and clearly written. I found the sentences motivating the impact of "noisy samples" on the online network around line 144 confusing. I think the wording here can be improved. Algorithm 1 would be easier to follow if it were self-contained—it doesn't include the loss. The portion of the diagram in Figure 2 on the right with numerous arrows and boxes is not easy to follow. Overall I find the experiments well-motivated and convincing (putting aside the "weak" versus "strong" augmentation definition). Appropriate baselines are used, the authors assess sensitivity to beta, and performance is shown across several tasks. However, several of the experimental results (Tables 3, 4, 5) are used as evidence of MSR's superior performance, but do not include error bounds. The authors conclude MSR is a novel method to "properly utilize aggressive augmentations." If the motivation is that "aggressive augmentations" produce semantically noisy samples, then the method's success instead comes from decreasing models' reliance on aggressive augmentations. The claim that it "properly utilizes" aggressive augmentations or "neutralizes the semantic shift problem" strikes me as an exaggeration. I suggest the authors appropriately qualify these claims. The paper's title suggests the proposed method directly improves models' robustness. Instead, the approach balances the extent to which models rely on "aggressively" augmented samples during training. There are no experiments directly measuring robustness to augmentations. I suggest the authors amend the title to more directly reflect the contribution. Overall, I find the problem of ensuring data augmentation produces semantically meaningful pairs for self-supervised learning important. The proposed method, which attempts to avoid overfitting to noisy samples, shows some performance gains over existing baselines. On the other hand, I find the "aggressive" versus "weak" augmentation definition used here to miss the mark. I also find the proposed method to be needlessly complex given the stated objective. Minor: - inconsistent notation: v and v' describe two augmented samples but their representations are designated using subscripts z1 and z2. It's easier for readers to follow if the same designation (either subscript or prime) is used for pairs throughout. - line 96 typo: "by utilizes" - line 157 typo: "To further against" Yes <doc-sep>The authors find that in most self-supervised learning methods, when applying aggressive augmentations to further improve the diversity of training pairs, there would exist a severe semantic shift problem, thus degrading the transfer performance. To address this problem, the authors propose a new SSL paradigm, which counteracts the impact of semantic shift by balancing the roles of weakly and aggressively augmented pairs. As training goes on, the authors gradually reduce the weights of aggressively augmented pairs. Experiments have been done on small datasets (CIFAR10/100), medium datasets (STL-10 and Tiny ImageNet), and large datasets (ImageNet-100 and ImageNet-1K), which have validated the effectiveness of the proposed method. The motivation of the proposed method is strong and clear. The authors fully consider the semantic shift problem and thus propose to minimize the negative impacts of noisy positive pairs from aggressive augmentations while still taking advantage of aggressive augmentations. This is achieved by using different weights during training. But the weakness is that I don't think the novelty is strong enough.
Compared with the previous work BYOL, the only difference I can find is the addition of an extra aggressive-augmentation stream. The main architecture stays almost the same. As for the re-weighting strategy, the authors achieve it with Eq. (6). It is not flexible enough, since it doesn't consider the quality of each augmentation pair. In Figure 2, is there a mistake? I think in row three, v_m and v_a should be exchanged. yes <doc-sep>This paper proposes a new learning strategy that assigns different weights to aggressively augmented data pairs at different training stages, to deal with the semantic shift problem caused by aggressive data augmentation methods for SSL. --Strengths: 1. This work contains extensive experiments to verify the effectiveness of the proposed strategy. --Weaknesses: 1. Although this work has a good motivation, the proposed method is quite incremental, which does not meet the standard of NeurIPS. For example, this work claims that aggressive augmentations are retained, which is different from ReSSL [a] (lines 39-42). However, ReSSL does not give up the aggressive augmentations. ReSSL also feeds the weakly augmented images to the teacher model (similar to the target network), and inputs the aggressively augmented images to the student model (similar to the online network), which is exactly the same as MSR. The proposed learning strategy is more like a trick, which cannot support a paper to be accepted by NeurIPS. 2. According to Figure 3, the proposed strategy seems not to work very well on some datasets, as the increments brought by the decayed $\\beta$ seem to be negligible. 3. As for the experiments, it would be great to show some results compared to ReSSL [a] and W-MSE 4 [b] with the same batch size and training epochs. Concerning the second weakness, it is not quite clear where the improvement comes from. Does it come from the different training settings (the batch size usually has a large impact on performance in SSL)? Overall, since I have some concerns about the effectiveness of the proposed strategy, the missing comparisons (I think it is hard to supplement them during the rebuttal stage), as well as the limited novelty, I vote for rejection of this paper, but I may increase my score after the discussion stage, especially if I have misunderstood something. **Ref** [a] Mingkai Zheng, Shan You, Fei Wang, Chen Qian, Changshui Zhang, Xiaogang Wang, and Chang Xu. ReSSL: Relational self-supervised learning with weak augmentation. 2021. [b] Aleksandr Ermolov, Aliaksandr Siarohin, Enver Sangineto, and Nicu Sebe. Whitening for self-supervised representation learning. In ICML, pages 3015–3024, 2021. The novelty is limited, and some comparisons are missing. Please refer to the comments in the Weaknesses for more details.
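For clarity, the decayed $\\beta$ re-weighting discussed in point 2 above is, as I understand it, of roughly the following form; the exact schedule and normalization in Eq. (6) of the paper may differ, and the cosine decay here is only my own illustration.

```python
# Rough reconstruction of the decayed re-weighting between weakly and aggressively
# augmented pairs; the exact form of Eq. (6) in the paper may differ from this sketch.
import math

def aggressive_weight(step: int, total_steps: int, beta0: float = 1.0) -> float:
    """Weight on the aggressively augmented pair, decayed over training (cosine decay)."""
    progress = step / max(total_steps, 1)
    return beta0 * 0.5 * (1.0 + math.cos(math.pi * progress))

def total_loss(loss_weak, loss_aggr, step, total_steps):
    beta = aggressive_weight(step, total_steps)
    # early in training both pairs contribute; later the (possibly noisy)
    # aggressively augmented pair is down-weighted
    return loss_weak + beta * loss_aggr
```

If this reading is correct, a per-pair weight that also reflects the quality of each augmented pair, rather than a single global schedule, might be more flexible.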
This paper aims to improve SSL pretraining by adjusting the strength of augmentations applied at different points in training, providing a large number of aggressive augmentations early in training with this rate decreasing over time to prevent the model from overfitting to noisy examples. Using this approach, the authors demonstrate substantial improvements over prior methods. All reviewers recognized the soundness of the motivation and were generally convinced by the experiments, though there were some concerns about whether the approach is too incremental since it is relatively simple. I strongly agree with the authors that simplicity is not a downside of an approach, but rather a benefit, and the fact that the approach works with such a small modification makes it more likely that this result is not caused by an obscure mix of hyperparameters. I also note that the authors engaged extensively with the reviewers, providing a number of additional experiments comparing to other approaches and providing further tests of the impact of the hyperparameter they introduce. I think this is a worthwhile paper which will have impact going forward, and I recommend acceptance.
This work aims to study the characteristics of the class manifolds provided by a multiclass classifier. In particular, the main goal is to determine the effective dimensionality of these manifolds for the case of neural networks that output normalized probabilities. To achieve this, the authors introduce the cutting plane method that, under some assumptions, allows them to relate the dimensionality of a random affine hyperplane to the effective dimensionality of the manifold for each class. The authors support their main findings with extensive experimentation. The theoretical foundation behind the paper seems to be sound; however, as a disclaimer, this research area is not close to the main expertise of this reviewer. In terms of writing, the exposition is clear; however, my main doubt concerns the process of inferring the manifold dimensionality for a whole class using a procedure that depends on each specific instance. It would be good to clarify this point and also the computational complexity involved in the process. As a recommendation, it would be great to test the method using an artificial case with known ground truth about the effective dimensionality of the class subspaces, so it would be possible to directly validate the findings of the cutting plane method. The current analysis focuses on the case that the cutting plane dimension leads to 50% of the target class (d_50%), which is not the only choice, especially considering that a highly relevant goal is to quantify the effect of manifold dimensionality on generalization. The use of metrics or scores related to generalization would lead to valuable conclusions. In summary, this is an interesting area of research that sheds light on the process used by learning models to transform from input space to class-probability space. In particular, the potential relation between manifold dimensionality and generalization is worth pursuing. This work will be of interest to ICLR and I recommend it be accepted as a poster contribution. <doc-sep>This paper proposes to understand the behavior of deep networks for classification tasks by studying the dimensionality of the "class manifolds", i.e., regions in the data space that are mapped to the same one-hot output. To measure such dimensionality, the paper proposes a method that is based on intersecting the class manifold with a random affine subspace of varying dimension. The idea is that when an intersection occurs, the dimension of the random affine subspace is roughly the codimension of the class manifold. The paper then studies how different factors in data, architecture, training, etc., affect such dimensionality. Strength: The development of the paper is solid in the sense that it studies the effect of a wide range of design choices (see the list 1)-9) in the paper's abstract). Weakness: The whole paper is based on the assumption that each "class manifold" is a low-dimensional manifold. However, the paper does not provide a justification for this assumption, nor do I think it is a valid assumption. The manifold assumption is a fundamental assumption for machine learning and data science, and that assumption is made for *data*, rather than *classes learned by neural networks*.
One intuitive justification for that assumption in the case of data is the following: if I take a data point (say an image) and perturb it in a direction that is "meaningful", say by a translation, rotation, or distortion, then the class label for that data point remains the same; but if I move in another direction, then the image is likely no longer meaningful and the class label changes. However, this same line of argument does not seem to hold for the class manifolds learned by neural networks: if I consider a random input to a network, then because the decision boundary is piecewise linear, with high probability one can move in any direction and maintain the class label. If the low-dimensionality assumption is not valid, then the premise of the entire paper becomes problematic: the intuition given in Fig. 1 is no longer valid, and the theory in Sec. 3 is no longer meaningful. Even if the low-dimensionality assumption is true to some degree, the proposed dimension estimation is still very much problematic: both the intuition in Fig. 1 and the theory in Sec. 3 are based on assuming that such a low-dimensional manifold is (close to being) linear. But the ability of deep networks to perform complicated nonlinear mappings, which is the key to their great success, likely makes such low-dimensional manifolds highly nonlinear. Therefore, a discussion of how such nonlinearity affects the proposed dimension estimation is quite necessary but is missing. Additional comments: - How is X_0 in the cutting plane method generated? It is said in the paper that it is generated at random, so perhaps that means an i.i.d. Gaussian vector, but presumably the variance of the Gaussian distribution could have an impact on the result, as it captures how far the affine subspace is from the origin. - Sec. 3.1, which contains the main theoretical result of the paper, is presented in vague terms (e.g., what is "highly likely", under what statistical model?). Perhaps it is better to make it precise by writing the result as a theorem. **Update after rebuttal** I would like to thank the authors for the detailed rebuttal, but my feeling now is that the rebuttal makes things even more complicated and sometimes conflicts with itself. I believe the paper needs some careful rewriting and updates to clarify its points and assumptions. Concretely, the paper is built upon the premise that each class manifold is a submanifold with dimension lower than that of the ambient space. I pointed out in my review that this premise may not hold at all, and therefore the paper is fundamentally problematic. Then R2, in one of his/her responses, raised the same question, perhaps after reading my review. I then see a difference between the responses to R2's comments and to mine. For R2, the response is "The intrinsic dimensionality of class manifolds is absolutely the full dimension of ambient input space", which effectively acknowledges that my critique is valid. However, the response to me is "This is very easily refuted by the ubiquitous and universal existence of adversarial examples". I don't really see why there is a discrepancy here. Besides, the argument used to refute mine, namely that the existence of adversarial examples implies class manifolds are lower dimensional than the ambient space, is apparently wrong and can be easily refuted.
By and large, the existence of adversarial examples only means that the decision regions are thin at every location; one can easily construct a fine mesh of the data space that achieves this. <doc-sep>This paper proposes an empirical method for estimating the dimensionality of a class manifold, defined here as the collection of points that the last (softmax) layer of a neural network maps to a membership probability vector associated with a specific class. Their approach involves the generation of a randomly oriented 'cutting plane' of dimension $d$, passing through a randomly generated source point. The authors note that if the sum of the dimensions of the class manifold and the cutting plane exceeds the full spatial dimension, the chance of an intersection of the two is high. Conversely, if the sum falls short of the full dimension, the chance of an intersection is very low. Using a gradient descent technique starting at the source point, a location within the cutting plane is sought that minimizes the cross entropy loss between it and a target class membership vector representing the class manifold. A low minimum loss would indicate a likely intersection ($d$ too high), whereas a high loss would indicate a likely miss ($d$ too low). Although the dimension of the class manifold is in general unknown, the process is iterated for many choices of the initial cutting plane, and many choices of the cutting plane dimension $d$. The value of $d$ achieving the median loss value is chosen as the estimate of dimensionality. In their experimental validation of the approach, the authors examine the effects on the estimated manifold dimensionality due to various factors, including data noise, label noise, and training set size. Interestingly, their method also allows them to produce an estimate of the class boundary dimensionality by specifying the average of two class one-hot vectors as the target probability vector. Pros: ----- 1) This is an interesting approach to the problem of dimensional modeling. Estimation of dimensionality using cutting planes, without an explicit parameterization of the subspace, is an attractive idea that (if performed efficiently and reliably) could be particularly impactful. A strong point of the model is that it considers as its 'class manifold' the regions of deep latent space that have sufficient probabilities of being in one or more classes of interest. The method thus supports assessments of dimensionality in border regions in an elegant way. 2) The optimization procedure proposed does seem practical enough - each optimization run is efficient, and the number of runs can be tailored to an execution budget. 3) The paper is generally well organized and presented. The descriptions are clear and accessible. Cons: ----- 1) As acknowledged by the authors in the caption of Fig 2, the dimensional estimates seem much higher than the typical estimates of intrinsic dimensionality as determined by local estimators (e.g. LID / Levina & Bickel, etc). This discrepancy could be due to a number of factors that are not taken into account: curvature of the class manifold, its boundedness, its disconnectedness, etc. All these factors could cause the gradient descent to terminate at high cross-entropy loss values, which would drive the estimate of dimensionality too high (even approaching the representational dimension of the latent space?). 2) Following from 1), some of the conclusions reached from the experimental analysis are not fully convincing.
For example, in 4.5 an inverse relationship is reported between the training set size and the 'effective' dimension. However, non-uniformity of the distribution within the manifold could lead to configurations that trap solutions at unrealistically high values of $d$. In 4.6, adding Gaussian noise to each pixel is a full-dimensional transformation that is known to strongly bias the local intrinsic dimensionality upward, to unrealistically high values. 3) Again following from 1), the authors have not situated their work with respect to the recent literature on the use of intrinsic dimensional estimation in deep learning settings. For example, local intrinsic dimensionality has been proposed as a characterization of learning performance (Ma et al, ICML 2018), adversarial perturbation (Ma et al, ICLR 2018, Amsaleg et al, WIFS 2017), and in GAN-based image infilling (Li et al, IJCAI 2019). How does their estimator compare in practice to other estimators already in use? Other comments / questions: --------------------------- 1) The paper should be more self-contained in places. For example, Equation 6 is referred to in the main paper, but appears only in the appendix. 2) Like distance distributions themselves, loss functions may exhibit a bias due to the local intrinsic dimensionality. Discuss? <doc-sep>The authors propose a cutting plane method, inspired by intersection theory from algebraic geometry, to analyse the properties of neural networks. Specifically, the method allows one to estimate the dimensionality of the class manifolds induced by the networks. An empirical analysis shows that the estimated dimensionality correlates with the generalisation performance and the robustness of neural networks, thus opening the door to potentially new perspectives on deep learning theory. The paper is well structured and clearly written. Also, the authors are planning to release the code to reproduce their experiments. Last but not least in terms of importance, the paper provides an original and novel method for the analysis of the properties of neural networks. In fact, while previous works have used a similar strategy to estimate the intrinsic dimensionality of the loss landscape in the weight space (see cited works in the paper), this work focuses on the analysis of neural networks in the input space. In general, there are no major issues with the paper. However, there are some points that need to be discussed, which can help (i) to identify more precisely the conditions for the validity of their results and (ii) to relate the work to other existing work on the analysis of deep learning using spline theory. Please see below for more detailed comments and also suggestions to increase the quality of the paper. Based on these considerations, I recommend acceptance of the paper with an initial score of 6. I'm willing to considerably increase the score and reward the authors if they can address my questions. DETAILED COMMENTS Please let me give two simple pedagogical examples to analyse the behaviour of the proposed method and to possibly seed further thought. FIRST EXAMPLE Consider a two-dimensional real space, where the class-manifold is a line. Then, generate a second line by randomly sampling its intercept and slope coefficient and refer to these as the line parameters.
Now if you consider the parameter space of this second line, you have two regions: one of zero measure, which contains all the cases where the two lines are parallel, and the remaining one, which contains all the intersecting cases. This means that the two lines almost always intersect each other. Consequently, the estimated dimension of the class-manifold is correct. SECOND EXAMPLE Consider the same example as before, but now the class-manifold is a parabola. Similarly to the previous example, there are two regions, namely the ones defined by the intersecting and the non-intersecting cases between the parabola and the randomly generated line, but unlike the previous example, both regions have non-zero measure. Therefore, we may end up generating lines that do not intersect the class manifold. This would result in considering a higher dimensional object (in this case a plane) to guarantee the intersection with the parabola. Consequently, we would underestimate the dimension of the class manifold. This phenomenon can be even more pronounced when moving to higher dimensions. Therefore, I agree with the authors that the whole analysis is exact when considering hyperplanes. But what are its limitations when moving to the nonlinear regime? How can we guarantee that the estimated dimension is accurate? It seems that the proposed method provides a lower bound on the true dimensionality of the class manifolds. Is that correct? If so, when can this bound be tight? Also, there is a recent line of work trying to analyse the behaviour of deep learning in terms of decision boundaries and their curvatures from the perspective of spline theory [1-2]. Could you please discuss this and add the explanation to the paper? SUGGESTIONS TO INCREASE THE QUALITY OF THE PAPER I proceed following the order of the sections. Section 2.2. Is it possible to provide the full details of the algorithm to estimate the cutting plane dimension, like an algorithmic table? Also, where is Equation 6 (in the appendix)? What are the mean and covariance parameters? Section 3.1. Can you be more precise when you use the terms 'highly likely' and 'generically' and discuss what happens in the nonlinear regime? Section 3.1. Should D-(d_A+d_B) be 2D-(d_A+d_B)? Section 3.1. Can you rephrase the sentence "For the subspaces A and B intersecting transversally...satisfying the upper bound and therefore leading to Equation 2" and make it clearer? Specifically, which upper bound, and how does this lead to Equation 2? Section 3.2. Can you consider removing it, as the purpose is not clear and it does not seem to introduce any additional information? Section 3.3. Can you give an example to explain the difference between dimension and effective dimension? Section 4.3. Is there any concrete insight from the analysis of the class-boundary and multi-way class-boundary manifolds? I would appreciate more discussion of that. Section 4.6. Is there any specific reason why you chose to show only classes 0 and 1 in Figure 8? Can you provide the figures for the other classes as well, maybe in the appendix? Section 4.7. Similarly to Figure 8, can you add the other cases for Figure 9? Also for this subsection, which initialisation did you use in the experiments? This is important information that could be of interest to those studying initialisation strategies for deep learning. Section 4.8. Do you have any experiments with ensembles of classifiers with different architectures? If so, do the same findings hold?
Might it be possible that you are underestimating the dimension of the class manifold in the set of experiments shown in the paper? Section 4.8. Can you provide a plot of the generalisation performance versus the ensemble size? Or, better, correlate the cutting plane dimension with the generalisation performance for the different ensemble sizes? [1] Balestriero and Baraniuk. A Spline Theory of Deep Learning. ICML 2018 [2] Balestriero et al. The Geometry of Deep Networks: Power Diagram Subdivision. NeurIPS 2019 ######################### UPDATE The discussion phase has highlighted several major issues: 1. There has been a significant conceptual shift in the problem definition (i.e. from estimating the intrinsic dimensionality of the class manifold to quantifying its geometrical properties). 2. I'm not convinced about the validity of some arguments/statements used by the authors to support point 1. For example, the statement "The intrinsic dimensionality of class manifolds is absolutely the full dimension of ambient input space, but this is a completely uninteresting observation" is not fully supported, and I'm not even sure that it is true. 3. Furthermore, the paper is still in its original form. It has been difficult to keep track of the modifications that the authors should make. To conclude, the article is not yet ready for publication, and I therefore recommend its rejection. I encourage the authors to further investigate the topic and carefully consider whether the statements provided in the discussion phase are true.
This paper aims to study the dimension of the class manifolds (CM), which are defined as the regions classified as particular classes by a neural network. The authors develop a method to measure the dimension of a CM by generating random linear subspaces and computing the intersection of the linear subspace with the CM. All reviewers agree that this is an interesting problem and worth studying. However, there are major concerns. One question raised by several reviewers is that the goal of this paper is to analyze the dimension of the region that has the same output of the neural network, while the method and analysis are for a single datum. It is not clear if the obtained result is what the paper really aimed at. Another issue is that the experimental results differ from those of local analyses: the dimension estimated by the method in this paper is much higher. Based on these concerns, I am not able to recommend acceptance. But the authors are highly encouraged to continue this research.
The authors present ISAGrasp, a method for performing dexterous grasps on arbitrary objects using a pre-trained network from a large dataset augmented by a generative model which outputs corresponding grasp points on deformed meshes. This is achieved by combining labeled human grasp demonstrations with pose retargeting, to get corresponding robot (Allegro) grasps. To generate novel objects and grasps, they use DIF-Net, building upon prior work by Deng et al. '21, which returns point-wise deformations of the objects used in the human demonstrations. They compute new grasp locations that minimize the total offset to the original grasp over a local patch of points. Experimentally, their method outperforms RL, heuristic, and grasping baselines. Strengths - Their grasping policy generalizes to novel object instances, provided that an accurate point cloud of the object can be generated from the scene. - Their method extensively compares to different grasping approaches with a wide range of objects, in addition to ablating the input features used by the grasping policy. These comparisons show strong benefits to shape augmentation. Weaknesses - By using an open-loop policy, their method is unable to react to dynamic/cluttered environments or recover from failed grasps. - Their simulation does not account for kinematic infeasibility with the environment, both for the arm and the hand. - This method does not consider functional/dexterous grasps of objects, and therefore does not leverage the dexterity of the hand. - Their method uses demonstrations, but does not leverage an RL approach which is designed for use with demonstrations. <doc-sep>This paper studies the problem of dexterous grasping. It proposes to use correspondence-aware implicit deformation networks to propagate a small number of human grasp demonstrations to grasping configurations on a diverse set of deformed objects. Trained with the generated objects and corresponding grasp configurations, the grasping policy can better generalize to objects unseen in the human demonstrations. Strengths: 1. It's a smart idea to augment human grasping demonstrations with correspondence-aware shape deformation networks. It simultaneously creates novel shapes and the corresponding grasp configurations. 2. Simulation experiments demonstrate the efficacy of the proposed shape augmentation in the comparison between the last two rows of Table 1. 3. This paper conducted real-world experiments of dexterous grasping, which makes the results much stronger. Weaknesses: 1. Table 1: The comparison with baselines seems not very fair. Random, Heuristic and GraspIt! should be compared with transformed grasps Gd when performing the rejection sampling (133). Collect data with rejection sampling on augmented shapes using Random, Heuristic and GraspIt! with the same budget and compare the results. 2. 248: what does refinement rate mean? Minors: 220: baseelines -> baselines Fig 8 caption: missing one '(' before "Middle Panel)", near the end of the second line <doc-sep>The paper proposes a system, Implicit Shape Augmented Grasping (ISAGrasp), to augment limited human demonstrations of dexterous grasps. The implicit shape augmentation is built on DIF-Net [11], a correspondence-aware implicit generative model. Novel shapes are generated via deformation, and the resulting dense correspondences help transfer the human demonstrations to novel objects. The transferred grasps are refined via simulation, and a grasp prediction model is trained on the augmented dataset by supervised learning.
Strengths: * Human demonstrations of dexterous grasping are expensive to collect and always require a specialized setup. The paper proposes an interesting way to extrapolate the limited demonstrations to a large dataset of novel objects. * The correspondence-aware generative model is used in a reasonable way, and the effectiveness of the method is shown by experiments. * The system transfers well to the real world and achieves decent results. Weakness: * Analysis of the results is insufficient. The authors only show that "method A gets a better score than method B", but don't really explain why. * Why do the baselines, such as Heuristic and GraspIt, perform so poorly? Are there any specific failure modes? * Why doesn't data augmentation improve the performance on RescaledYCB? Is it because the distribution of augmented objects is different from the YCB dataset, or is it something else? * I can understand that augmentation can make the grasping policy generalize better to novel objects, but what are the sources of the remaining performance gap (~30% on 3 datasets)? * The limitation part is not very satisfactory. I want to see more limitations discussed on the algorithmic side, for example, what are the failure modes of the method and how can we further improve the success rate. Questions: * I like the second term (relative poses) of the retargeting optimization. My question is why you choose the center of the object as the anchor point, instead of the closest point on the object surface. Isn't the latter more robust to shape variation and a better model of the contact? <doc-sep>This paper proposes a data-augmentation framework to learn dexterous grasp poses from point cloud observations. First, the DexYCB dataset, which consists of human hand-object interaction trajectories, is used to provide initial demonstrations. Second, hand motion retargeting is utilized to convert the human hand motion to robot hand joint angles. Third, DIF-Net, an implicit neural network that can keep dense correspondence while deforming an implicit shape representation, is used to deform the original object mesh with sampled latent Gaussian vectors. With the correspondence after deformation, the grasp pose can also be modified according to the object. Then a rejection-sampling-based grasp refinement step is performed inside the PyBullet simulator to eliminate physically infeasible grasp candidates. Finally, the authors train a PointNet++ to predict the palm pose and finger joints from point cloud input. This paper proposes a data augmentation strategy for dexterous grasping and motivates the problem well in the introduction. The proposed object-centric deformation method is very general and agnostic to tasks and manipulators. There are also several issues that need to be addressed in the paper, as follows: ### Strength: 1. The proposed method is simple and elegant. It does not rely on dynamics analysis of the contact wrench space or other force-based measurements. The dynamics correctness checking is achieved solely by a physical simulator, while the troublesome underlying computation is hidden away. 2. In principle, the implicit-based deformation network can generate an unlimited number of object meshes and corresponding grasp poses. This method is quite general. 3. The observation that "appending object points with additional information regarding the alignment between the robot hand and the local object surface" (line 153) is beneficial for many robot-oriented regression tasks with point cloud input. ### Weakness: 1.
The original demonstration dataset (DexYCB) seems not so useful in the whole pipeline; it only provides the initial object-grasp pairs. The subsequent data generation is achieved in a sample-and-reject fashion: sample with DIF-Net and reject with the physical simulator. So the original dataset can be replaced by any method that can provide roughly reasonable grasp proposals, even if the grasp proposals themselves are not dynamically correct or successful. For example, the authors could use any grasp proposal network, e.g. Contact-GraspNet, to generate grasp poses and use the same data augmentation procedure for the following steps. In this case, the training data would not even be limited to YCB objects but could include any object meshes, which may lead to better generalization performance due to the increased diversity of the training data. I assume one possible benefit of using human demonstrations is functional grasping: you may not want certain grasp poses even if they are dynamically successful in the simulator, because they are not semantically sensible. 2. The PPO baselines with **sparse reward** seem too weak. For manipulation tasks with a dexterous hand, sparse reward will lead to a nearly zero success rate. In this task, writing a distance-based dense reward is also very simple and straightforward, especially since the authors can train the algorithm inside the PyBullet simulator. 3. The GraspIt baseline seems not to be used in a fair way: in the experiments, the authors use the results predicted by GraspIt and other heuristics to evaluate grasp performance. However, the major contribution of this paper is the implicit shape-based augmentation, so a better way to compare with GraspIt is as follows: i) replace DexYCB and the implicit shape augmentation with GraspIt to generate diverse grasp poses; ii) use PyBullet to rejection-sample the infeasible grasp poses; iii) train the same PointNet++ on the data generated by GraspIt. Since the contribution is on the training data side, GraspIt should also be used to generate training data, not only for evaluation. Otherwise the experiments may show the value of the grasp refinement for dynamics consistency but not of the implicit shape augmentation.
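To make the first point concrete, the data generation can be viewed as the following sample-and-reject loop, in which the demonstrations only supply the initial grasp proposals. This is my own schematic rendering: all function names and callables are placeholders, not the authors' API.

```python
# Schematic sample-and-reject view of the data generation, as I understand it.
# The callables are placeholders supplied by the caller; nothing here is the authors' code.
import numpy as np

def generate_augmented_dataset(demos, deform, transfer_grasp, grasp_succeeds,
                               num_samples=100, latent_dim=128, seed=0):
    """demos: list of (object_mesh, grasp) pairs, e.g. retargeted DexYCB demonstrations.
    deform(mesh, z) -> (new_mesh, correspondence); transfer_grasp(grasp, correspondence) -> new_grasp;
    grasp_succeeds(mesh, grasp) -> bool, evaluated in a physics simulator."""
    rng = np.random.default_rng(seed)
    dataset = []
    for obj_mesh, grasp in demos:
        for _ in range(num_samples):
            z = rng.standard_normal(latent_dim)      # latent code for the implicit deformation model
            new_mesh, correspondence = deform(obj_mesh, z)
            new_grasp = transfer_grasp(grasp, correspondence)
            if grasp_succeeds(new_mesh, new_grasp):  # rejection step: keep only feasible grasps
                dataset.append((new_mesh, new_grasp))
    return dataset
```

Viewed this way, replacing `demos` with proposals from any grasp proposal network would leave the rest of the loop untouched, which is exactly why I question how much the human demonstrations themselves contribute.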
All the reviewers acknowledge to some extent the novelty/originality of the paper, but all questioned the proposed approach in terms of baselines and fair comparisons, e.g., PPO and GraspIt. In the rebuttal phase, the authors significantly strengthened the comparisons with some well-known baselines suggested by the reviewers. As a result, all the reviewers reached a consensus of weak accept.
I was searching for relevant work but found an arXiv paper that is very similar and is under review at the Neurocomputing journal: https://arxiv.org/abs/1911.06475 This paper is poorly written and not well organized. It is unclear to me how the method works, and the results section is also not informative. <doc-sep>This short paper proposes to exploit dependencies among abnormality labels and uses label smoothing regularization for better handling of uncertain samples. Pros: 1. The proposed model gains a 4% improvement in AUC from the label smoothing regularization compared with pure U-Ones. 2. The proposed work achieves the highest AUC for 5 selected pathologies. 3. The proposed work is on average better than 2.6 out of 3 other individual radiologists. Cons: 1. All 14 labels are trained, but the model only has 14 outputs. Does that mean "parent labels" in the paper are labels included in the dataset? If so, is it guaranteed that a parent is positive when at least one child is positive? This is the essential assumption in the adapted model (Chen et al. 2019). 2. The terms are not consistent: "we propose the U-zeros+LSR approach" at the end of Section 2.2, but U-Ones+LSR is evaluated in the ablation study. 3. An ablation study with the model ignoring all uncertain cases (defined as U-Ignore in the paper) is missing. <doc-sep>The authors present a work that classifies chest x-ray images with 14 different labels and uses hierarchical labelling and label regularization in an attempt to improve results. A leading performance on the public CheXpert challenge is claimed, but while the authors may have created a nice model, the claims they make in this paper are not well proven or explained. The method for using hierarchical labelling appears to follow a previously published scheme (cited) except with a different hierarchy (no details of the new hierarchy are provided). The method for label regularization is also previously published (and cited); therefore there is no methodological novelty in the paper. The authors apply their methods to the public CheXpert dataset. From section 2.3 it is not clear to me precisely what experiments were carried out - were all of these models trained with/without the hierarchical labelling and also with/without the label regularization? That is not described at all. Section 3 claims that extensive ablation studies were carried out; however, there is not a single table or figure to illustrate the results of these. The text provides a few AUC values, but the precise gain from the hierarchical labelling and from the label regularization is unclear. What is meant by "U-Ones+CT+LSR"? This is mentioned in the results but not explained. The paper has no abstract. <doc-sep>This paper presents a multi-label classification framework based on deep convolutional neural networks (CNNs) for diagnosing the presence of 14 common thoracic diseases and observations in X-ray images. The novelty of the proposed framework is to take the label structure into account and to learn label dependencies, based on the idea of conditional learning in (Chen et al., 2019) and the lung disease hierarchy of the CheXpert dataset (Irvin et al., 2019). The method is then shown to significantly outperform the state-of-the-art methods of (Irvin et al., 2019; Allaouzi and Ahmed, 2019). The paper reads well and the methodology seems to be interesting. I only regret the fact that this is a short paper, and there is therefore not enough space for a more formal description and discussion of the methodology.
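For context, my understanding of the U-Ones+LSR handling of uncertain labels is roughly the sketch below. The exact smoothing interval is a hyperparameter of the paper, and my choice here is illustrative only, not taken from the text.

```python
# Rough sketch of U-Ones with label smoothing regularization (LSR) for uncertain labels,
# as I understand it; the smoothing interval [0.55, 0.85] is an illustrative assumption.
import numpy as np

def relabel_uncertain(labels, low=0.55, high=0.85, seed=0):
    """labels: array with 1 (positive), 0 (negative), -1 (uncertain).
    Plain U-Ones maps uncertain entries to 1; with LSR they are instead replaced by
    random soft labels near 1, which a binary cross-entropy loss can use directly."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels, dtype=float).copy()
    mask = labels == -1
    labels[mask] = rng.uniform(low, high, size=int(mask.sum()))
    return labels
```

A comparison of this relabeling against simply masking the uncertain entries out of the loss (U-Ignore) would make its contribution clearer.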
First of all, this paper does not follow the MIDL template, as it is missing the Abstract section. Major concerns from the reviewers lie in the unclear presentation of results and the large overlap with an arXiv paper. Nevertheless, I think that taking into account the structural dependencies of labels is interesting.
The novelty of the network structure is marginal. This way of decomposing features is very common in computer vision. Just utilizing the latent vector of the encoder with only the comparator loss to decompose the feature into two feature types is limited. The authors should show the visual differences between these two feature types. The writing of the article is very clear, but some basic theories need not be explained in detail (such as in Section 3.4). One more concern: h_id and h_or are both used for reconstruction. It would be best to show that using only the identity feature h_id is better than using the overall latent vector h_id + h_or. <doc-sep>- It is well presented. The idea of splitting the encoding feature space into task-related features and non-task-related features is probably not new. But its use in estimating rank might be new, and intuitively it makes sense to use it. They also propose an extension to the clustering algorithm using a repulsive term and propose a MAP estimation algorithm to assign a rank based on the output probabilities of the comparator when the maximum possible rank is known. - Experiments are conducted on 3 data sets. The results show the effectiveness of the approach. The experiments, I feel, are sufficient to show that clustering instances based on non-rank-related features helps improve the effectiveness of comparison-based ranking of new instances. They also show the effectiveness of their proposed MAP estimation rule for assigning a rank. - The effectiveness of the repulsive clustering on ranking performance is not clear. The authors discuss that using the repulsive term in the objective for clustering produces more distinct clusters, but how does this "improved" cluster quality translate to better performance in ranking? As this is one of the key contributions of the paper, a comparison of ranking performances with and without the use of the repulsive term in clustering would be useful. - How sensitive/robust is the proposed approach to the number of clusters chosen? How can one choose the right number of clusters to use? A discussion on these would be useful. - In each experiment, what were the dimensions of the order-related and identity-related features? In general, I think this paper is above the borderline. But I would also like to see the comments from other reviewers. <doc-sep>**Summary of paper**: This paper considers the task of order learning: predicting a class label for a point among an ordered graph of classes. The paper proposes a clustering objective that encourages the model to separate data into groups such that classification prediction is easier within each cluster. The method is intuitive, clearly explained and well motivated. The paper indicates state-of-the-art results on a task of estimating ages of individuals from photographs. **Review summary**: Missing *crucial* discussion of use cases / broader impact of the task of estimating ages from photographs. Otherwise an intuitive and effective method for ordered data; effective empirical results; limited novelty / exploration of the methodological approach. **Strengths**: The authors describe an intuitive and effective method for making predictions on ordered data. The approach uses an intuitive clustering-based method that groups data into subsets where items are easier to order. The paper is clearly written and explains the approach clearly.
The paper shows several examples of the method's predicted output and shows results on two tasks (estimating ages, aesthetic score regression). The method achieves state-of-the-art results on the task of estimating ages and is competitive on the other task. The authors show further results on age transformation. **Weakness**: **Broader Impacts of Applications**: One of the primary applications of the paper is estimating ages of individuals based on their photographs. While this paper is not the first to focus on such a task, it is very remiss of this paper not to discuss the motivations for this task and the broader impacts and ethical considerations of this task. I would very strongly encourage the authors to add a discussion of the potential uses of their system and the benefits (as well as harms) that come from these uses. I think that it is crucially important to discuss this both in the context of this work as well as previous work on the task. In particular, it would be important to mention how the use of clustering (into groups based on gender/race) in this model factors into potential biases when the model is used. I think it would be necessary to include this discussion in the body of the paper itself rather than an appendix. I greatly believe that this discussion is necessary, and the lack of it is one of my top concerns about the paper. **Distinctions between total and partial ordering / related work**: The presentation of the approach indicates that observations are not directly comparable across clusters. However, the overall model does in fact provide a total ordering -- each point is mapped to one of the clusters and then compared within that cluster. I think the presentation would be greatly improved if the approach were described not as providing only a partial ordering (within each cluster), but instead as a total ordering function given by this multi-modal, cluster-based ordering. Further, I think it would be important to discuss the relationships between this work and work on partially ordered sets, particularly work on combining partially ordered sets. It might also be good to consider more related work on ordering, such as Learning to Order Things (https://papers.nips.cc/paper/1431-learning-to-order-things.pdf). Also, I think that it is especially important to address other work (such as that in extreme classification) that organizes class labels into groups that are easier to discriminate between (i.e., Logarithmic Time One-Against-Some (https://arxiv.org/abs/1606.04988)). **Novelty of approach / depth of exploration**: The core novelty of the approach is in the use of clustering to separate the data into groups that are easier to rank. This is a nice idea and appears to give strong empirical benefits. I worry that, since the clustering component is the core contribution of the paper, the clustering method is not very deeply explored empirically. The idea is intuitive, but given the limited deviation from classic approaches that combine clustering + classification, I feel the approach would benefit from additional analysis along the dimension of the clustering objective that is selected. **Questions for the authors:** * What are the potential use cases for the system & its applications to age prediction? * What are the fairness/ethical/safety concerns of such an application? * Were clustering objectives other than the repulsive-based one considered?
* How does your work connect to papers such as Logarithmic Time One-Against-Some (https://arxiv.org/abs/1606.04988), which also organizes classes into clusters? <doc-sep>Summary: This paper considers the problem of order learning, which learns an ordinal classification function. The paper proposes to learn separated order-relevant and order-irrelevant latent representations to improve the performance of existing methods, which is a very interesting and promising idea. However, the approach lacks novelty and convincing theoretical guarantees, and the insufficient empirical evaluation does not show convincing performance. Main concerns: - The ORID model structure: The latent representation is separated into h_{or} and h_{id}, and the comparison loss is defined on h_{or}. However, this does not necessarily exclude order-relevant information from h_{id}. Also, it needs to be clarified to what extent introducing a discriminator helps, as this turns a minimization problem into an unstable min-max optimization problem. How does it work without the discriminator? - Normalization of h_{id}: Normalizing vectors in a space may result in a totally different cluster structure; different clusters may appear to overlap with each other after normalization. Euclidean distance can be the natural dissimilarity metric without normalization. - The DRC algorithm: The idea of encouraging intra-cluster similarity and inter-cluster dissimilarity in Eq. (9) is not new. Also, the claim right after Algorithm 1 in the paper that "DRC is guaranteed to converge to a local maximum" is quite suspicious. Is it true that alternating different update rules that optimize the same objective is guaranteed to converge? At least some references need to be provided, as this is a crucial point of the main contribution. - The decision rule: Eq.(15) loops over all y, so what is the point of selecting a y_i in Eq.(13)? - The experimental results seem to be fine, and the authors are honest in reporting unfavorable results. However, in my humble opinion, results for a sufficient number of repetitions (5 or 10) are needed to be at least somewhat convincing. Minor comments: - In Eq.(4), the rightmost inequality should be \\theta(x) - \\theta(y) < -r.
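To be explicit about the convergence concern: schematically (this is my generic rendering of that type of objective, not necessarily the paper's exact Eq. (9)), one maximizes something of the form

$$\\max_{\\{C_k\\},\\,\\{\\mu_k\\}} \\; \\sum_{k}\\sum_{x \\in C_k} \\mathrm{sim}(x, \\mu_k) \\;-\\; \\lambda \\sum_{k \\neq l} \\mathrm{sim}(\\mu_k, \\mu_l),$$

where the first term rewards intra-cluster similarity and the second penalizes similar centroids. Alternating updates are guaranteed to converge in objective value only if each step (the assignment update and the centroid update) does not decrease this same bounded objective. Because the repulsive term couples the centroids, it is not obvious that a simple centroid update exactly maximizes this objective, which is why the claimed convergence to a local maximum needs a proof or at least a reference.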
This paper is overall well written and clearly presented. The problem of ordered data clustering is relevant, and the proposed method is effective. During the discussion, all reviewers agreed on the strengths of this paper and shared a positive impression. The authors successfully addressed the reviewers' concerns through a careful author response, which I also acknowledge. One of the reviewers raised a concern about the broader impacts, which is also well addressed in the author response. I therefore recommend acceptance of the paper.
The authors formulate a general framework that unifies inference, action/perception, control, and several other tasks. The framework is based on minimizing the KL divergence between a parameterized "actual" distribution and a "target" distribution. The authors argue that this formulation unifies a wide range of previously proposed objectives. They also argue that it has some advantages when compared to Friston's "free energy principle" framework, with which it shares many similarities, in particular that probability matching is preferred to surprise minimization. The paper is clearly written and provides a very thorough literature review. However, generally I question the scientific value of such all-encompassing unifying frameworks, and this paper in particular offers no concrete formal or empirical results, while promising a lot. At the end of the day, the divergence minimization objective is nothing more than MaxEnt, decorated with various interpretations and decompositions. Without empirical support, I do not find the interpretations and decompositions very convincing -- as one example, does divergence minimization *really* mean that "expressive world models lead to autonomous agents that understand and inhabit large niches"? One of the issues is that the paper appears to treat the "heart of the matter" (i.e., the source of interesting solutions) as if it lay in the elegant and generic objective. In my opinion, however, the real heart of the matter will be encoded in (1) the structure of the target distribution, (2) the structure/parameterization of the actual distribution, and (3) the optimization algorithm that can actually minimize the (typically) high-dimensional objective. The quality of the resulting solutions depends on 1-3 -- all of which need to be exogenously specified -- because divergence minimization cannot on its own produce interesting behavior. At the end of the day, I do think there is some value in providing a unifying framework, and in developing information-theoretic decompositions and interpretations. However, I think the paper would be *much* stronger if it were considerably longer and had more room to breathe (which it doesn't have right now -- given all the connections it tries to make), and if qualitative statements (of the type discussed above) were accompanied by empirical results (even if only simulations with simple toy models).<doc-sep>The authors propose to use the joint KL divergence between the generative joint distribution and the target distribution (containing latent variables which could correspond to latent parts we want to model, e.g. beliefs). It was illustrative to discuss decomposing the joint KL in different ways and thus forming information bounds in different scenarios. The decomposition into past and future in Eq. 6 also provides a unified perspective for looking at the most commonly used objectives. The examples shown in the paper and appendix give a good illustration of how people can make assumptions or design the terms to convert prevalent objectives into objectives that follow from this joint KL divergence framework. This is, in my mind, one of their key contributions: connecting past progress in a general and unified way. However, one concern about this paper is that the proposal of such a unified KL minimization framework is in fact a bit too general and abstract.
In fact, many methods mentioned in this work share a similar insight of deriving objectives from a KL-minimization perspective, but some factors are omitted to better fit the corresponding tasks. The general decomposition discussed in this paper provides little guidance on how new objectives could be derived for new problems. The general framework does serve as a guideline to some extent, but my worry is that its impact will be limited, as we still need to design the mapping of the terms in the general objective for each task. Given the pros and cons of this paper, I'm giving a borderline decision for now. The authors should clear up any of my misunderstandings and perhaps show the potential of this general framework as a source of new objectives. ===================================================================================================== After reading the authors' rebuttal, my major concerns are fully addressed and I decide to keep my decision as weak accept.<doc-sep>The authors of this paper propose a unified optimisation objective for (sequential) decision-making (i.e., _action_) and representation learning (i.e., _perception_), built on joint (KL) divergence minimisation. As also mentioned by the authors, this is a concept paper and it includes no empirical study. In particular, the authors demonstrate how existing ideas and approaches to (sequential) decision-making and representation learning can be expressed as a joint KL minimisation problem between a target and an "actual" distribution. Such examples are (a) MaxEnt RL, (b) VI, (c) amortised VI, (d) KL control, (e) skill discovery and (f) empowerment, which are all cases of KL minimisation between a target and an "actual" distribution. **Concerns**: 1. Although the proposed perspective and language are rich and expressive, I question the novelty of the proposed framework, since the information-theoretic view of decision-making and perception is a rather established and old idea; even the term/idea of the perception-action cycle has already been defined [1]! 2. The power of latent variables for decision-making and their interpretation is also a known idea [1]. **References** [1] Tishby, N. and Polani, D., 2011. Information theory of decisions and actions. In Perception-action cycle (pp. 601-636). Springer, New York, NY. <doc-sep>########################################################################## Summary: In this manuscript, the authors propose a unifying framework for a large class of inference and reinforcement learning objectives, which have been studied in prior works by various authors. They demonstrate that approaches and central ideas from many different fields in the ML/AI community can be derived as limiting cases of their framework. ########################################################################## Reasons for score: Overall, I vote for acceptance (7). Like many, I have employed various variational approaches in the past and see their merit. While I agree with the main idea, this work is not without problems. This is especially concerning for such a broadly applicable work that will most likely influence plenty of future research. My main problems with this submission are: 1. Presentation. While the paper is, for the most part, well written and well organized, there are some gaps/jumps that render understanding difficult. Two examples: A) The parameters \\phi.
The authors start by introducing parameters \\phi as abstract placeholders for (i) parameters of the true joint distribution of data and latents of the underlying system and (ii) a set of actions an agent can perform to interact with this world. The agent's target distribution has no explicit parameter dependence. So far, so good. Then, one is redirected to Appendices A.1 and A.2. Section A.1 is already a bit confusing because additional latents w are suddenly introduced that were not mentioned before. Then, in A.2, the target \\tau suddenly depends on the parameters \\phi, which were initially parameters of the underlying system's true joint distribution. This also happens in Figure 2 C), which is never referenced in the text. I find this strange mixing of parameters of agent and system very confusing. It also casts some doubt on the generality of the framework. B) I have read the paper carefully and still do not understand Figure 1 completely. This may also be because it is only referenced in the appendix. Related: Why does information gain play such a central role if only (upper) bounds for it appear in the derived objectives? 2. (Unsupported) Claims: In the abstract, the authors promise to offer "a recipe for designing novel objectives". As far as I can see, they only come back to this promise in the conclusion, where they say that one could look at other divergence measures to arrive at new objectives, and they will leave it for future work. I would not call this a recipe, but an outlook at most. 3. Too many ideas: It is hard, if not impossible, to explain a broad framework well in a conference proceeding. This work contains so many ideas and establishes so many connections that following them and understanding them in detail becomes very hard. I would suggest sacrificing some connections in favor of a more concise presentation. 4. Fixation on KL-divergence: This is more of a suggestion. I understand that many works use the (non-symmetric) KL due to its favorable analytic properties. Thus, I agree that it makes sense to focus this framework on this measure. However, I believe this work's main idea still holds if one exchanged the KL for some other measure of similarity between distributions. Maybe it would make sense to first introduce and discuss the abstract idea of aligning target and belief before fixating on a particular measure. This would also go well with resolving my concern 2. ########################################################################## Pros: 1. Unifying framework for many inference and RL objectives. 2. Well written. 3. Will be impactful to a lot of future research. ########################################################################## Cons: 1. See my Reasons for score. ########################################################################## Questions during the rebuttal period: Please address and clarify the cons above. ######################################################################### Minor: · Please consider citing Toussaint, M., & Storkey, A. (2006). Probabilistic inference for solving discrete and continuous state Markov Decision Processes. International Conference on Machine Learning (ICML), 945–952. https://doi.org/10.1145/1143844.1143963 in the "control as inference" section. To my knowledge, it is one of the first to establish the connection between planning and inference.
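One more minor addendum to my presentation concerns: to check that I am parsing the core objective correctly, here is a toy numerical sketch (entirely my own construction, not code or notation from the paper) of a joint KL between an "actual" and a "target" distribution over data and latents, together with the chain-rule decomposition that the paper's interpretations build on.

```python
# Toy check of the kind of objective I understand the paper to study: a joint KL between
# an "actual" distribution p(x, z) and a "target" t(x, z), and its chain-rule decomposition
# into a marginal term plus an expected conditional term. All numbers are made up.
import numpy as np

rng = np.random.default_rng(0)

# Normalized tables over a small discrete space (4 data values x, 3 latents z).
p = rng.random((4, 3)); p /= p.sum()   # "actual" joint p(x, z)
t = rng.random((4, 3)); t /= t.sum()   # "target" joint t(x, z)

def kl(a, b):
    return float(np.sum(a * np.log(a / b)))

joint_kl = kl(p, t)

# Chain rule: KL(p(x,z) || t(x,z)) = KL(p(x) || t(x)) + E_{p(x)}[KL(p(z|x) || t(z|x))]
px, tx = p.sum(axis=1), t.sum(axis=1)
cond = sum(px[i] * kl(p[i] / px[i], t[i] / tx[i]) for i in range(len(px)))
assert np.isclose(joint_kl, kl(px, tx) + cond)

print(joint_kl)
```

The decomposition itself is elementary; as stated in my concern 2, what would make the framework a true recipe is guidance on how to pick the target and the parameterization, which a toy example like this obviously cannot provide.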
The paper presents a KL-divergence minimisation approach to the action–perception loop, and thus offers a unifying view on concepts such as Empowerment, entropy-based RL, optimal control, etc. The paper does two things here: it serves as a survey, and on top of that puts these concepts into a unifying theory. While the direct merit of that may not be obvious, it does serve as a good basis to combine the fields more formally. Unfortunately, the paper suffers from the length restrictions. With more than half of the paper in the appendix, it should be published in a journal or directly on arXiv. Not having a page limit would improve readability considerably. ICLR may not be the best venue for review papers.
This paper focuses on improving the adversarial robustness of prefix tuning (Li et al. 2021), which is a recent parameter-efficient tuning method. Specifically, the paper proposes to add extra batch-level prefixes that are tuned for each test batch on the fly, to minimize the distance between hidden activations of the test samples and the canonical manifold obtained from the hidden activations from correctly classified training samples. The intuition is to optimize the added batch-level prefixes so that the geometry of hidden states from adversarial examples is closer to that of training examples. Experiments on three text classification benchmarks across several different adversarial attacks demonstrate the effectiveness of the method. Below are the detailed strengths and weaknesses: *Strengths:* 1. Adversarial robustness is an important problem and has not been explored much for relatively new prefix/prompt tuning approaches. Thus the topic of this paper can be of interest to a general audience. Also, this paper is timely given the recent attention on prompts. 2. The idea of optimizing the geometry similarity to defend against attacks is interesting and novel from my perspective. Particularly I like test-time tuning which could adapt to different types of attacks on the fly. 3. The experimental results are strong. ~~~~ Updates after Rebuttal: Most of the following concerns have been addressed in the revision, and I have increased my score ~~~~ *Weaknesses*: 1. What is the batch size at test time tuning? Is the added robust prefix the same for the entire test set, or the same within a batch but different across batches, or unique for every test example? This is an important point to assess whether the experiments are in an online setting where test data arrives in-stream or not. 2. Section 5 is not very convincing to me: (1) there are only several case studies without any quantitative results; I think it may appear in the appendix only or just uses a short paragraph in the main body (because the statements in Section 5 can only be good hypotheses which are not well-supported by quantitative evidence, also, attention is not a convincing proxy for the explanation, see point (2)), yet it takes more than one page in the current version; (2) the interpretation from the perspective of attention weights bothers me a bit, attention as an explanatory tool is known to not be faithful [1] – a larger attention weight does not necessarily mean the final prediction depends on it more than others, and vice versa, thus maybe not overread it too much. As said above, it is ok to do it this way in the appendix, but using over one page of the main content for this is not convincing to me. 3. In the attention visualization figures (e.g. Figure 4), why does a word attend itself as well instead of only attending to the contexts? For LM, I think that the diagonals in the attention figure should be zero, or am I missing something? Also, the presentation could be improved here, in the figure caption you can explain what the rows and columns mean in the visualization to make it easier to read. 4. Section 6 is difficult to follow without reading the appendix, and it is disconnected from the rest of the paper in terms of the paper structure – having a theoretical interpretation section after experiments at the end of the paper is not a good presentation structure in my view. 
If you think Section 6 is important to have, move it before the experiment section and clarify more details to make it more self-contained; if it is not very important, you can just put it into the appendix while briefly mentioning it in the main body. 5. Besides the presentation issues above, there are some other minor presentation issues that could be improved, e.g. (1) in Eq. (4), an undefined variable X_C suddenly comes in without explanation; (2) it would be better to explain Eq. (8) with more text given that this is an important equation for the proposed method. [1] Serrano et al. Is Attention Interpretable? ACL 2019 The paper makes novel contributions both technically and empirically; however, some major analysis sections are not convincing and a significant portion of the presentation needs to be improved. <doc-sep>The paper investigates the robustness of prefix-tuning methods and proposes a simple yet effective method to improve the robustness. The experiments show that the proposed method can largely improve the performance in adversarial settings and slightly improve the performance in clean settings. The authors study a novel problem in lightweight fine-tuning methods. Most studies aim to match the performance of full model tuning via updating a subset of parameters but rarely study the robustness of lightweight fine-tuning methods. Strengths: 1. The authors study an important and novel problem. 2. The authors provide a simple yet effective method with clear motivation. The proposed method is shown to be effective even when combined with adversarial methods. Weakness: 1. The authors argue that robustness is important for lightweight tuning methods. But I still think it is better to provide a comparison between lightweight tuning methods and full tuning methods. Provide a basic starting observation about whether lightweight methods bring more challenges on robustness or not. The studied problem is important and novel. The proposed method is simple and clear. The experiments justify the effectiveness of the proposed method. <doc-sep> The paper introduces a tweak to Prefix-Tuning to make it more resilient to adversarial perturbations of the input. The idea is to add a batch-level prefix at inference to the original one which enhances robustness. Critically, Robust Prefix-Tuning (RPT) does not require auxiliary model updates or storage, in contrast with other robustness methods. Thus, this approach makes prefix-tuning more robust while preserving its modularity and low storage requirements. The authors conduct experiments on 3 text classification tasks, 5 textual attacks and different training regimes (normal training, adversarial training and adversarial data augmentation). In nearly all instances, their method improves robustness (sometimes considerably so) while preserving the accuracy on the original text. The authors also present RPT from an optimal control perspective and conduct a qualitative study that shows how RPT impacts attention weights. Review note: I am not familiar enough with Optimal Control Theory to evaluate the soundness of Section 6 / Appendix E. I will leave it outside the scope of my review and hope other reviewers can fill in the gap. Pros: - The results are quite convincing in nearly all settings (aside from important caveats in cons). While some scores are still quite low after RPT, they are consistently better than the baselines.
Notably, the method can provide additional gains when used with other defense methods, such as adversarial data augmentation and adversarial training. - The need for this method is well-motivated - The paper is good at emphasizing different priorities rather than only the end accuracy. For instance, time to accuracy in Fig. 2 - Although there are some questions about the validity of using attention weights (see https://arxiv.org/abs/1902.10186 which likely should be mentioned as a caveat), I found Section 5 insightful. Cons: - Inference Batch, batch size and its importance: The batch size at inference seems like an important variable. Indeed, my understanding of Section 3 and Eq. 8 is that the specific batches used at inference will play an important role as they impact P_psi. This is my **key concern with this paper** as it brings up quite a few issues that should be mentioned. - Inference for a datapoint depends also on other datapoints in the batch. This is different enough from other ML setups that it should be highlighted. It also causes reproducibility issues. For instance, batch norm avoids this by fixing batch statistics at inference. - It is not clear that the method works for low inference batch size (opt of Eq. 8). The inference batch size used is not mentioned anywhere. If the method requires batch size > some N (or if performance varies widely with batch size), this seems a strong assumption that should be made clear. You do not always get several samples of the attack on your system. - It is not clear how well the method works when the inference batches are a mix of unperturbed samples and perturbed ones. This seems like a more realistic attack scenario. - It is not clear how well the method works if there are different attacks in the inference batch, which also seems like a more realistic threat model. Overall, I feel like some answers to the above would diminish my concerns, most notably: - What is the inference batch size used in the experiments? - How does performance vary with inference batch size (one or two settings should be enough) - Does the method work when only x of N samples in the test batch are adversarial? Does the example work when they are two types of attack Dataset statistics and mode prediction It would be helpful to remind the reader for each dataset how many classes they are / what the mode/random pred accuracy is to help interpret results. For instance, on Table 1 under PWWS improves from 16.64 to 50 for SST-2 and 25 to 34 on SNLI but that is the same as the accuracy of a random predictor. Writing: The writing could be improved substantially. Some examples: p2 “remaining the pretrained models unmodified” -> “keeping”. Also on p3 “remaining its lightweightness”. I suggest doing another pass as it does not align with the quality of the rest of the paper. Related work: I feel like other methods for Parameter efficient transfer learning, such as Adapters, normal prompting should be quickly mentioned. Same goes for comparable approaches such as P-tuning. As mentioned earlier, I would caveat Section 5 with discussions of the validity of using attention for explanation, such as the Attention is Not Explanation paper. Edit: Given the author's response, I am raising my score slightly. There are still some concerns over the threat model but some of my questions on the impact of test batch size have been answered. The motivation, experimental settings and results look good overall. 
One major caveat, however, is that it is not clear how flexible the inference setup is. This is critical as the predictions for a datapoint at inference depend on the other datapoints in the batch. Currently, it seems like the inference is being done with a batch of all the same attack. These assumptions are simply not realistic as a threat model. Without understanding how this performs under a more realistic threat model, I cannot score this paper higher. I have highlighted experiments that would provide more realistic results. Despite my "marginally below" score, I do not think the paper can be accepted without an answer on this point. Reject is too harsh for a paper that is otherwise promising. The paper would also benefit from another writing pass, mentioning missing related work and clarifying some properties of the dataset. <doc-sep> The paper is a focused contribution at the intersection of defending against text attacks and prompt tuning. The paper requires the reader to understand the context and motivation of several different things before understanding the contribution of the paper. First, adversarial examples can attack a text classifier, such as UAT. Second, various techniques defend against these attacks in different ways. However, these techniques require modifying the parameters of the LM or impose other additional computational burdens. These techniques can be used with prompt tuning, but then the benefits of prompt tuning go away. Hence, there ought to be a technique that improves the robustness of prompt tuning without removing its benefits over regular finetuning. The paper proposes such a technique and does experiments for three text classification tasks and various adversarial attacks. Strengths - The problem being addressed is cutting edge (we barely understand prompt tuning, and this paper already jumps ahead to adversarial defenses!) - The approach seems novel, though I haven't searched for related literature carefully. - Clear logic motivating the problem and what constraints need to be accounted for while solving it. - Clear research question. Weaknesses - The experiments are OK but could be a bit more streamlined. I guess two things came up when I was looking at them. - It may be good to expand the scope of the paper to generation tasks, as these are likely more susceptible to adversarial attacks. What I mean by this is that the worst-case scenario is much worse for generation tasks: while in binary classification the worst case is bad accuracy, for adversarial attacks on generation tasks, potentially very harmful text could be generated, which is much worse than simply getting the answer wrong. - The other thing is that it is hard to contextualize how good the numbers in your framework are. For example, I have no idea how good 52% on VIPR for SST2 is. If previously proposed defenses are able to get 90% on the same task, then it is unlikely that the proposed method will gain traction, even if it is more computationally efficient with prefix tuning. So it would strengthen this paper a lot to have this comparison. Other comments - I think the paper may benefit from spending just a little more time describing how susceptible prefix tuning is to adversarial attacks compared with regular finetuning. It seems like it is very susceptible, especially for easy tasks like text classification. - Appreciate the candidness in Figure 2 in showing that adversarial training takes longer.
- It will be good to see if the proposed method not only improves performance against adversarial attacks, but also if it improves performance against different paraphrases of expressing the context and question, for example. Minor points - The grammar of the paper will benefit from having a native english speaker proof-read it. E.g., "as well as remaining the pretrained models unmodified" can be re-written as "without modifying the pretrained model parameters". - Since a lot of work was put into the paper, I will pay my respect and make a picky point: the bibliography could be cleaned up a bit. E.g., capitalize RoBERTa correctly, add URLs consistently, capitalize all conference names (e.g., IEEE transactions). The paper studies the very timely topic of adversarial attacks against prefix tuning. The paper proposes a method that maintains the advantages of prefix-tuning against finetuning, a formidible problem. The experiments are OK as of now, though expanding the scope to generation tasks could make the impact substantially larger. The characterization of the method could also be further improved by providing finetuning (with and without defenses) as baselines. Overall, I lean towards acceptance, though I am not an expert in adversarial attacks/defenses.
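To check my own reading of the method (and because I am not an expert in this area), this is roughly the picture I have of the test-time step, written as a toy sketch with a random linear map standing in for the frozen LM. All names and shapes are hypothetical and this is not the authors' implementation.

```python
# Rough sketch of my understanding: a small extra "robust prefix" is optimized per test
# batch so that the frozen model's hidden states move closer to a stored summary
# ("canonical" projection) of hidden states from correctly classified training examples.
import torch

torch.manual_seed(0)
d_hidden, batch = 16, 8

# Stand-in for the frozen LM: a fixed random map from (prefix + input) to a hidden state.
W = torch.randn(2 * d_hidden, d_hidden)
def frozen_model(prefix, x):                      # x: (batch, d_hidden)
    inp = torch.cat([prefix.expand(x.size(0), -1), x], dim=-1)
    return torch.tanh(inp @ W)                    # (batch, d_hidden)

canonical = torch.randn(d_hidden)                 # stored summary of "correct" activations
x_test = torch.randn(batch, d_hidden)             # possibly adversarial test inputs

robust_prefix = torch.zeros(1, d_hidden, requires_grad=True)
opt = torch.optim.Adam([robust_prefix], lr=0.1)
for _ in range(20):                               # a few test-time steps per batch
    h = frozen_model(robust_prefix, x_test)
    loss = ((h - canonical) ** 2).mean()          # pull activations toward the summary
    opt.zero_grad(); loss.backward(); opt.step()
```

If this reading is wrong, a short algorithm box in the main text would prevent exactly this kind of misunderstanding.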
This paper tackles a relatively novel problem that is the result of recent work on prefix tuning - specifically, the need to be robust to adversarial perturbation in the context of prefix tuning - and the authors show a method for achieving this without requiring more storage, obtaining good results. There were some clarity issues raised by the reviewers that were addressed during the rebuttal. The main issue that was pointed out was the effect of batch size on the success of the model. The authors gave experiments with batch size 1 where results are less impressive but still outperform the baseline. Also, the authors say that for now they are not considering the case where only some of the elements in the batch are adversarial, which I think is ok for a research paper on such a cutting-edge topic. Thus, the result of the discussion is to lean toward accepting this paper, given that it is now clearer, has experiments that make clear what the benefits are in realistic settings, and obtains improvements.
This paper studies the problem of graph generation, and proposes a new model using both micro- and macro-level supervision information in a GraphVAE architecture. Fitting the adjacency matrix is the micro supervision, and three kinds of graph statistics, i.e., degree histogram, number of triangles, and higher-order proximity relations, are adopted as macro supervision. The objective consists of ELBOs modeling the micro-macro loss and a KL-divergence between the prior and the approximate posterior of the hidden representation. The proposed model is validated on 3 synthetic and 2 real-world graph datasets. The experimental results show that the proposed model generates graphs with a lower discrepancy between generated and test graph embeddings than graphs generated by competitors in terms of MMD RBF and F1 PR. Strong points: S1. The macro objective of fitting graph statistics in graph generation is novel to me. S2. The paper proposes a general micro-macro ELBO as the objective, and then implements the ELBO by graph neural networks. S3. The experimental results show the proposed model outperforms the competitors. Weak points: W1. It is not clear how the graph generation task benefits from fitting graph statistics. In other words, what is the limitation of only fitting the adjacency matrix in graph generation? Along this line, I have a concern about what kind of graph statistics should be chosen as targets. This paper selects three graph statistics, but does not present an explanation for this selection. W2. The efficiency. Both calculating and fitting graph statistics bring new computing costs, e.g., the complexity is $O(n^3)$ to compute the transition probability matrix. W3. It is not clear how to form descriptor functions with respect to the vector label histogram and triangle count, and how to guarantee the descriptor functions are differentiable. Yes. <doc-sep>The contribution of this paper is to model graph data jointly at two levels: a micro level based on local information and a macro level based on aggregate graph statistics. Positives: 1. The idea of this work is interesting and novel; it tries to use a probabilistic model to explore the local and global graph statistics. 2. The performance of this work is very good, compared to the existing GraphVAE. And the code is available. Negative: 1. The scalability of this work may be a challenge; the complexity of the descriptors is either O(N^2) or O(N^3). Also, the algorithm requires pre-defining graph descriptors to compute the graph statistics. 2. The algorithm part is straightforward. Basically, it designs an MM loss within one unified framework. It seems that many GAN-based models can achieve a similar function. Any discussion? Yes <doc-sep>This paper jointly models micro and macro level graph information for graph generation. A principled joint probabilistic model for both levels is proposed and an ELBO training objective is derived for graph encoder-decoder models. Extensive experiments and visualization results validate the efficacy of adding micro-macro modelling to GraphVAE models for graph generation. Strengths: 1. This paper is well motivated and the idea of utilizing node-level properties and graph-level statistics to constrain graph generation seems reasonable. 2. The design of micro-macro (MM) loss is clear and theoretically solid. 3. The authors have done a thorough analysis of the proposed model and validated its effectiveness through qualitative and quantitative evaluation. The main claims are supported by the experimental results.
Weaknesses: My main concern is that the proposed objective function is only applied on GraphVAE following an AB design. Although the experimental results are satisfactory on graph generation, it remains unclear whether the benefits of micro-macro modeling would generalise to other models. The authors have adequately discussed the limitations of their work. <doc-sep>The authors of this paper newly presented a function that can reflect graph statistics in the graph generative model. They have shown various experiments and visualizations proving graph statistics are well-reflected. In addition, designing an objective function to reflect different graph statistics simply is a significant contribution. Originality: (Yes) The proposed method seems to be original in that the authors proposed a new but simple VAE-based objective function to reflect graph statistics. Quality: (Neutral) Since the purpose of this study is to generate graphs that reflect graph statistics, theoretical support and experiments for the purpose are well shown. However, the performance on real-world datasets such as the molecule is marginal. In particular, when only one graph statistic is used, the performance degradation is greater than that of GraphVAE, which needs clarification. Since it shows good performance only when all three statistics presented in the paper are written, it is necessary to explain why these three combinations were selected and what synergy they show. Clarity: (Yes) There was no difficulty in understanding what the paper was trying to say, and it shows sufficient proof of the formula. I think it would be easier to understand if the architecture overview was attached. I suggest adding a picture of the structure for the reader's clear understanding. Significance: (Neutral) This model seems to have particular strengths in experiments using Synthetic datasets. In addition, it seems to be a good contribution that it showed a higher performance improvement compared to GraphVAE. However, as discussed in the paper, performance in real-world datasets seems to be more important to contribute in practical areas such as molecule and medical discovery. However, the experimental results presented in the paper do not support this. Additional experiments will be needed to show that graphs are well generated using the QM9 dataset shown in GraphVAE. The positive social impacts presented in this paper include molecular presentation and medical discovery. However, since the model proposed by real-world dataset shows weak performance, it is seen as an important limitation.
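For concreteness, this is roughly how I picture a micro plus macro objective with a degree-based descriptor. It is a toy construction of my own on a soft adjacency matrix, not the authors' implementation, and the descriptor and weighting are only examples.

```python
# Toy illustration: a micro term that fits individual edges, plus a macro term that fits
# an aggregate statistic (here the expected degree sequence), which remains differentiable
# in the edge probabilities.
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = (rng.random((n, n)) < 0.3).astype(float)      # observed binary adjacency (toy graph)
A = np.triu(A, 1); A = A + A.T                    # symmetric, no self-loops

P = rng.uniform(0.05, 0.95, size=(n, n))          # decoder's edge probabilities
P = np.triu(P, 1); P = P + P.T

eps = 1e-9
micro = -np.mean(A * np.log(P + eps) + (1 - A) * np.log(1 - P + eps))  # edge-wise BCE

deg_true = A.sum(axis=1)                          # observed degrees
deg_expected = P.sum(axis=1)                      # expected degrees under the model
macro = np.mean((deg_true - deg_expected) ** 2)   # one possible macro descriptor loss

loss = micro + 1.0 * macro                        # the macro weight is a free design choice
print(micro, macro, loss)
```

Here the macro weight is set to 1.0 arbitrarily; how such weights are chosen in practice would be worth clarifying.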
This paper proposes a new generative model for the generation of graphs. Different from most existing approaches, the proposed method considers both node- and graph-level properties to capture high-order connectivity and overcome the sparsity of any observed graph. The writing is generally clear and the results are convincing. The reviewers are overall positive, with some concerns on the motivation, which have been addressed well by the authors in the rebuttal. Some other questions raised by the reviewers are also appropriately addressed, which led to some score increases. The downside of the approach lies in the time complexity of collecting the macro-level statistics. But overall, it is a good paper worth accepting.
The authors study a prevalent medical problem where the treatment aims to keep a physiological variable in a safe range and preferably close to a target level. They propose ESCADA, a multi-armed bandit algorithm tailored for the above leveling task, to make safe, personalized, and context-aware dose recommendations. Strength: 1. They consider constraints on instantaneous outcomes and propose efficient algorithms to achieve their goal. 2. They provide safety guarantees and upper bounds on cumulative regret. Weakness: 1. Is \\alpha a tuning parameter? There are too many tuning parameters, which makes the method hard to use in practice. How would the choice of Tmin and Tmax affect the results of the algorithm? 2. What is the complexity of the algorithm? How does it depend on the cardinality of dose sets? Please see the weakness and questions. <doc-sep>The authors investigate a problem that they refer to as leveling. In short, it is a prevalent medical problem in which the treatment aims to keep a physiological variable in a safe range and preferably close to a target level. Their proposed algorithm is a multi-armed bandit-based method for the leveling task, which aims to make safe, personalized, and context-aware dose recommendations. As a theoretical contribution, they derive high-probability upper bounds on its cumulative regret and safety guarantees. Additionally, they conducted in silico experiments on the bolus-insulin dose allocation problem in type-1 diabetes mellitus by comparing their algorithm against the GP-UCB baseline. The dose-finding problem is challenging, given the ethical concerns from online experimentation with real patients. This paper uses a simulator to provide experimental results. However, it compares its performance against a clinician for virtual adult patients. It would be worth including the qualifications of the clinician that evaluated the unseen meal events, possibly in the appendix. As a minor improvement, the authors may increase the font size of figures, such as Figures 3-6. The authors need to include the limitations of their method in the concluding remarks. <doc-sep>Consider the safe dose allocation problem in precision medicine. This paper proposes a contextual multi-armed bandit algorithm with the objective of keeping the outcomes close to a target level. The proposed algorithm has high-probability upper bounds on cumulative regret and also possesses a two-sided safety guarantee. # Strengths - The paper is well organized - The objective is an interesting problem. Instead of maximizing the outcome, this paper aims to keep the expected outcome close to a target level. - The 'TACO' algorithm is novel. Both exploration and exploitation are addressed. The action when all safe doses are sub-optimal is interesting. # Weaknesses - The figures in section 5 are too small to read. None. <doc-sep>The paper introduces and evaluates a multi-armed bandit (MAB) algorithm for insulin dose allocation related to Type-1 Diabetes (T1D). Its technical contributions in enhancing safety in T1D by preventing hyperglycemia and hypoglycemia through context-aware and personalised dosing of insulin before meals are fairly convincing, based on evaluations on an appropriately chosen simulator.
However, the evaluation should have differentiated the 30 simulated patients instead of presenting averaged results over all these patients, regardless of their age. For instance, glucose control of adolescents is substantially harder than for adults, and hence, I am used to separating adults, adolescents, and children in evaluating algorithms and medical interventions. Similarly, in order to contribute to precision medicine, as argued by the paper, the target PPBG level, for example, should have been adjusted by age group. Finally, I would have wanted to know more about the clinician who was one of the comparisons in the evaluation; I really liked this more human-centric aspect of the evaluation but would like to know more about this experiment so that future studies could follow the same experimental design if they wanted. Also, it was unclear how the simulated patient cases to be analysed by this clinician were chosen and how the human judgement and decision-making was implemented in this part of the experiments. Last but not least, I could not find details about this experiment (e.g., the cases analysed, implementation of the assessment task, etc.) involving the clinician as a human participant, or about obtaining the related ethics approval and informed consent, from the paper or its supplement. Clinical aspects of the study could be clearer. For example, the guidelines by https://www.jmir.org/2016/12/e323/ may help. In particular, I encourage clarifying the rationale, clinical implications, and limitations of the model; see the JMIR paper for further details about these topics.
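To help clinical readers picture what a recommendation step might look like, here is a deliberately simplified sketch of a generic "keep the variable in range, close to target" rule with a Gaussian-process model. The numbers, thresholds, and the rule itself are my own illustration, not the ESCADA/TACO algorithm described in the paper.

```python
# Generic safe-levelling heuristic with synthetic numbers; not the paper's algorithm.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

# Past data: (context, dose) -> post-prandial blood glucose (PPBG), all values synthetic.
X = rng.uniform([20, 0], [120, 10], size=(40, 2))        # context = carbs (g), dose (U)
y = 180 - 8.0 * X[:, 1] + 0.5 * X[:, 0] + rng.normal(0, 5, 40)

gp = GaussianProcessRegressor(alpha=25.0, normalize_y=True).fit(X, y)

context = 60.0                                            # carbs of the upcoming meal
doses = np.linspace(0, 10, 41)
cand = np.column_stack([np.full_like(doses, context), doses])
mean, std = gp.predict(cand, return_std=True)

target, lo, hi, beta = 112.5, 70.0, 180.0, 2.0            # target level and safe PPBG range
safe = (mean - beta * std >= lo) & (mean + beta * std <= hi)
best = doses[safe][np.argmin(np.abs(mean[safe] - target))] if safe.any() else None
print(best)
```

Spelling the actual decision rule out at roughly this level of explicitness in the paper or its supplement would also make the clinician-comparison experiment easier to reproduce.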
I have read all comments and responses carefully. The reviewers recognized that the problem was a challenging one, and that the paper provides both practical and novel tools and theoretical analysis. However, the reviewers pointed to the lack of numerical studies in the paper (for example, more details about the human clinicians and the patients). That being said, the authors have addressed most constructive comments given by reviewers. Overall, reviewers agree that this is an important and yet underexplored problem and the authors have provided useful contributions. I, therefore, have decided to recommend the acceptance of the paper.
Summary: They investigated the effectiveness of self-supervised learning (SSL) for one class classification (OCC). Here is what I think are the contributions relative to the existing literature: empirically improved AUC for multiple OCC datasets; the techniques that were useful are (a) using "distribution augmentation" [DistAug] for learning the representation for OCC, with ablation studies showing that DistAug leads to improvement over standard augmentation, (b) using KDE and OCSVM on top of the learned representation, showing improvement over using the classification head trained during SSL, (c) using a smaller batch size (=32), and (d) using an MLP head during SSL. The authors also included a section visualizing explanations using existing techniques to illustrate how their method leads to more reasonable decisions. Strength: The paper is well written. I appreciate the clarity, and good coverage of the current literature. The ablation studies are thorough, which makes the empirical improvement solid. Concerns: The uniformity argument is weak. The authors state the empirical improvement on OCC using their method hinges on the DistAug technique, which is motivated by reducing the uniformity of the learned representation. When this is achieved, the inliers will live in the dense regions of the hypersphere, and outliers will live in the non-occupied regions. This assumes all the test inputs are projected onto the hypersphere, including the outlier. From my understanding, the authors used f() for OCC, not \\phi() which is the normalized (i.e. hypersphere) output. In this case, there are many ways that OCC can be achieved even if \\phi() of the training inputs are uniform on the hypersphere. Suppose both the inliers and outliers after f() live on hyperspheres, just with a different radius; then after normalization they can both be uniformly distributed on the same hypersphere. One question is whether there is a difference between using f() or \\phi() for OCC. Furthermore, the authors try to back this claim up using Figure 4, but I cannot seem to connect the dots here. The authors used MMD to a uniform distribution to measure how uniform the representations are. The less uniform (i.e. higher MMD), the better it should be for OCC. The correlation between MMD and AUC does not seem to be very strong. E.g., for the (DA) gf variant, the 2 metrics actually seem negatively correlated. This, again, makes me wonder if "less uniformity" really is why their technique led to an improvement in OCC. If this is not why, then we should find another explanation for why there was an improvement. There is always the concern that the improvement comes from extra hyperparameter tuning. Did the authors also tune for good hyperparameters for the non-DistAug version as described in A.3? Overall, a fairly thorough empirical investigation into better techniques for using SSL for OCC. It can be a decent contribution along the lines of one of the "improved techniques …" papers if the above concerns can be addressed. In fact, I think not focusing on selling DistAug, but really identifying what contributes to the gain empirically, makes this paper stronger. References: [DistAug] Heewoo Jun, Rewon Child, Mark Chen, John Schulman, Aditya Ramesh, Alec Radford, and Ilya Sutskever. Distribution augmentation for generative modeling. In Proceedings of Machine Learning and Systems 2020, pages 10563–10576, 2020. <doc-sep>This paper presents a two-stage representation learning approach to deep one-class classification.
In the first stage, a mapping f to a versatile high-level latent representation is learned using self-supervised learning for a contrastive learning proxy task. In the second stage, the same mapping f is used to map the data to the latent space, whereafter a traditional one-class classifier such as OC-SVM or KDE, is applied. It is shown that the one-class task puts somewhat different requirements on the representation than with a multi-class classification task, both 1) in terms of uniformity of the data points in the representation, which is desired for multi-class tasks but not fully beneficial for one-class tasks, and 2) in terms of minimizing or maximizing the distance between different instances of the negative class - for multi-class tasks you want the distances maximized, while for one-class tasks you want the negative (inlier) examples close together. 1) is addressed by using smaller batch sizes in training while 2) is addressed by distribution augmentation that will render a compact inlier distribution in the representation. This paper is overall a good paper that will be interesting to a certain audience at ICLR. + It is well written, well motivated, with a clear argument and as far as I can see, technically correct. + The experiments are well designed, valid and exhaustive, with comparison to a range of baselines as well as an ablation study. + Moreover, the visual explanation of what the different representations have focused on is highly interesting. + I appreciate the comprehensive grounding of the contribution in both new and old related work. The reference list contains all the relevant state of the art, as well as references to more classical work such as [13,14,29,47,53]. The paper is not highly seminal, but more incremental in nature, putting together and modifying existing methodology. However, since it is very well done, the work is absolutely worth acceptance. A criticism is that there are some repetition in the line of argument, for example between 2.1.2 second paragraph and 2.1.3 first paragraph. A more compact, e.g., section 2.1 would render more space for results which now have been pushed to the appendix to a large degree. Another suggestion for improvement could be to indicate more clearly in figure 1(b) that f is kept fixed in this step. This could be done e.g. with a different color of the f box in figure 1(b). <doc-sep>This paper proposes a framework for deep one-class classification (an example application being anomaly detection). The basic idea is to combine self-supervised representation learning (eg through a proxy task such as rotation prediction or contrastive learning), with a classical approach to one-class classification, such as one-class SVM or KDE. This is in contrast to existing methods for deep one-class classification that use simulated outliers to form a surrogate classification loss and then train end-to-end. The paper further improves on the first stage of representation learning, by introducing modifications to contrastive learning to make it more appropriate for one-class classification. The main insight is to introduce distribution augmentation, where geometric transformations of images, such as rotation, are treated as separate instances, to be separated from the original view. This is motivated from the perspective of reducing uniformity of the inliers across the unit hypersphere, to allow for better separation from outliers. 
Positives: + strong empirical results, with improved performance over existing methods for one-class classification + validation of two stage framework, by showing improved performance with RotNet representation with KDE detector versus RotNet end-to-end [20] + validation of improvements to contrastive learning for one-class classification, such as distribution augmentation, batch size selection, use of MLP project head Minor negatives: - I think the paper would flow a little better if the related work section was moved earlier in the paper, rather than coming only after the detailed description of the method. - In describing distribution augmentation and contrasting it with standard data augmentation for contrastive learning, it is clarified that the two sets of augmentations are disjoint. I would it have found it helpful if the paper was explicit about which data augmentations were used for the contrastive learning, as this did not seem to be stated in the paper. Overall I found this to be a nice paper with strong empirical results.<doc-sep>This paper proposes an anomaly detection approach that has two stages: a first stage for learning a feature representation and a second stage to train either a one-class classifier based on OC-SVM or KDE. The main contribution of the paper is the feature representation learning that relies on contrastive learning to optimise a self-supervised loss function which minimises the distance of the samples from the same image augmented with different data augmentation functions and maximises the distance of samples from different images augmented with the same augmentation functions. The data augmentation functions used were horizontal flip and rotation (0,90,180,270). Results on the public datasets CIFAR-10, CIFAR-100, Fashion MNIST, and Cat-vs-Dog show that the proposed method has better anomaly detection (measured with AUC) than the state of the art. The paper also displays qualitative anomaly detection results and an ablation study that shows: a) how close to uniform distribution (on hypersphere) the feature representations are as a function of batch size, and b) how AUC is affected with batch size and depth of MLP project heads. This paper has outstanding results on the datasets CIFAR-10, CIFAR-100, Fashion MNIST, and Cat-vs-Dog, but it is missing result on a challenging dataset, such as Mvtec [51]. It is also missing results on anomaly localisation (e.g., Venkataramanan, Shashanka, et al. "Attention Guided Anomaly Detection and Localization in Images." arXiv preprint arXiv:1911.08616 (2019)), so it scores slightly below acceptance for results given that it is hard to assess how the method would perform in a more realistic anomaly detection problem. In terms of the proposed method, it is quite similar to [20,21], with the difference that it uses more data augmentation functions and rely on contrastive loss. Therefore, it scores slightly below acceptance on novelty as well. One argument that seems contradictory is the one for class collision and uniformity. In particular, if pre-training forces all inlier samples to be on a hyper-sphere, wouldn't it be advantageous to have a uniform distribution given that outliers could be easily detected as not lying on the hyper-sphere? Of course, this would probably require a change in the OC-SVM classifier. Can the authors comment on that? Also, the argument on Sec. 
2.1.3, on the effect of projection heads, says that "I(g(f(x));x) <= I(f(x);x), so f can retain more information than g, thus more suitable for downstream tasks that are not necessarily correlated with the proxy tasks". If we push this argument, then I(f(x);x) <= I(x;x), so we should use x for downstream tasks. Can the authors comment on that?
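For context on where my f() vs \\phi() question comes from, the second stage as I understand it is essentially the following, with random vectors standing in for the learned features (my own sketch, not the authors' code); the only place the normalization choice enters is the explicit switch below.

```python
# Second stage of a two-stage OCC pipeline: fit a classical one-class scorer on inlier
# features only and score test points. Random vectors stand in for f(x) here.
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
F_train = rng.normal(0.0, 1.0, size=(200, 32))                 # inlier features f(x)
F_test = np.vstack([rng.normal(0.0, 1.0, size=(20, 32)),       # unseen inliers
                    rng.normal(3.0, 1.0, size=(20, 32))])      # shifted "outliers"

# Switch between scoring raw f(x) and L2-normalized phi(x); this is exactly the choice
# my question above is about.
use_phi = False
if use_phi:
    F_train = F_train / np.linalg.norm(F_train, axis=1, keepdims=True)
    F_test = F_test / np.linalg.norm(F_test, axis=1, keepdims=True)

ocsvm_scores = OneClassSVM(nu=0.1, gamma="scale").fit(F_train).score_samples(F_test)
kde_scores = KernelDensity(bandwidth=1.0).fit(F_train).score_samples(F_test)
# Higher score = more inlier-like; AUC is then computed against the ground-truth labels.
```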
This paper investigates the one-class classification problem, proposing to learn a self-supervised representation and a distribution-augmented contrastive learning method; thorough results and analysis show that the method is effective and backs up their claims in terms of the underlying mechanism for why it works. In general, reviewers thought the paper was well-written, well-motivated/argued, and presents a thorough related work comparison and experimentation, though the novelty was found to be somewhat low. Several reviewers brought up some possible weaknesses in terms of demonstrating uniformity of the representations as well as suggesting additional datasets. Through an interesting discussion, the authors provided additional visualizations and results on the Mvtec dataset. This further bolstered the arguments in the paper. Overall, this is a strong paper with a clear argument and contribution, and so I recommend acceptance.
In this paper, they introduce a new method to optimize OPAUC and TPAUC. After presenting its derivation, they provide an implementation with SGD and analyze its convergence rate and generalization. Finally, they provide experiments comparing their new method to pre-existing ones. Strengths: * Cool reformulation ideas * Decent experiments showing the strengths of their method Weaknesses: * Unclear or sloppy notation (see questions 1, 4, 5, 6) Main limitations are discussed in the questions section (in particular, questions 3 and 7). <doc-sep>The paper proposes a nonconvex-strongly-concave min-max formulation for OPAUC and TPAUC maximization and employs a stochastic min-max algorithm with $O(\\epsilon^{-3})$ complexity. Strengths: 1. The paper is well-organized and written clearly. 2. The formulation conversion of PAUC is novel. Weaknesses: 1. The paper employs an algorithm with a very strong (bad) assumption. $L_G$ in Assumption 1 can be infinity. 2. Contribution is not significant enough. Please see above. <doc-sep>This paper proposes novel algorithms to improve the efficiency of partial AUC optimization. Specifically, they present a reformulation scheme to transform the pairwise non-differentiable objective function into an instance-wise differentiable one with an approximation scheme. Moreover, they provide generalization and optimization guarantees for their proposed method. The extensive experiments in this paper show that the proposed method can outperform the state of the art most of the time. Pros: This paper presents an efficient reformulation scheme to make a complicated problem much more practical to solve. In other words, both the number of epochs and the per-iteration running time could be reduced significantly. The proposed method also has a strong and comprehensive theoretical guarantee in terms of convergence and generalization. Moreover, the technical details are non-trivial. I believe these merits can benefit a broad audience in the ML community. The experiments are extensive. Most of the competitors are quite SOTA. The paper presents solid work with the potential to be employed in real-world problems. I only have some minor concerns, which I hope can be addressed during the rebuttal. Cons: The math is dense even in the main paper. Though I can understand most of the details, I think the authors can add more details and intuitive content to guide readers unfamiliar with AUC. I only see the performance comparisons in the main paper. I think efficiency is more important in this paper since the goal is to accelerate. So, I would also like to see running time comparisons in the experiments. YES <doc-sep>This paper focuses on optimizing the One-way (Two-way) Partial AUC metric, which is challenging since a ranking constraint is involved in the objective function. Interestingly, this paper presents a simple instance-wise reformulation of the original objective, which is unbiased in an asymptotic sense. It turns out that the complicated problem could be solved with an accelerated minimax optimization problem. Moreover, the convergence rate can thus be improved. Empirically, the experiments also show its superiority in most cases. Strength: 1) The reformulation of the original problem is impressive to me, where the ranking constraints are canceled by conditional expectation and a differentiable reformulation of the top-k (bottom-k) ranking. 2) The generalization analysis is interesting, where the minimax reformulation can also simplify the derivation of uniform convergence bounds.
Moreover, the differentiable formulation also allows the analysis to deal with real-valued hypothesis classes, which previous works often fail to do. 3) Though the convergence analysis is an existing result, it is good to see that the convergence rate could decrease to O(T^{-3}) due to the reformulation. Weakness: 1) It seems that there are some typos in the proof. For example, in line 529, in the decomposition of the conditional risk, I think $\\ell$ should be replaced with $\\ell_{0-1}$. The same problem exists in line 578. 2) In Figures 4-5, I can only see the efficiency improvement in the number of iterations. But the authors also claimed that the reformulation could improve the per-iteration efficiency, which I do agree with. I think it may be better if they could provide some empirical comparison on this point. All my concerns are presented in "Weakness" and "Questions". This paper focuses on designing an efficient and asymptotically unbiased algorithm for PAUC, which seems to have no potential negative societal impact.
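As a small practical aside for readers less familiar with the metric (this concerns only evaluation, not the authors' training reformulation): the OPAUC-style quantity with a false-positive-rate cap can be computed off the shelf. The snippet below is my own, with synthetic labels and scores; note that scikit-learn returns the standardized (McClish) variant of the partial AUC rather than the raw area.

```python
# Evaluating a one-way partial AUC with a capped false-positive rate on synthetic data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = np.concatenate([np.ones(100), np.zeros(900)])           # imbalanced labels
y_score = np.concatenate([rng.normal(1.0, 1.0, 100),             # positives score higher
                          rng.normal(0.0, 1.0, 900)])

full_auc = roc_auc_score(y_true, y_score)
opauc_at_10 = roc_auc_score(y_true, y_score, max_fpr=0.1)        # OPAUC with FPR in [0, 0.1]
print(full_auc, opauc_at_10)
```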
The paper presented a novel reformulation of maximizing PAUC in an asymptotically unbiased and instance-wise manner. Based on this formulation, the authors presented an efficient stochastic min-max algorithm for OPAUC and TPAUC maximization. Convergence and generalization analyses were conducted. The concerns and questions were well addressed in the rebuttal. Following the recommendations of the reviewers, I recommend its acceptance.
The paper proposes a method to use CLIP for ordinal regression using a combination of soft labels and prefix tuning, with an interpolation scheme added to enforce order between the learnt prompts. # Strengths The idea of using context from natural language for ordinal regression tasks is interesting and worth exploring. The proposed interpolation method to allow regressing in between ordinal ranks is also clever. # Weaknesses The main weakness to me is that an obvious baseline is missing from an engineering perspective. The paper uses a VGG-16 network pretrained on ImageNet as a trainable vision encoder for several datasets. The VGG-16 network has 138M parameters, similar to the ViT in the official CLIP release. The natural baseline would then have been to train a linear probe atop the CLIP ViT-B to simply predict the rank as a classification task. Second, the results in Fig. 3 are very surprising. If I am reading it correctly, about 35% of the rank prototypes violate the ordinal property. This is a substantial portion, and suggests the proposed method cannot grasp anything on the long tail of the distribution. Third, _why_ do language priors help with this task at all? It's difficult to conceive that CLIP has learned a meaningful representation of, say, the number 72. Even if CLIP did have a very meaningful representation of some arbitrary number, the rank embeddings are entirely learned. How is the language information being used? T1 is missing variance numbers for the comparison with CoOp. As evident in T2, CoOp and OrdinalCLIP are very close, with large variances. The authors should include a statistical test to confirm that the proposed method is indeed superior to CoOp. There are no variance numbers again in Table 7. # Summary The technique presented is a somewhat straightforward extension of CoOp, with the main novelty being that the literal rank names are replaced with soft class names. The results are in general very close to those of CoOp, and some tables are missing variance information, while others have variance information. Given how close the results of the method are to CoOp and how high the included variances are, variance results should be included for all tables, and statistical significance tests conducted. An important baseline (linear probing) is missing. Furthermore, it is unclear to me why language information can help with this kind of task at all; I do not see any experiments to explain why language information can help with this kind of task, or provide insight into what the language model is adding here. Finally, the result showing that 35% of the rank embeddings do not obey the ordinal property is troubling, especially since the "broken" rank embeddings are clustered in the long tail. This suggests the error distribution of the model is highly biased and the model does not work well at all for the tail. Limitations are not discussed. <doc-sep>The authors propose a language-powered paradigm for ordinal regression tasks by learning rank prompts, named OrdinalCLIP. OrdinalCLIP can leverage rank categories of language to explicitly learn ordinal rank embeddings, which preserve the order of the language prototypes in the language latent space. On the three regression tasks of age estimation, historical image dating, and image aesthetics assessment, the experimental results show better performance than other baseline models. In addition, for few-shot learning, the method also gains improvements. The overall structure is well-organised.
The paper has a clear motivation and is innovative for the regression field. Strength: 1. The innovative language-powered paradigm for ordinal regression uses language prototypes and learned rank prompts, which are interesting and valuable. 2. The good performance shows the effectiveness of OrdinalCLIP. 3. The supporting analysis and experiments in the appendix are detailed. Weakness: 1. The introduction, related work, and problem statement should narrow down ordinal regression to the vision-language or CV ordinal regression task, because there are some purely language-based ordinal regression tasks. 2. The two losses (the image-to-text loss and the text-to-image loss) should be introduced in detail, along with the reason for using KL divergence. 3. "We choose to maintain the order of rank embeddings to preserve the order of the language prototypes." This statement is unclear. How is the order of the language prototypes maintained? Yes <doc-sep>The authors propose a language-powered model for ordinal regression based on CLIP. The language prototypes are constructed from sentences with rank categories via the CLIP text encoder, and the CLIP model is then optimized by matching language prototypes and image features. To further boost the ordinality, this paper introduces learnable rank prompts obtained by interpolation from the base rank embeddings. Multiple experiments on age estimation, image aesthetics assessment and historical image dating show that the proposed paradigm surpasses other related methods. Strengths: 1. Introducing the contrastive language-image pretraining (CLIP) model as a paradigm for ordinal regression is novel to me. 2. The proposed language prototypes and learnable rank prompts are insightful extensions of the CLIP model for the ordinal regression task. 3. The proposed interpolation-learned rank prompts contribute to smooth language prototype similarity trends, which represent well-learned ordinality. Weaknesses: 1. The writing of this paper should be improved, including motivation, related work and task description, etc. 2. The motivation for the two proposed interpolations, linear interpolation and inverse-proportion, is unclear and lacks a visual comparison beyond the numerical comparison. The authors have discussed the limitations and potential negative effects of their work.
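To make my second weakness point more concrete, this is the kind of toy picture I have in mind for the (linear) interpolation scheme; the dimensions and anchor ranks are made up and this is not the authors' code. A figure in this spirit, contrasting linear with inverse-proportion interpolation, would address the visualization concern.

```python
# Toy sketch of interpolated rank prompts: a few base rank embeddings, linear interpolation
# for intermediate ranks, and a prediction read off as the expected rank under softmax
# similarities to an image feature.
import numpy as np

rng = np.random.default_rng(0)
d, num_ranks = 16, 101                        # e.g. ages 0..100
base_ranks = np.array([0, 25, 50, 75, 100])   # anchor ranks with explicit base embeddings
base_emb = rng.normal(size=(len(base_ranks), d))

# Linear interpolation between neighbouring base embeddings, dimension by dimension.
rank_emb = np.array([[np.interp(r, base_ranks, base_emb[:, j]) for j in range(d)]
                     for r in range(num_ranks)])

img = rng.normal(size=d)                      # stand-in for an image feature
sims = rank_emb @ img / (np.linalg.norm(rank_emb, axis=1) * np.linalg.norm(img))
probs = np.exp(10 * sims) / np.exp(10 * sims).sum()    # temperature-scaled softmax
pred_rank = float(probs @ np.arange(num_ranks))        # expected rank as the prediction
print(pred_rank)
```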
The paper proposes a language-powered model for ordinal regression tasks, based on CLIP. Language prototypes are constructed from sentences with rank categories via the CLIP text encoder, and the CLIP model is then optimized by matching language prototypes and image features. To further boost the ordinality, this paper introduces learnable rank prompts obtained by interpolation from the base rank embeddings. While the proposed approach builds on CoOp, reviewers agree the contribution is significant enough and original enough for NeurIPS. Regarding the experimental section, on three regression tasks (age estimation, historical image dating, and image aesthetics assessment), the results show good performance compared to baseline models. Concerns regarding the writing of the manuscript have been raised [PfAX, RX3e], but seem to have been addressed during the rebuttal phase.
This work concerns the asymptotic behaviour of the Gaussian-smoothed 2-Wasserstein distance. In particular, the authors provide bounds on the difference between the Wasserstein distances of a pair of discrete measures and of their smoothed versions when the variances of the Gaussian kernels are small. The authors consider two scenarios: when a perfect matching between the two discrete measures exists and when it does not. For the former case, they show that the asymptotic gap decays exponentially in some near-zero regions, and linearly otherwise. For the latter case, they show that the gap is linear even in a region around zero. Strengths: - The paper is well-written and well-presented. - The addressed questions are of theoretical importance, and the given answers to these questions are novel and complete to the best of my knowledge. The authors cover all possible scenarios, and the tools used for proofs (stronger notions of cyclical monotonicity and implementability, robustness of optimality and their relations) are interesting to me. Weaknesses: - In the experiment, the toy example is indeed beneficial for understanding, but I think the authors should also verify their theory in large-scale settings (i.e., large $m, n$ - maybe in the one-dimensional space if the design of points satisfying the uniqueness of the optimal plan is an issue). The authors have adequately addressed the limitations and potential negative societal impact of their work. <doc-sep>The paper provides an approximation rate for Gaussian-smoothed Wasserstein distances for discrete measures. It shows that the approximation rate can be exponential in the perfect-matching plan case (with a phase transition to a linear rate if $\\sigma$ is large) and is linear otherwise. **Strength**: The results in this paper are interesting and all the theoretical results are technically correct. **Weakness**: The setting considered in the paper appears to be restrictive. Both measures of interest are assumed to have a finite number of support points. This makes the developed theory difficult to apply to the general setting of Gaussian-smoothed Optimal Transport. Some points in the presentation that can be improved: 1. In Proposition 1.1, the constant $c$ is dependent on $d$ (dimension) as well. The authors should make this point clear so it is coherent with the argument about the curse of dimensionality of the Wasserstein distances. 2. In the presentation of Section 3 (Case I): We need reasons why you choose to present the proof of Theorem 3.3 but not of other results (like Theorem 3.1, which appears to be one of the main results). If the proof of Theorem 3.1 is not presented in the main text, then why do you present Lemma 3.4? In general, the writing style in Sections 3 and 4 needs to be improved so that the reader can know what proof is going to be presented and what is deferred to the appendix. 3. The order of results in the appendix is not consistent. Dividing those results into several small sections may help to keep track of this issue. Because of the restriction of the setting in the paper, it is not obvious what the implications are for using Gaussian-smoothed Optimal Transport with general measures. I recommend considering the problem in a more general setting and trying to see if the current approach still works.
<doc-sep>This paper studies the approximation of the 2-Wasserstein distance between two discrete probability measures $\\mu$ and $\\nu$ by the 2-Wasserstein distance between the same distributions smoothed by a Gaussian measure, which is called Gaussian-smoothed optimal transport. In particular, they prove the existence of a phase transition in the small noise regime of the variance parameter, which depends on the existence of a perfect matching between the distributions $\\mu$ and $\\nu$. Studying the behaviour of the Gaussian-smoothed OT distance is particularly interesting as it approximates the true transport and does not present a curse of dimensionality in the sampling complexity. In particular, proving the curious result of the existence of a phase transition in the context of finitely supported measures $\\mu$ and $\\nu$ is very nice. Moreover, this paper is well written, the problem and presentation are clear, and the choices made for the study (the finitely supported framework) are precisely justified. The proofs, based on strong cyclical monotonicity and perfect matching, are quite elegant. The simulation study is also concise and convincing. Overall, the results constitute a modest (because limited to the Gaussian-smoothed OT distance) but very interesting contribution to the study of approximations of the classical Wasserstein distance. Minor comments: - The notations for the parameter $\\sigma$ of the Gaussian and the permutation $\\sigma$ can be confusing. - The order and presentation of proofs in the appendix could be improved. The authors decided to limit their study to finitely supported measures, which is well justified. <doc-sep>This paper presents an analysis of the Gaussian-smoothed Wasserstein distance (GOT) in the framework where the Gaussian kernel parameter $\\sigma$ is "small". It is already known that GOT approximates the true Wasserstein distance and that the difference between the two is of order $\\sigma$. The objective of this paper is to refine this bound and to show, under certain assumptions of uniqueness of the transport plan, that this bound can be improved and that GOT approximates Wasserstein exponentially well in a certain regime. More precisely, the authors show that there is a phase transition on $\\sigma$ such that, below, the bound is exponential, and, above, it is linear in $\\sigma$. This paper completes the understanding of GOT with respect to $\\sigma$, which has already been studied in the $\\sigma \\to +\\infty$ regime. Overall I find this article quite well written; the thread of definitions and proofs is clear, and the ideas are well linked. From a purely theoretical point of view the results are, I think, really interesting. They complete the statistical understanding of GOT with respect to $\\sigma$, which is a nice contribution. Moreover, the ideas/tricks introduced and the theorems go far beyond the study of the GOT distance and can certainly be used to establish other theoretical results in optimal transport. In particular I think that the notions of strong implementability/strong cyclical monotonicity and robustness of the transport plans are useful and rich. They allow us to establish the uniqueness of the transport plan, which is, in the discrete case, a key property that is not much addressed by the community as far as I know. Having clarified all these properties around strong implementability is, for me, a contribution that is useful in itself.
The phase transition of GOT also opens the door to other studies, notably on sample complexity or on approximations of the Wasserstein distance. The main criticism I would have is that this article focuses on a really specific and technical problem, related to a sub-problem of optimal transport. For a reader who is interested in GOT the contributions are certainly really interesting, but the article does not discuss the potential applications either for optimal transport in general or for machine learning. To be more precise, the article strings together theoretical results without giving much insight, nor discussing the propositions and theorems. It also lacks a conclusion that could perhaps bring some perspectives around this work. In this context, I think it would be interesting to save space by moving the proof of Theorem 3.3 to an appendix to make a conclusion/perspective part and to discuss the different results and their implications. For example it could be interesting to discuss, even informally, the possible generalization to the case of continuous measures or to explain the implications for the Wasserstein distance approximation. Moreover, the numerical results are very succinct and, I find, difficult to read. I find for example that the phase transition, which is the center of the contributions, is not really visible on Figure 2. I think that this part should be more complete, by better illustrating this phase transition or, for example, the notion of robustness of a transport plan. For these reasons I rather recommend a weak-accept, but I am ready to change my mind depending on the authors' answer. Small remarks: - In terms of notation, $\\sigma$ is used both for a permutation and for the parameter of the Gaussian kernel. - Moreover the fact that $\\sigma$ is a permutation in Definition 2.4 is not clearly stated in the article. - Typos: $T$ instead of $\\Gamma$ in Proposition 2.12 and 2.13. The authors did not discuss the potential negative societal impacts of their work; however, this is not really relevant in this context as the article is quite theoretical and specific. Concerning the limitations: I find that the authors could discuss how, in practice, one can check whether we are in the regime of fast approximation (i.e. when the transport plan is unique). ------- AFTER REBUTTAL ------- As written below I am satisfied with the authors' answer, so I change my score to 7
All reviewers are in agreement that the main factors (in particular, the results and their presentation) are above the bar for NeurIPS. No significant concerns remain following the author response and the discussion period. I encourage the authors to carefully take into account all of the minor comments when preparing the camera-ready version.
The paper surveys recent offline RL algorithms and seeks to analyze different factors contributing to their performance. Novel evaluation protocols are designed to analyze their representation and behavior. Based on this analysis, the authors propose a well-motivated modification to IQL which achieves strong results on several D4RL datasets. Strengths: - Well-motivated range of analysis on the representation learned by an offline RL algorithm. Proposed algorithms (RIQL, USS variants) show strong performance and are motivated by earlier findings in the paper. - Novel and insightful analysis of how to integrate model-free methods into a model-based framework, and how to address the failures of a naive approach. Weaknesses: - On lines 208 and 269, the uncertainty estimate used is the max mean-discrepancy; this measure is unusual, as one would typically use the standard ensemble variance in supervised learning [1]. - Evaluation is solely on med/exp datasets; a good understanding of the strengths of each algorithm on mixed/random data, where we may expect a higher extrapolation gap, would be useful. Minor: - TD3BC should be TD3+BC - Typo on line 182: dose -> does - Lines 204, 262: the learned probabilistic model and model-based training are more accurately attributed to MOPO than to COMBO. In all tables and results, the total number of random seeds should be reported. [1] Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles. Balaji Lakshminarayanan, Alexander Pritzel, Charles Blundell. The paper provides good insights on the datasets it tests, however: - Limited evaluation on med/exp datasets; this could be expanded to include more datasets from the D4RL suite. - Incomplete information on seeds used during evaluation. - Possible inaccuracy of using online Q-functions as ground-truth. Tabular settings where the precise Q-function can be determined could elucidate this better. <doc-sep>The paper provides a comprehensive analysis of state-of-the-art offline reinforcement learning algorithms. In particular, the paper evaluates the critics and policies from offline RL algorithms using several metrics, including representation probing (where the learned representations are used to predict various quantities), effective rank, action ranking, etc. The authors observe a mismatch between the quality of critics' predictions and the performance of the policy for some methods. Based on this insight, the authors propose a modification of IQL that achieves significant improvements over the original version. # Strengths * The paper is well written and easy to follow. * I believe that the analysis of offline reinforcement learning methods is very valuable to the community and has received only limited attention so far. * The analysis covers a variety of metrics for evaluating the critic and policy separately. * The metrics introduced in the paper can help tune the components of offline reinforcement learning methods separately. The analysis has been applied in practice to the improvement of IQL. # Weaknesses * The paper focuses only on a subset of tasks considered in CQL and IQL. * The paper focuses only on improving IQL. To sum up, I believe this paper is relevant to the offline reinforcement learning community, and the pros outweigh the cons. Therefore, I recommend this paper for acceptance. The paper can be improved by considering a wider variety of datasets from D4RL and applying the same analysis to enhance the other methods.
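Regarding the first reviewer's comment above on the uncertainty estimate, the sketch below contrasts the two options. The reading of "max mean-discrepancy" as the largest pairwise gap between ensemble-member mean predictions is my assumption, not necessarily the paper's definition, and the function names are illustrative.

```python
# Illustrative comparison (not the paper's code) of two ensemble-based
# uncertainty measures for K learned dynamics models predicting next-state means.
import numpy as np

def ensemble_variance(mu):
    """Standard deep-ensemble uncertainty: total variance of member means."""
    return mu.var(axis=0).sum()                     # mu: (K, state_dim)

def max_mean_discrepancy(mu):
    """One reading of 'max mean-discrepancy': largest pairwise L2 gap
    between the mean predictions of any two ensemble members."""
    diffs = mu[:, None, :] - mu[None, :, :]         # (K, K, state_dim)
    return np.linalg.norm(diffs, axis=-1).max()

# Example: 5 ensemble members, 17-dimensional state prediction.
mu = np.random.default_rng(0).normal(size=(5, 17))
print(ensemble_variance(mu), max_mean_discrepancy(mu))
```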
<doc-sep>Introduces a set of experiments to evaluate & diagnose the bottleneck of offline RL algorithms along three axes: (1) Representations - Representation probing: Use the second-to-last layer of the critic network to predict the next state, reward, and action via linear regression. Similarly, use the actor embedding to predict the optimal action, value function, and Q-function (which is approximated by a trained TD3 policy). - Representation metrics: Feature dot-product and effective rank of critic representations. (2) Value functions - Rank actions using the learned Q(s, a). Observation: TD3BC and IQL learn more accurate value functions (policy evaluation might be more effective), but achieve worse performance (policy improvement might be limited). More generally, an offline RL algorithm can be higher-performing, but learn poor representations and value functions. (3) Policies - Policy ranking experiment: Average MSE of the selected action vs. the optimal action. - How often does the policy take OOD actions? Observations: - COMBO selects the most optimal & worst actions. - A performant policy is good at selecting better actions, even if the actions are sub-optimal. The paper also introduces a new offline RL algorithm called RIQL (Relaxed In-Sample Q-learning), based on a simple modification of IQL. - The method is motivated by empirical, heuristic-based observations: that AWR-based policy improvement is effective at avoiding OOD actions, but is sometimes over-conservative. It adds extra policy constraints and uses a less conservative actor loss to enable learning from OOD actions. - Note that there is no theoretical justification or provable guarantees for the method. The paper also investigates when a learned dynamics model helps model-free offline RL. Introduces an uncertainty-based sample selection method that is more robust to model noise. Strengths: The paper presents thoughtful experiments analyzing different offline RL algorithms. It is an interesting result that performant offline RL algorithms often exhibit poor representations and inaccurate value functions. (However, I wonder if this is due to the environments/tasks, which are all similar, state-based locomotion tasks.) Overall, I think the empirical experiments & analyses are useful for better understanding offline RL algorithms on simple classic control tasks, although it is unclear if these analyses still hold for more challenging tasks. Weaknesses: 1. My main concern is that all evaluations are done on the toy classic control locomotion tasks in the D4RL benchmark, which are limited due to the fact that simple filtered behavioral cloning outperforms SOTA offline RL methods on D4RL; and therefore, good performance on these simple D4RL tasks does not necessarily translate to good offline RL performance. The paper's contribution would be greatly improved if the authors added a more challenging task beyond toy locomotion, such as D4RL AntMaze navigation, manipulation tasks, or image-based tasks such as Atari. 2. The proposed method (RIQL) performs similarly to other baseline methods on the toy locomotion tasks (Table 10), which shows that RIQL is a reasonable offline RL algorithm to use. On the other hand, RIQL is a heuristic-based modification of existing offline RL objectives, and there is no theoretical justification (e.g. provable guarantees about policy improvement) for the proposed method RIQL. Yes
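The representation-probing protocol summarized above is essentially linear probing of frozen features; a minimal sketch is given below. The helper names, the ridge-regression probe, and the 80/20 split are illustrative assumptions, not the paper's exact protocol.

```python
# Minimal sketch of linear probing on a frozen critic (illustrative only; the
# helper names and the ridge probe are assumptions, not the paper's protocol).
import torch
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

@torch.no_grad()
def penultimate_features(critic_trunk, states, actions):
    # `critic_trunk` is assumed to be the critic network up to (and including)
    # its second-to-last layer.
    x = torch.cat([states, actions], dim=-1)
    return critic_trunk(x).cpu().numpy()

def probe_r2(critic_trunk, states, actions, targets, alpha=1.0):
    """Fit a linear probe on frozen critic features and report held-out R^2,
    e.g. with `targets` (a NumPy array) being rewards or flattened next states."""
    feats = penultimate_features(critic_trunk, states, actions)
    n = len(feats)
    tr, te = slice(0, int(0.8 * n)), slice(int(0.8 * n), n)
    reg = Ridge(alpha=alpha).fit(feats[tr], targets[tr])
    return r2_score(targets[te], reg.predict(feats[te]))
```

A high probe score with a low-performing policy (or vice versa) is exactly the kind of mismatch the review describes.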
The main strengths of this paper are that (1) it provides some interesting analysis that leads to some somewhat surprising findings, and (2) it presents and evaluates some new technical algorithmic ideas based on this analysis that lead to improved performance. After the author discussion, the main weakness is that the new ant-maze results are somewhat disappointing, showing that the algorithmic ideas don't improve over IQL on a more complex problem setting. The ant maze tasks are a lot more interesting and complex than the standard locomotion tasks, and so this is a fairly major weakness. Of lesser importance, the title is not particularly descriptive, and could be used to describe a lot of papers. So, I would suggest that the authors make the title more specific to the contributions of this paper. Overall, the reviewers and AC think the strengths outweigh the weaknesses, especially since the analysis is interesting on its own and since there is some new analysis on more complex image-based settings, irrespective of the technical ideas only providing benefits on simplistic locomotion tasks. Nonetheless, we encourage the authors to use our feedback to further improve the paper.
This paper proposes an algorithm for off-policy reinforcement learning using the Hellinger distance between the sampling policy and the optimized policy as a constraint. The motivation for the proposed method is explained in the preliminaries section. The actual algorithm and experiments run using the proposed algorithm are also provided. The derivation is easy to follow, and this is because of the well-known lower and upper bounds on the Hellinger distance. The writing of the paper needs work. For example, the abstract talks about the sampling policy and current policy. By current policy, what the authors mean is the policy that is being optimized. The sampling policy is the policy that was run offline. Clarifying these terms would help. Similarly, I did not follow "return for the new policy is improved comparing to KL". In paragraph 3: "With the use of Lagrangian, have been derived" needs proofreading. In eqn 13, what is beta? In the figures, what are the axes?<doc-sep>########################################################################## Summary: The paper provides a new metric - the Hellinger distance - to be combined with trust region ideas in policy optimization. The major difference from prior work is the change of this distance metric. The paper shows that with this distance metric, along with Lagrangian relaxation, one could show analytic results of improved policies. The paper also shows similar lower bound improvement results and compares with baselines on offline RL tasks. ########################################################################## Reasons for score: Overall, I vote for rejection. I think the idea of changing the distance metric is not novel enough. Critically, I do not think so far in the paper there is a strong enough motivation to use this distance metric: both innovation-wise and result-wise. I will explain in detail below. ########################################################################## Pros: 1. Idea is not novel: the overall idea of using an alternative metric does not seem novel. Though the authors motivated an 'improved' version of the trust region lower bound, by using the fact that the Hellinger distance is upper bounded by KL - I think such an improvement in the lower bound is a bit trivial and does not provide new perspectives on the old results. 2. This new lower bound also might not provide additional benefits in practice - because in practice such lower bounds are generally too conservative. 3. Experiment results are also not strong enough. I will explain below. ########################################################################## Cons: 1. The final performance of all three baseline algorithms is fairly bad in terms of final rewards (e.g. for halfcheetah, all returns are negative, yet we know that online algorithms could achieve >3000 at least and in some cases >6000). I wonder if this general inferior performance is a result of using the offline dataset - in that sense, does the agent learn anything meaningful at all? 2. From both fig 1 and fig 2, for about half of the tasks the performance seems to drop (or stay at the same level) relative to the case where no training is done (x-axis at the origin). Does this also corroborate my previous concern that these agents do not learn much at all? 3. From the curves presented in Figs. 1 and 2, as well as the mean+std results in Tables 1 and 2, it does not seem that the new method provides significant gains either.
########################################################################## Questions during rebuttal period: Please address and clarify the cons above. Thanks. #########################################################################<doc-sep>The authors propose the use of the Hellinger distance instead of the KL divergence to constrain new policies to remain close to the behavior policy. The technical aspects are straightforward, noting that Hellinger provides tighter bounds on total variation than KL, and can straightforwardly be plugged into the CPI/TRPO bounds for policy improvement. They also propose an offline reinforcement learning algorithm based on enforcing a Hellinger constraint to the data policy, deriving an iterative optimization procedure, and evaluating it on offline datasets. I find the experimental evaluation highly lacking. It seems with the datasets and envs evaluated, policy performance actually *drops* as policy optimization is conducted, so it is not clear to me that these evaluations actually provide meaningful information towards which methods perform better in scenarios where we would want to use offline RL. I would like to see much more extensive evaluation of this method compared to other offline RL algorithms like BCQ https://arxiv.org/abs/1812.02900, BRAC https://arxiv.org/abs/1911.11361, or CQL https://arxiv.org/abs/2006.04779, over a much wider variety of datasets. In general, I'm not convinced that simply using the Hellinger distance instead of KL will lead to significant improvements on its own, given that in the BRAC paper, the authors experimented with different trust regions including Wasserstein, MMD, and KL and didn't find huge differences in the tested domains. Overall, the contribution does not seem significant enough to warrant publication without strong experimental results, which this paper lacks.<doc-sep>Summary: This paper proposes a supervised learning approach for off-policy reinforcement learning. It exploits the Hellinger distance instead of the KL divergence. Thus it achieves a tighter lower bound on the expected cumulative return than that using the KL divergence. Moreover, the new lower bound is policy independent. The experimental results show that the proposed method slightly outperforms other baselines when only a small amount of data is given, while the algorithms fail to learn on several environments. Reasons for score: Though it has some advantages, I vote to reject this paper. This is because it has low novelty, the experiments are wrongly designed, and thus it is hard to believe the results. The specific details are below. Pros + Hellinger divergence is used instead of KL divergence, and thus the lower bound becomes tighter than that using the KL divergence. + The loss function for the policy can be derived from theory Cons - Changing the KL distance to the Hellinger divergence has low novelty. Also, the derivation of the loss function using the Hellinger distance isn't difficult. The Hellinger distance and the KL divergence both belong to the class of Amari alpha-divergences: when $\\alpha = \\pm 1$ the Amari $\\alpha$-divergence becomes KL, and when $\\alpha = 0$ it becomes (up to a constant factor) the squared Hellinger distance $\\int (\\sqrt{p} - \\sqrt{q})^2 dx$ (a short LaTeX sketch of this family is given after this review). Indeed, HD is symmetric and satisfies the axioms of a distance. Basically, the Hellinger distance equips the space of probability distributions with a Euclidean-like geometry, whereas the KL divergence corresponds to a Boltzmann-type interpretation, i.e., $p \\sim \\exp(-\\mathrm{KL})$.
- In addition to the issue of limited novelty, the numerical results show that the performance improvement is insignificant or negligible. - The experiments used data sampled by random policies or the first few samples of on-policy data, but I think this is a somewhat strange training setting. Most of the previous works in this line use samples at a certain performance level (NOT DRAWN BY RANDOM POLICY). For example, the ABM paper [1] uses the first 10,000 episodes (if the length of an episode is 1,000, this corresponds to the first 10 million samples), or the first 2,000 episodes (the first 2 million samples), to show performance when high-performing or low-performing samples are used, respectively. These contain well-performing samples relative to random samples. However, experiments in this paper use almost random samples to train policies. We cannot expect a good policy at a certain performance level using these random samples. This expectation is also borne out in the results. Some learning curves go down as learning proceeds, which means that learning fails on these environments. If the proposed method learned successfully while the others failed to learn, it would be a meaningful result, but otherwise it is not. I think that the authors should evaluate performance using better samples to prove that the proposed method outperforms others. Reference [1] Noah Siegel, et al. Keep doing what worked: Behavior modelling priors for offline reinforcement learning. In International Conference on Learning Representations, 2020.
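To make the divergence-family remark above concrete, here is a short LaTeX sketch of one common parameterization of the Amari alpha-divergence, together with the standard Hellinger / total-variation / KL comparison inequalities that trust-region arguments typically rely on. Constants depend on the chosen conventions, so treat this as a reference sketch rather than a restatement of the paper's bounds.

```latex
% One common parameterization of the Amari alpha-divergence and its limits
% (conventions vary; this is a reference sketch, not the paper's statements).
\[
  D_\alpha(p\,\|\,q)
  = \frac{4}{1-\alpha^2}\Bigl(1 - \int p(x)^{\frac{1-\alpha}{2}}\, q(x)^{\frac{1+\alpha}{2}}\, dx\Bigr),
  \qquad \alpha \in (-1, 1).
\]
% Limits and the alpha = 0 case:
\[
  \lim_{\alpha \to -1} D_\alpha(p\,\|\,q) = \mathrm{KL}(p\,\|\,q), \qquad
  \lim_{\alpha \to +1} D_\alpha(p\,\|\,q) = \mathrm{KL}(q\,\|\,p), \qquad
  D_0(p\,\|\,q) = 2\!\int\!\bigl(\sqrt{p}-\sqrt{q}\bigr)^2 dx .
\]
% With the conventions H^2(p,q) = 1 - \int \sqrt{pq}\,dx and
% TV(p,q) = (1/2) \int |p - q|\,dx, a standard comparison chain is:
\[
  H^2(p,q) \;\le\; \mathrm{TV}(p,q) \;\le\; \sqrt{2}\, H(p,q),
  \qquad
  2\,H^2(p,q) \;\le\; \mathrm{KL}(p\,\|\,q).
\]
% Any upper bound on TV^2 (via H or KL) can then be plugged into
% CPI/TRPO-style policy-improvement lower bounds.
```

This is the sense in which the reviewers above treat KL and Hellinger as members of one family: swapping them changes the constants in CPI/TRPO-style lower bounds rather than the structure of the argument.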
The reviewer concerns generally centered around the novelty of replacing the distance metric for a policy constraint. While the authors clarified many of the reviewer concerns and added some additional comparisons, in the end it was not clear why the proposed approach was interesting: it is true that this particular distance metric has not been evaluated in prior work, and the result would have been interesting if it had shown some clear benefit either empirically or theoretically, but in the absence of a clear and unambiguous benefit, it is not clear how valuable this concept really is. After discussion, the reviewers generally found the paper to not be ready for publication in its present state.
This paper presents the NetHack Learning Dataset (NLD), which has 3 parts: i. 1.5 million human trajectories recorded from the NAO public NetHack server; ii. 100,000 trajectories from the symbolic bot winner of the NetHack challenge 2021; iii. code for users to apply these trajectories in a compressed format. To demonstrate the utility of NLD, the authors train and compare several algorithms spanning online RL, offline RL, imitation learning, and learning from observations only. 1. The paper presents a large-scale dataset for a fast and challenging RL environment (NetHack). The authors also provide high-performance code to load this dataset and train agents, which makes larger-scale experiments more accessible to those with smaller compute budgets. 2. The dataset and code are well-documented. 3. Baseline results cover a range of different approaches to train agents (online RL, offline RL, learning from observations only, combinations of offline and online learning). 1. Appendix G.6 mentions a SQL interface to consider subsets of the datasets, according to the metadata. Based on Table 2, the performance of learned policies is well below the dataset average. I think the paper would benefit from additional investigation and discussion on why agents trained on NLD-AA (a narrower data distribution generated by a symbolic bot, with cleaner demonstrations) outperform those trained on NLD-NAO (human demonstrations). For instance, cutting NLD-NAO trajectories short so that their length is similar to that of NLD-AA, to exclude the later-stage states of NetHack. From Figure 1, it seems that NLD-NAO trajectories may be several times longer than NLD-AA. 2. Only 5 seeds are reported. In Figure 2, it seems that APPO methods have very high variance (blue and orange lines), and would likely benefit from running more seeds. The discussion would also benefit from exploring why APPO has higher variance between 0 and 600M steps, after which the variance suddenly drops. <doc-sep>The paper introduces a dataset called NLD, to be useful for imitation learning and offline RL studies on the NetHack Learning Environment (NLE) [Kuttler+, NeurIPS-20]. NLE (and potentially its controlled version, MiniHack [Samvelyan+, NeurIPS-21]) is an OpenAI gym environment from the same author group, built on a popular Rogue-like (or dungeon explorer-type) game, NetHack. NLD-NAO collects 1.5M human game plays in a state-only trajectory form with gameplay metadata. NLD-AA collects 100K plays by the winning bot (AutoAscend) from a NeurIPS-21 competition, in a state-action-score trajectory form. NLE itself is updated (v0.9.0) to be compatible with NLD, containing TtyrecDataset in its python modules (nle.dataset). Experiments using the dataset are included. Related datasets for similar benchmarks (StarCraft, Dota, and MineRL) are discussed. It is valuable that the work includes experiments for online RL, offline RL, imitation learning, and learning from demonstrations, which demonstrate example usage and the usefulness of the dataset.
Unfortunately, the process for reproducing experimental results shown in the paper is not clear from the main paper, supplemental material, or github (https://github.com/dungeonsdatasubmission/dungeonsdata-neurips2022/tree/a3f01c425f5d75ce9174b76105ac34a377c2df30/experiment_code). <doc-sep>This paper introduces the Nethack Learning Dataset (NLD), which is a large dataset of demonstrations from the NetHack game. It has 2 partitions: NLD-NAO, which contains roughly 10 billion state-only transitions and metadata from 1.5M human games, scraped from online web servers; and NLD-AA, a collection of roughly 3 billion transitions with complete states, actions, and scores, generated by a winning bot on the NetHack challenge. Python scripts are provided to load these datasets efficiently. Experimental results are demonstrated on the dataset with RL methods like APPO and DQN, and imitation learning methods like BC and BCO (clone from observations). * The paper is well-written and easy to follow. * The human dataset is high quality, as it is scraped from actual game plays on the web servers. The scale is huge - 1.5M human trajectories with almost 10B transitions. This is much more than most other video game benchmark data, and is provided under the open-source GPL license. * The NLD-AA dataset contains trajectories generated by a winning bot on NetHack. While not as high quality as the human dataset, it has ground-truth actions to train an inverse dynamics model and do direct behavior cloning. It is complementary to NLD-NAO and I am glad that the authors included both. * The experiments use off-the-shelf methods like A-PPO, DQN, CQL, etc. No novel algorithm is proposed, but these methods are standard enough to provide good baseline results for the dataset. * "Dungeon and Data" - great name! My main concern is about the benchmark's contribution to advances in the broader policy learning community. NetHack is a very niche domain. It is an ASCII-based game that does not make much sense to untrained eyes, and does not have any meaningful high-dimensional observation like 3D perception or other sensing modalities. In addition, the action space and world transition dynamics are quite simplistic. I am not convinced that the potential future algorithms developed on NetHack or NLD will be generally applicable to other embodied agent domains. As the authors pointed out, even a purely hard-coded agent called "AutoAscend" is able to achieve nontrivial performance on the benchmark. The AutoAscend bot actually contributes the NLD-AA dataset with full state-action-score trajectories. I am not claiming that mastering NetHack is easy. But in contrast, it is extremely difficult to hard-code a robot agent in Habitat [1] or AI2Thor [2] from pixels alone even for the simplest tasks. While the authors discussed the connection to robotics in L337-339, I am still doubtful of NetHack as an effective testbed for general-purpose embodied agent algorithms. That being said, I'm still leaning towards acceptance for the large-scale dataset introduced in this paper, considering that much simpler Gridworld environments without datasets have been accepted at top conferences. * [1] Habitat: A Platform for Embodied AI Research. Savva et al. 2019. * [2] AI2-THOR: An Interactive 3D Environment for Visual AI. Kolve et al. 2019. <doc-sep>The paper introduces NLD, a large-scale dataset of demonstrations from NetHack. The dataset enables research in multiple areas, such as imitation learning and learning from both offline data and online interactions.
Empirical results indicate that significant research advances are needed to fully leverage large-scale datasets for solving challenging decision-making problems. 1. The dataset is combined with NLE, which can provide substantial insight for studies of MDP problems. The proposed dataset enables computationally-accessible research in multiple areas including imitation learning, offline RL, learning from sequences of only observations, as well as combining learning from offline data with learning from online interactions. 2. The dataset has many properties of real-world domains such as partial observability, stochastic dynamics, sparse reward, long trajectories, a rich environment, diverse behaviors, and a procedurally generated environment. Such properties allow the dataset to provide a more realistic evaluation environment, thereby making the evaluation of RL algorithms more reliable. 3. NLD can enable agents to learn from demonstrations containing only observations, and to learn from both static data and environment interaction. 4. The proposed NLD is complete; it is composed of three components: NLD-NAO, NLD-AA, and TtyrecDataset. The significance of each component for dealing with decision-making problems is clearly explained in the paper. Also, the details of the released data, including the raw features used, the file format, and scalability, are described clearly. 5. The dataset has been open-sourced, and there is a detailed introduction for usage in the corresponding repository, making it easy for researchers to quickly get started with development and research on NLD. 6. NLD strikes a better balance between scale (i.e. a large number of diverse human demonstrations on a complex task) and efficiency (i.e. cheap to use and fast to run). 7. Experimental results indicate that NLD poses a substantial challenge to state-of-the-art methods, as the NLE environment is highly stochastic and partially observed. 1. The authors should explain the relation between symbolic methods and RL in detail, corresponding to the sentence in Section 2 "Symbolic bots decisively outperformed deep RL methods, with the best performing symbolic bots surpassing state-of-the-art deep RL methods by a factor of 5." 2. The paper should provide a quantitative metric to compare data scale and implementation efficiency across different reinforcement learning datasets. Such a metric could further demonstrate the significance of the NetHack dataset. Also, the authors should conduct experiments to compare different decision-making datasets in terms of randomness, the size of action and state spaces, and partial observability. In other words, the paper needs to quantitatively emphasize the necessity and novelty of NLD relative to existing research. 3. Section 5 does not fully explain the research significance of this dataset, only mentioning that there is room for improvement over symbolic methods, i.e., NAO-AA. <doc-sep>This paper presents NLD (NLD-NAO and NLD-AA), a large-scale dataset for the NLE environment. NLD is easy to use (cheap but large-scale, with pipeline code ready) and is hard for the offline RL community. Several promising directions can utilize NLD, including learning from only observations. 1. The author provides detailed descriptions and analysis of NLD-AA and NLD-NAO, including the performance distribution and other attributes (format, metadata, ...). 2. The author effectively shows the computation cost of this dataset backed with solid numbers (38TB -> 229GB), which is impressive! 3. 
The author illustrates the difficulty of solving this dataset and points out several promising research directions. I did not personally find significant weaknesses. <doc-sep>The paper proposes a dataset of trajectories gathered from the game of NetHack. It has human-generated trajectories (NLD-NAO), as well as ones generated by a symbolic agent that won the NetHack challenge at NeurIPS 2021 (NLD-AA). The paper describes the dataset, its analysis in terms of game coverage, as well as experimental results for popular RL methods. These include online RL, as well as offline RL and learning from demonstrations using the dataset NLD-AA. The results show the large gap between current RL methods and the symbolic agent, not to mention the human players. Minor: You refer to Table 4 multiple times; was that supposed to be Table 2? The dataset can be useful for research in offline RL, but also possibly of interest to the broader RL community. The efficient implementation is definitely a plus, given the computational hunger of current RL methods. As the paper indicates, the symbolic method used to gather the data can be characterized as slightly above "Beginner" level. This data is static, in the sense that it is gathered once and is not updated as larger parts of the state space are explored. Thus, at some point, the dataset will become obsolete. Even if that day is not too close, it will come. Therefore, it would be beneficial to have a sort of symbolic agent that can search the state space and update the dataset with additional trajectories. Obviously, no significant portion of the entire state space can be kept offline, but maybe a symbolic agent would be able to generate some portions on the fly.
This paper presents the NetHack Learning Dataset (NLD), which has 3 parts: i. 1.5 million human trajectories recorded from the NAO public NetHack server; ii. 100,000 trajectories from the symbolic bot winner of the NetHack challenge 2021; iii. code for users to apply these trajectories in a compressed format. To demonstrate the utility of NLD, the authors train and compare several algorithms spanning online RL, offline RL, imitation learning, and learning from observations only. Weaknesses: - A niche domain: "NetHack is a very niche domain. It is an ASCII-based game that does not make much sense to untrained eyes, and does not have any meaningful high-dimensional observation like 3D perception or other sensing modalities" - A quantitative metric for data scale and implementation efficiency is missing. - The data is static, in the sense that it is gathered once and is not updated as larger parts of the state space are explored. Some of these points were addressed in the rebuttal, while the challenge of a static dataset is deferred to future work. Broadly, there is agreement among the reviewers and the ACs that this is a useful benchmark for the community. The AC requests that the authors carefully integrate all the feedback into the updated manuscript, as well as add any leftover comments as clarifications in the appendix.