New Definitions and Evaluations for Saliency Methods introduces intrinsic evaluation metrics for saliency methods --- completeness and soundness --- that do not require additional models or human evaluation. These metrics are grounded in logical proof concepts and force the method to output a saliency map that only explains the class of interest. The paper proposes a mask-based saliency method that optimizes for soundness as well as completeness. Evaluations compare the proposed saliency method to other mask-based saliency methods on soundness and completeness, deletion and insertion game metrics, and the saliency metric.

**Strengths**

* *Intrinsic evaluation methods for saliency methods* --- This paper proposes that saliency methods should be evaluated on completeness and soundness. These attributes are grounded in logical proof systems and defined for saliency methods mathematically and textually in the paper. These concepts provide a formal intrinsic framework to evaluate saliency methods without requiring human evaluations. Human evaluators often measure how well the saliency map matches their own representations, which may not always align with the model's representations. Intrinsic evaluations are better suited to evaluating saliency methods on their ability to reflect the underlying model.
* *Defining saliency method requirements* --- The paper reframes saliency by introducing completeness and soundness as two necessary constraints for saliency methods. Previously we only required completeness: saliency justified a model's prediction. By requiring completeness and soundness, saliency justifies a model's prediction but cannot justify any other possible prediction. These requirements improve the specificity of what a saliency method should output and make it more straightforward to interpret the results of a saliency method. They also ensure that the saliency map for each class is distinct, which can improve our ability to compare maps and draw meaningful insight between possible predictions.
* *Clarity* --- The paper is very well written and precise. Section 3 is straightforward to follow, despite presenting complex definitions.

**Weaknesses**

* *Novelty of replacement strategy* --- The key novelty of the saliency method is its pixel replacement strategy, where a new pixel value is sampled from another random image. This strategy is known as hot-deck imputation, where replacement values are sampled from the marginal feature distribution. Existing work has used hot-deck imputation as a masking strategy, and has also shown that hot-deck imputation and mean imputation (i.e., grey pixel replacement) result in similar changes to model outputs (see "What made you do this? Understanding black-box decisions with sufficient input subsets" by Carter et al.). Given the similarities to this work, I suggest discussing it in the related work and including it in the comparison to existing metrics and methods.
* *Missing related work* --- Related work on saliency evaluation methods should include the model and data randomization tests from "Sanity Checks for Saliency Maps" by Adebayo et al. Also, consider the saliency method axioms from "Axiomatic Attribution for Deep Networks" by Sundararajan et al. "Sanity Checks for Saliency Metrics" by Tomsett et al. has a good evaluation of existing saliency evaluations.
* *Lack of reproducibility* --- The checklist indicates the paper does not include the compute details, code, or data. Please include computing details and other details needed for reproducibility.
If possible, also release the code.
* *Limited limitations section* --- The paper does not discuss limitations. Understanding limitations is essential for readers who are looking to use this work. Please include a discussion on important considerations when using your method. Also, please incorporate the ethical considerations in Checklist 1 in the main text.

**Minor Issues**

* Line 19: missing space between em dash and words
* Line 37: "and if so one" --> "and, if so, one"
* Line 213: "Procedures for find masking explanations" --> "Procedures for finding masking explanations"
* Line 278: "from original test set" --> "from the original test set"

Please include a discussion of limitations. Some questions I had were:
* What is the tradeoff between intrinsic and extrinsic evaluations? Do they both have a place in evaluating saliency methods, or are intrinsic evaluations like yours always better?
* Is there a tradeoff between completeness and soundness? Should we optimize for both equally, or is there ever a case where we should prioritize one over another? Can looking at completeness and soundness separately tell us anything different than looking at them together?
<doc-sep>
This paper presents an additional dimension, *soundness*, for evaluating saliency methods for explainable AI. The authors define this concept, then use it to provide both explanations for why existing heuristic methods work, and to suggest new saliency methods.

### Strengths
I find this to be a useful and convincing paper. The paper is well written, but the presentation of the concepts could be made more crisp in parts (see questions, below).

### Weaknesses
Nothing major I could see, but this is somewhat outside my area. The authors have adequately addressed limitations.
<doc-sep>
This paper presents a method for attributing saliency (in the sense of determining which pixels contribute to a classification outcome) to an image. It does so in a novel fashion that explores the tradeoff between the notions of completeness and soundness, pointing out that prior work in this domain does not address the latter. The paper itself is reasonably well written, albeit with some typos (e.g. completeness is spelled wrong in some places). I find that this addresses an original angle of this type of assessment of how the neural network makes its determination, and in a principled way that gives it an advantage over some of its predecessors. I think this is a significant result and generally view the conclusions drawn by this paper as positive. There are no strong societal impacts of this work insofar as I can see, and to the extent that these do exist the authors have made a good case.
<doc-sep>
The paper introduces and formalizes new evaluation metrics to ensure the goodness of saliency methods, based on the logical concepts of completeness and soundness. The first ensures that the network's output is unchanged when using the masked (with the saliency map) input in place of the full image, which is what most current evaluation methods for saliency methods require. The latter requires verifying that the same saliency method cannot be used to produce masked inputs that make the net output a different label, and therefore ensures that the evaluation of saliency maps appropriately tracks the model's probability of assigning labels. The paper's contributions are clear and significant, and explained in a straightforward and accurate manner. Examples are significant and useful.
The originality of the contribution lies in connecting saliency methods to logical proof systems and in formalizing an evaluation approach that overcomes limitations of current methods and helps make them more rigorous and theoretically grounded. A simple saliency method based on optimization is proposed which, thanks to a change in the pixel replacement strategy, satisfies soundness at a small price in completeness. This is shown to work as expected when validated on various datasets and compared to other saliency methods. Furthermore, thanks to their formal framework of definitions, the authors provide an intrinsic justification of why methods used heuristically to improve the appearance of masks (TV regularization and upsampling) actually work, namely that they improve soundness. I think sharing code and data related to the paper would be beneficial for the scientific community.
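For concreteness, the two properties discussed in these reviews can be probed empirically with a rough check like the sketch below. This is illustrative only: the function names, the masking convention (elementwise product), and the soundness check (a loose proxy based on whether the masked evidence favors another class) are assumptions of this note, not the paper's formal definitions.

```python
import torch

def completeness_score(model, image, mask, target_class):
    """Completeness: the masked input alone should still justify the target class."""
    masked = image * mask                                   # keep only the salient region
    probs = torch.softmax(model(masked.unsqueeze(0)), dim=-1)[0]
    return probs[target_class].item()                       # high value -> explanation is complete

def soundness_violated(model, image, mask, target_class):
    """Loose soundness proxy: the same masked evidence should not support
    some other class more strongly than the class it is meant to explain."""
    masked = image * mask
    probs = torch.softmax(model(masked.unsqueeze(0)), dim=-1)[0]
    others = probs.clone()
    others[target_class] = -1.0
    return bool(others.max() > probs[target_class])         # True -> evidence also "explains" another class
```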
The paper introduces and formalizes new evaluation metrics to ensure the goodness of saliency methods. The reviewers' consensus about the paper was positive. They found the paper's contributions clear and significant, and also appreciated its originality. I therefore recommend acceptance.
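As background on the hot-deck imputation strategy raised in the first review above, a minimal generic sketch is given below; the function name, array shapes, and sampling scheme are assumptions for illustration and are not taken from either paper.

```python
import numpy as np

def hotdeck_masked_input(image, mask, reference_images, rng=None):
    """Replace pixels outside the mask with the corresponding pixels of another
    randomly chosen image, i.e. sample replacements from the marginal feature
    distribution instead of using a constant (mean/grey) value."""
    rng = rng or np.random.default_rng()
    donor = reference_images[rng.integers(len(reference_images))]
    keep = mask.astype(bool)
    if keep.ndim < image.ndim:          # e.g. an (H, W) mask for an (H, W, C) image
        keep = keep[..., None]
    return np.where(keep, image, donor)
```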
In this paper, the authors propose an EM-based algorithm, DIEM, for set representation learning. They first establish the equivalence between the OTKE representation learning algorithm and a single-step EM algorithm with extra balanced assignment constraints on the E-step. Then DIEM is developed, and it consistently outperforms or competes with OTKE in different empirical studies, thanks to multiple EM steps and extra regularization. DIEM is applicable in both supervised and unsupervised settings. The paper is well written and easy to understand. However, I do have some comments:
1) It is clear that DIEM achieves better results than the OTKE baseline in terms of offline evaluation metrics, such as accuracy and log-likelihood. As the authors mention, the improvements come from running multiple EM steps. If this is the case, does the runtime increase? In addition, the authors also mention that OTKE-type methods reduce the computational cost compared with attention-based (Set)Transformers. Given these two points, running time should probably be compared across the different baselines.
2) DIEM does not have better results than OTKE on the largest DeepSEA dataset, which may limit the practical performance of DIEM on large-scale NLP/bioinformatics tasks.
Please refer to the comments above.
<doc-sep>
This paper proposes a novel set embedding method inspired by the EM algorithm. Treating the elements of a set as i.i.d. samples from a mixture of Gaussians, the procedure of computing pairwise similarities between the elements and a pre-fixed set of reference vectors corresponds to the computation of responsibilities in the E-step for the mixture of Gaussians, and the embedding step using the similarities corresponds to the parameter update in the M-step. Previous approaches such as OTKE can be directly interpreted with this EM view (plus a balanced assignment constraint). Based on this reinterpretation, the paper proposes a novel set-embedding method extending previous methods in various ways: 1) using multiple steps of EM updates, 2) learning parameters other than the reference vectors (covariances and mixing proportions), and 3) learning the initial values of the parameters by placing prior distributions on them. The resulting algorithm, entitled DIfferentiable EM (DIEM), is demonstrated to excel in various set-to-vec tasks. Overall, I like the paper; it is well written, and the interpretation of the set-embedding procedure as an EM iteration indeed makes sense. It is also good to see the authors derive a novel algorithm from their re-interpretation. The experiments are diverse and thorough, and as far as I can see, they seem to be reproducible with all the details provided in the appendix. I think the paper can be enhanced with some further clarification.
1) In my opinion, it is quite important to compare the number of parameters when comparing different set embedding methods; for instance, in (Lee et al., 2019), they set the number of parameters for DeepSets and Set Transformers roughly the same. How many parameters were used for the proposed method? I hope to see the parameter counts at least in the appendix. It would also be helpful to compare the wall-clock time for the forward passes; especially, for the proposed method, it is worth checking the inference time w.r.t. the number of EM iterations $k$.
2) There are quite a few hyperparameters or options for the proposed model: the number of mixture components $p$, the number of EM iterations $k$, the prior hyperparameter $\tau$, and the way of pooling (PC, SB, or SB2). Judging from the appendix, the performance of the proposed approach is quite sensitive to the choice of these hyperparameters. I'm also quite confused by the three options for pooling; is there any guide for which one to choose? Was any of those three pooling methods dominant in general? It is quite hard to directly compare the effect of individual choices of the hyperparameters because the results so far are not controlled experiments over the hyperparameters. Does the performance generally saturate with the number of mixture components $p$ or the number of EM steps $k$?
3) Have you considered using generative models other than a mixture of Gaussians? I guess the primary reason for the choice is its conjugacy, but probably we can think of other conjugate pairs for the mixture components.
4) Collapsing the hyperparameters $\tau = \eta-1 = \lambda = 1 = \nu + d + 2$ is weird; for instance, $\nu + d + 2$ cannot be equal to one. Can you elaborate on this?
5) How important is the step of initializing the parameters as the mode of the posteriors? What happens with randomly initialized parameters, or when learning them as well with gradient descent? For instance, if the mixture components are not conjugate, so that the MAP parameters are not easily estimated, then we may consider different options.
The paper proposes an interesting idea, and the experimental results are promising. There are some minor concerns to be clarified.
<doc-sep>
This paper shows that optimal transport kernel embedding (OTKE) can be regarded as a single expectation-maximization (EM) step towards the maximum likelihood estimate of a Gaussian mixture model under mild conditions. Motivated by this finding, the paper proposes differentiable EM, which can be regarded as a generalized version of OTKE with priors and several EM steps. Experiments on OMNIGLOT unique character counting, amortized clustering on CIFAR-100, protein fold classification on SCOP 1.75, sentiment classification on SST-2, and chromatin profile detection on DeepSEA demonstrate the effectiveness of differentiable EM for set representation learning.
Strengths:
1. The connection between OTKE and EM is insightful for set representation learning, and differentiable EM is well motivated.
2. Experimental results are impressive and support the claims made in this paper well.
Weaknesses:
1. Time complexity or empirical wall-clock time is needed to give a thorough analysis of differentiable EM. It would be helpful to present the time complexity (or empirical wall-clock time) of differentiable EM, since it takes several EM steps and costs more time compared to OTKE.
This paper presents a novel idea about set representation learning. Experiments cover multiple tasks and support the claims well. Though more analysis of time complexity is needed, I think this paper is above the acceptance threshold.
<doc-sep>
This work proposes a new embedding for sets of features, an important problem since many data modalities can be seen as such (images, sentences, etc.). More precisely, a set is represented by the output means of an EM algorithm fitting the input set with a mixture of Gaussians. The authors draw a new connection to an existing method for set embedding (OTKE). Moreover, their method achieves good experimental results.
Pros:
- This work introduces a principled method for representing sets.
- The OTKE method is derived in a principled manner. An interesting consequence is that the choice of the number of references can be made using the existing literature on mixture fitting.
- Good experimental results on varied datasets (NLP, bioinformatics, vision, synthetic).
- Sensitivity studies for different hyperparameters.
Cons:
- The proposed method may somewhat lack novelty, since the idea of using prototypes has been studied extensively in recent work.
Questions and remarks:
- What is the intuition behind doing multiple EM steps in terms of the embedding? Can this be related to the recent Perceiver [1] architecture? What is your view on this?
- Does DIEM learn the parameters of the prior distribution in the supervised setting? This could be clearer in the paper.
- The paper claims that the method has low computational complexity, but it seems that this claim is not detailed in the paper (apart from remarks on the number of prototypes). Could you elaborate on the complexity of the EM steps?
- It could be great to provide more details on how to set the hyper-parameters for your method.
- Could you further discuss the impact of the prior depending on the task? Could we inject another prior/inductive bias here?
- Features given by protein language models such as ESM [2] can greatly improve results for SCOP 1.75. In fact, this may be the actual state-of-the-art for this dataset (see Table 5 in the OTKE paper). Transfer learning is, however, orthogonal to the method proposed here, but it is worth keeping this in mind.
- In the related work: "The limitation was found...": could you elaborate on this?
-----------------------
[1] Perceiver: General Perception with Iterative Attention (Andrew Jaegle, Felix Gimeno, Andrew Brock, Andrew Zisserman, Oriol Vinyals, and Joao Carreira)
[2] Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences (Alexander Rives, Joshua Meier, Tom Sercu, Siddharth Goyal, Zeming Lin, Demi Guo, Myle Ott, C. Lawrence Zitnick, Jerry Ma, and Rob Fergus)
The paper seems sound and provides new insights for set representation, with convincing experiments. I tend to recommend acceptance, but it would be great if the authors could answer my questions.
This work proposes a new embedding for sets of features. A set is represented by the output means of an EM algorithm fitting the input set with a mixture of Gaussians. The authors draw a new connection to an existing method for set embedding (OTKE). Moreover, their method achieves good experimental results. There is general consensus among the reviewers that the paper is sound, well written, and provides new insights for set representation, with convincing experiments. The authors have responded to most comments raised by the reviewers and have revised the paper accordingly. I recommend acceptance as a poster.
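To make the E-step/M-step reading of set embedding discussed in the reviews above concrete, here is a minimal sketch assuming an isotropic Gaussian mixture with a shared, fixed variance. It is a simplification for illustration, not the OTKE or DIEM algorithm itself (no balanced-assignment constraint, priors, or learned covariances).

```python
import numpy as np

def em_set_embedding(X, references, n_steps=3, sigma=1.0):
    """X: (n, d) set of features; references: (p, d) initial component means.
    Runs EM for an isotropic Gaussian mixture and returns the updated means,
    whose concatenation serves as a fixed-size embedding of the set."""
    mu = references.copy()
    for _ in range(n_steps):
        # E-step: responsibilities = softmax over (scaled) similarities to each mean
        logits = -((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1) / (2 * sigma ** 2)
        logits -= logits.max(axis=1, keepdims=True)
        resp = np.exp(logits)
        resp /= resp.sum(axis=1, keepdims=True)            # shape (n, p)
        # M-step: each mean becomes the responsibility-weighted average of the set
        mu = (resp.T @ X) / resp.sum(axis=0)[:, None]
    return mu.reshape(-1)                                   # shape (p * d,)

# Example: embed a random 10-element set of 4-dim features with 3 components.
rng = np.random.default_rng(0)
print(em_set_embedding(rng.normal(size=(10, 4)), rng.normal(size=(3, 4))).shape)  # (12,)
```

With `n_steps=1` and a balanced-assignment constraint added to the E-step, this is the single-step regime that the reviews identify with OTKE-style pooling.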
This paper observes that traditional adversarial robustness analysis methods are not directly applicable to spiking neural networks (SNNs), and proposes linear relaxations for the membrane potential and spikes of SNNs. These relaxations can be used to provide robust training for SNNs.
Pros: This paper is well written. It gives a linear relaxation scheme for SNNs that takes into account not only temporal updates but also spatial updates.
Cons: I'm not sure if the linear relaxation is too loose; I think an example on MNIST could be used to illustrate the gap. The upper and lower bounding strategies used for spike inputs in Sec 3.2 are probabilistic, not linear. Can the introduction of probability be consistent with linear relaxation? If not, i.e. x^u=x^i=x, how will the model work? Previous work has shown the robustness of SNNs is affected by the coding scheme (Poisson coding and direct coding) [1]. Which coding scheme is used in your paper? There are some typos that need to be fixed, and the citations are not standardized. For example: "leaky integrated-and-fire (LIF) Gerstner et al. (2014)" (Line 56) should be "leaky integrated-and-fire (LIF) (Gerstner et al. 2014)".
[1] HIRE-SNN: Harnessing the Inherent Robustness of Energy-Efficient Deep Spiking Neural Networks by Training With Crafted Input Noise
<doc-sep>
The paper proposes a methodology to tackle adversarial robustness in SNNs.
+ The authors show that their method is able to resist attacks of different types on small-scale datasets.
- The authors' evaluation is pretty limited. In [1, 3], the authors show that BNTT-trained SNN models are inherently more robust. Can the authors comment on how their methodology is different from [1, 3]? Further, the authors started their paper discussion with SNNs being advantageous on hardware, so it makes more sense to develop hardware-aware robustness. But the authors' method is algorithm-based. Can the authors comment on whether their robustness will transfer to hardware as is or whether any modification will be required? In [4], the authors show that adversarial robustness on hardware becomes pretty low, so they come up with a normalization technique to resist attacks. In [2], the authors show that the type of coding technique plays a role in determining adversarial robustness. I am not sure if the authors' methodology can transfer across different coding techniques.
[1] Revisiting batch normalization for training low-latency deep spiking neural networks from scratch. Y Kim, P Panda. Frontiers in Neuroscience, 1638.
[2] Rate Coding Or Direct Coding: Which One Is Better For Accurate, Robust, And Energy-Efficient Spiking Neural Networks? Y Kim, H Park, A Moitra, A Bhattacharjee, Y Venkatesha, P Panda. ICASSP 2022.
[3] Visual explanations from spiking neural networks using interspike intervals. Y Kim, P Panda. Scientific Reports 11, Article number: 19037 (2021).
[4] Bhattacharjee, Abhiroop, et al. "Examining the Robustness of Spiking Neural Networks on Non-ideal Memristive Crossbars." arXiv preprint arXiv:2206.09599 (2022).
Please see the weakness section.
<doc-sep>
In this paper, a robust training method for SNNs is proposed. It is based on the S-IBP and S-CROWN algorithms. The results on 3 different datasets show attack error reduction with some loss in original accuracy.
Strengths:
1. The contributions of this paper are clear and original.
2. The achieved results are significant and relevant to advancing the state-of-the-art.
Weaknesses:
1. There are several typos and semantically incorrect sentences throughout the text.
It is recommended to conduct thorough proofreading. 2. The clarity of some key sections can be improved. See comments below. The limitations and societal impact of this work have not been discussed. However, there is no reason to penalize the submission for this, since this work provides only positive impacts.
This paper applies existing certification-based adversarial robustness techniques to spiking neural networks. The authors achieve this through upper and lower relaxations of the spiking equations. Review scores showed high variance, ranging from 4 to 8. Reviews were generally of high quality. The largest concern was that the use of rate coding for the network's output limited the applicability of the technique. I found the authors' response to this concern satisfying. I appreciate that this paper is the first to apply certification-based techniques to spiking neural networks. I believe it has the potential to produce significant impact for that reason. Based upon the reviews, and my judgement of the potential impact, I recommend the paper be accepted.
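For readers unfamiliar with bound propagation, the sketch below shows plain interval bound propagation through one idealized leaky integrate-and-fire step (linear input current, leak, hard threshold, reset ignored). It is a generic illustration under those assumptions and is not the paper's S-IBP or S-CROWN relaxation; the methods reviewed above replace the crude step-function bounds with linear relaxations of the spike and membrane-potential updates.

```python
import numpy as np

def ibp_lif_step(x_l, x_u, W, b, v_l, v_u, leak=0.9, v_th=1.0):
    """Propagate elementwise bounds through one step of
    v' = leak * v + W x + b, spike = 1[v' >= v_th] (reset ignored)."""
    W_pos, W_neg = np.clip(W, 0, None), np.clip(W, None, 0)
    i_l = W_pos @ x_l + W_neg @ x_u + b      # lower bound on the input current
    i_u = W_pos @ x_u + W_neg @ x_l + b      # upper bound on the input current
    v_l_new = leak * v_l + i_l               # leak >= 0 keeps the bounds ordered
    v_u_new = leak * v_u + i_u
    s_l = (v_l_new >= v_th).astype(float)    # spikes that fire for every input in the box
    s_u = (v_u_new >= v_th).astype(float)    # spikes that may fire for some input in the box
    return v_l_new, v_u_new, s_l, s_u
```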
=== Summary
This paper proposes a framework, HyperDynamics, that takes in observations of how the environment changes when applying rounds of interactions, and then generates parameters to help a learning-based dynamics model quickly adapt to new environments. The framework consists of three modules:
- an encoding module that maps the observation of a few agent-environment interactions into a latent feature vector,
- a hypernetwork that conditions on the latent vector and generates all parameters of a dynamics model dedicated to the observed system, and
- a target dynamics model constructed using the generated parameters that predicts the future state by taking the current system state and the input action as input.
The authors evaluate the framework in a series of object pushing and locomotion tasks. They show that a single HyperDynamics model allows few-shot adaptation to new environments, outperforming several baselines while maintaining performance on par with a set of models trained separately for each environment.
=== Strengths
This paper targets the important question of building a more generalizable dynamics model that can perform online adaptation to environments with different physical properties and scenarios that are not seen during training. The authors have evaluated the method in several object pushing and robot locomotion tasks and shown superior performance over baselines that use recurrent state representations or gradient-based meta-optimization. Many practical treatments used in the pipeline can be good references for the community to learn from, e.g., how to encode object information in 3D, the specific representation of the object orientation, the use of Geometry-Aware Recurrent Networks (GRNNs) to learn 3D feature grids, etc.
=== Weaknesses
Although I like the idea of this paper, I believe the authors should provide more clarification and illustration of the experimental results to solidify the claims in the paper:
(1) What are the objects used in the pushing task? The authors claim that their "dataset consists of only 31 different object meshes with distinct shapes." It is important to include images of the objects to give the readers a better understanding of how diverse the dataset is and how different the geometry of the "seen" and "novel" objects is. This can help the readers better appreciate the generalization ability of the proposed method.
(2) It would be great if the authors could include some qualitative examples, e.g., video, to show the performance of the method. Purely from the numbers in the tables, it is hard for the readers to imagine how well the proposed approach solves the tasks.
(3) It would make the paper more illustrative if the authors could include some analysis and visualization of the learned representations in the middle of the network. For example:
- How are the latent embeddings different for different objects?
- Are there any correlations between the embeddings and the actual physical properties?
- How do the interactions affect the embedding? Will different interaction sequences result in the same embedding?
- What do the learned representations from Geometry-Aware Recurrent Networks (GRNNs) look like? The authors claim that they can "complete missing or occluded shape information." Can the authors provide some concrete evidence supporting this claim in the specific scenarios used in this paper? How do different numbers of interactions affect the quality of the representation?
(4) How does E_vis detect the objects in the scene? Are these detections in 2D or 3D? How accurate is the detection algorithm?
(5) The beginning of Section 3.1 describes that an object's orientation is represented as a quaternion. However, at the end of Section 3.1, the authors suggest that they "discard the orientation information from states fed into the generated dynamics model." This seems to me to make the "state" an incomplete representation of the environment, where the authors only predict the position of the object, which makes me wonder: How does the model encode the geometry of the object? Will the missing orientation information introduce any ambiguities or uncertainties? What if the object is re-oriented? It may be better to include comparisons of different state representations. Also, in Section 3.3, the authors suggest that they update the orientation using quaternion composition, which seems to be inconsistent with what has been described before. Hasn't the model already discarded the orientation information?
=== Other comments
This paper only shows experiments in simulation. I'm curious, are there any gaps before applying the method to the real world, and what are these gaps? For example, how long does the model take to optimize the action trajectories when performing MPC? Can it support real-time feedback control in real physical scenarios, especially when the environment is dynamic? Model-predictive control relies on the environment's feedback to correct the action sequences, which can achieve good control performance while tolerating a larger long-term prediction error. In your experiments, how important is the accuracy of the dynamics model? In other words, even if some baselines have a poorer forward prediction performance, will MPC be able to bridge some of the performance gaps?
In Table 3, why are there multiple red numbers in the Ant-Slope columns? Typo?
Page 4, Section 3.1: "E_int then maps z_int to a 1-dimensional code z_int ∈ R^2." This sentence seems weird. How does E_int map z_int to itself? Why does a "1-dimensional code" lie in a 2D space?
=== Post rebuttal
The authors' response and the revisions to the manuscript have greatly improved the quality and clarity of the paper. Most of my major concerns regarding the implementation and evaluation details have been sufficiently addressed; hence, I have decided to increase the score from 5 (Marginally below acceptance threshold) to 6 (Marginally above acceptance threshold).
<doc-sep>
#### Summary:
This paper proposes an adaptive dynamics model based on the idea of hypernetworks. It is demonstrated that this approach compares favorably to other ways of adapting dynamics models, such as conditioning on a separate feature input and meta-learning by gradient-based model updates. The proposed approach is evaluated on pushing and locomotion tasks.
#### Pros:
- The proposed approach of conditioning dynamics models on rollouts to model system-specific properties using the hypernetworks idea seems novel and is interesting.
- The paper is clearly written, and the provided figures aid understanding.
- Outperforms state-of-the-art adaptive dynamics modeling approaches [Nagabandi et al., 2019], [Sanchez-Gonzalez et al., 2018b].
- Reasonable baselines are used for comparison, such as a fixed model (XYZ), input feature conditioning (Direct), an expert ensemble, and state-of-the-art adaptive dynamics models.
#### Cons:
- The paper does not explain training details for the architecture sufficiently well.
How are the network components trained, especially the visual recognition part for object pushing? What kind of supervision with ground truth is required to train the components, for instance for object detection and shape representation? Are components pretrained, and how? Which losses/data are used for training?
- It's unclear why moving from a canonical to an oriented shape representation in Sec. 3.1 should improve results. Shouldn't this limit generalization and require more training data?
- Giving standard deviations in addition to the average values in Tables 1-3 would complete the numerical results.
- Sec. 1) Why is PlaNet [Hafner et al., 2019] listed as "no adaptation", although it contains a recurrent state representation?
- It appears magical that the approach performs better on novel than on seen objects during training for Cheetah-Slope or Ant-Slope in Table 3. Please discuss.
#### Recommendation:
The paper reads well and proposes an interesting novel approach which could deserve acceptance. The paper should address the points raised in the weaknesses above.
#### Questions for rebuttal:
Please address points raised above in "weaknesses".
#### Typos:
- p2: "They are are"
- p4: "E_int then maps z_int to a 1-dimensional code z_int ∈ R2" - shouldn't this be "E_int maps interactions to 2-dimensional code z_int ∈ R2"?
- p4: "which is typically comprised of an agent and its external environment" -> "which typically comprises / which is typically composed of"
- Table 1: Motion rediction error -> Motion prediction error
- Advice: In Figure 1, the concatenation symbol is slightly misleading, as it could be interpreted as elementwise multiplication. Maybe replace it by $[\cdot,\cdot]$.
#### Post-rebuttal comments:
The authors' comments addressed my concerns on method and experimental details mostly well. I keep my rating "6: Marginally above acceptance threshold".
<doc-sep>
== Update ==
Thank you for your detailed response. The newly added clarifications and sanity checks have greatly improved the quality of the paper, and I am therefore increasing my score from 4 to 6. I believe the model capacity comparison (Table 6) is especially important for demonstrating the value of the new architecture, and would recommend mentioning that result in the main paper.
== Original Review ==
The paper proposes a model for predicting the dynamics of a physical system based on hypernetworks: given some observed interactions and some visual input, the hypernetwork outputs the parameters of a dynamics model, which then predicts the evolution of the system's state over time. Experiments are conducted on an object pushing and a locomotion task.
Strengths:
1. The paper addresses an important question, namely, how a dynamics model may adapt to environments that don't fully match its training distribution.
2. The proposed use of a hypernetwork is plausible and novel to my knowledge.
3. The related work section appears comprehensive, and, to my knowledge, does not miss any major prior work.
Weaknesses:
1. The main claim of the paper is that the HyperDynamics network offers better prediction accuracy and generalization than a standard dynamics model. I feel like the evaluation of this question is confounded by the choice of tasks and baselines. On the pushing benchmark, the XYZ, VF, and DensePhysNet baselines operate on different modalities than HyperDynamics (either no state information or no visual information), and are therefore difficult to compare. For the MB-MAML baseline, this is not specified.
The Expert-Ens model cannot be expected to generalize, since it is designed to overfit on individual objects. As a result, only the 'Direct' baseline clearly operates in the same experimental regime as HyperDynamics. However, nothing is reported on the model architecture or the training method for that baseline, raising the question of whether its model capacity was competitive. My impression is that this experimental design blurs the effects of (a) using side-information to infer system properties, and (b) utilizing such information through a hypernetwork as opposed to a standard dynamics predictor. If the goal is to evaluate the new architecture, these should be disentangled.
2. On the locomotion benchmark, the Recurrent baseline is similarly unclear. Sanchez-Gonzalez et al. is cited, but that paper focuses on comparing recurrent models based on graph networks to those based on MLPs, and it is unclear which model was used.
3. No results are reported for the prediction accuracy on the locomotion task, which would have helped evaluate the performance of the dynamics models more directly than the task scores.
4. Many of these issues could have been avoided by testing on established benchmarks from the literature, for which results are available. If there is a simulator available, generalization ability could still have been tested by varying the physical constants of the dataset.
5. The paper contains a fair number of typos and grammatical errors.
Overall, while the paper presents an interesting idea, the experimental evaluation is not convincing in its current state: baseline architectures are not fully specified, many of them did not receive the same input, and no benchmark task with previously reported results has been used. As a result, I recommend rejection at this time.
Questions:
1. In eq. 1, should omega be a parameter of H(.) instead of F(.)?
2. Section 3.1 introduces the "1-dimensional code $z_{int} \in \mathbb{R}^2$". So is it one- or two-dimensional?
3. Overall, the dimensionality of the latent codes and hidden layers seems incredibly small, e.g., only 1-2 numbers to encode prior interactions, and 8 to encode shape. Is there really no benefit to using higher-capacity models?
<doc-sep>
## Summary
The authors present "HyperDynamics", a novel method for system identification and learning of flexible forward models that can be used in planning tasks. The presented method is generic and is shown to work on both locomotion and pushing tasks with different simulated robots. I enjoyed reading this work a lot and I hope it gets accepted. It's a clever idea and most flaws that I'm about to point out are easily addressable by the authors.
## Strengths & Weaknesses
#### Strengths
1) The method is generic (shown to work across tasks and environments).
2) The baselines are strong. When I started reading your paper, I thought that DensePhysNet and some form of MAML would be good candidates to compare against, and it turns out these were indeed included.
3) Figure 1 + caption as well as the introduction to Section 3 do a great job at introducing the architecture in a way that would allow the reader to create a basic implementation.
4) Code was included. I didn't run it but it's clean and seems functional from what I can tell.
#### Weaknesses
1) You really, really need to be more clear in the main paper on the implementation details. You can't move the amount of training data to the appendix, and it's not good practice to only include the network architecture by name in the main paper.
And what are all your losses? You make the method look super simple but then you train on ShapeNet, some 2D reconstruction, something about cropping, and there's a GRU in there too (full backprop vs truncated backprop?). The appendix shines a _little_ light on this but you need to be way more specific in the main paper. That has to stand on its own.
2) You don't motivate all the nitty-gritty implementation choices. Why did you add the decoder? What's the performance if you remove it? What about the cropping? What if you don't do an object-centric feature map, but instead a few CNN layers? What are the individual contributions of all these details?
3) DensePhysNet, Visual Foresight, and many other works in this domain use (simple) real-robot experiments to demonstrate that their method can handle realistic robot noise. Obviously there's a global pandemic happening at the moment, so I won't require you to add this for the rebuttal, but I think in order to really establish this method (maybe before putting it on arXiv), you'd have to add some real-robot experiments. This can be as simple as a 180USD RealSense and a 500USD robot arm plus a few objects and a playfield. It's become a standard for system-identification-style works and it's justified in my opinion since your method isn't inherently useful in simulation, where the user has access to all the information and can arbitrarily reset/reposition the model. And since you don't have ShapeNet data for many real-world objects (which you seem to need for pretraining), could you at least add a sentence or two detailing how this would transfer to real-world problems?
**TL;DR my main requests:** (2) Motivate implementation details (add ablations if you have any) and (1) be more explicit about them in the main part of the paper.
## Impact & Recommendation
Despite the fact that there seem to be a lot of hacks needed to make this method work in these specific settings, I think the general idea behind it is sound. And I think the authors show that it performs better than the SOTA, at least in simulation. Therefore I'd recommend acceptance given that the authors add the requested information. In its current shape, it's a 6 for me, but if my main concerns are addressed, I'm happy to up this to a 7, or, if major improvements are made and my questions below are answered, to an 8.
## Questions, Nitpicks, Comments
- Kudos for not making another acronym method.
- There are a lot of typos and orthographic errors; I would recommend a spell-checker or getting this proofread. Examples: section 2 "poinclouds", section 2 "properties in hand" -> "properties at hand"
- Maybe start the introduction with an example, e.g. how children are able to chew on a block of wood to assess its hardness and then build towers with it.
- **important** Introduction: when you go over (i-iv), that feels a bit too long and lit-reviewy and misplaced in the introduction. I would recommend the following changes: (a) trim this severely, only mention that there are model-based methods that usually do only one environment and there's meta-learning and how your method is more adaptable than either, (b) move this into the literature section, where you have to come back to it anyway, (c) move the Hypernetworks section from the literature into a separate "Background" section and develop it a bit further, since it's less "competing method" and more "you should know about this to understand our method".
- Also in the introduction, you present (i-iv), and you mention how your method is better/different than (i-iii) but you never address (iv). - It's become a standard to summarize the contributions again at the end of the introduction, ideally as bullet points. Please add these. - In equation (1), why is the ordering O-T-N for the sums? I feel like ONT would be more natural, no? - When reading the method, my main question was if the method would work on "dense" trajectories or on before-and-after photos like DensePhysNet. This is only answered a few pages later but I think this belongs in 3-Overview or 3.1. Just to be clear, you're gathering trajectories of length 4s, i.e. 5 frames of 800ms where you do NOT retract the robot arm when pushing, right? (Compared to DensePhysNet, where the arm is never visible because they take photos before and after complete standstill). If that's the case, how do you deal with occlusion from the arm? - Do you encourage object-object interactions in any way or do they just occur randomly? Or do you only ever experiment with single objects? - The object orientation vs state section isn't super clear? You're subtracting an object's absolute starting position+orientation from it's future trajectory points? - In 3.2: Why a GRU, why not LSTM? Why k=16 (and similarly why k=5)... This ties into the main criticism from above. Please motivate your choices. - In 4.1: I think this is a typo, but it says you added beds to your experiment table. I think they'd be a bit too large, no? :D - 4.1: specify the random mass+friction range, please! - 4.1: same with the total amount of training data/frames - And since you won't have ShapeNet - 4.2: I think it's a half-cheetah, not a cheetah. - 4.2: I don't understand why it's unrealistic to assume arbitrary resetting in simulation. That's one of the benefits of running simulations and common practice. - 5: What do you mean "predicting both the structure and parameters of the target dynamics model"? Parameters is clear (mass, friction, etc.) but what's the structure here?
This paper proposes "HyperDynamics" a framework that takes into account the history of an agents recent interactions with the environment to predict physical parameters such as mass and friction. These parameters are fed into a forward dynamics model, represented as a neural network, that is used for control. Pros: - addresses an important problem (adapting dynamics models to "new" environments) and provides strong baselines - well written and authors have improved clarity even further based on reviewers comments Cons: - I agree with the reviewer that it is currently unclear how well this will transfer to the real world - The idea of predicting physical parameters from a history of environment interactions is not not novel in itself (although the proposed framework is, as far as I know). The authors should include related work along the lines of (1) (this is just one paper that comes to mind, others exist) (1) Preparing for the Unknown: Learning a Universal Policy with Online System Identification
This paper studies the k-NN algorithm when applied to multiclass classification with few samples. The authors develop algorithms by formulating a distributionally robust variant of k-NN where each nearest neighbor is weighted based on the least favorable distribution. The paper is well written, and the proposed algorithm is supported by theoretical results and empirical evaluation. However, it lacks novelty, and the empirical evaluation does not convey the superior performance of the proposed method. For example, the authors discuss the success of metric learning in k-NN but exclude it from the experiments. I understand that in the small per-class setting, the similar and dissimilar sets for learning the distance metric would be very imbalanced, but it would be interesting to see to what extent the proposed algorithm improves upon it (by making tweaks to metric learning, such as hard negative sampling, to overcome the imbalance issue). Overall, while the paper is very well written and enjoyable to read, the lack of novelty and the aforementioned issue with the empirical evaluation prevent me from giving it a high score.
The paper lacks novelty, and the empirical evaluation does not strongly support the superior performance of the proposed method.
<doc-sep>
This paper proposes a distributionally robust version of the k-nearest neighbors (k-NN) classifier that can perform well in a small-sample regime, especially in a multiclass setting. The authors propose to consider a minimax optimization problem for distributionally robust classification, and show that this infinite-dimensional problem can indeed be solved via a finite-dimensional convex problem. Its connection to the Lipschitz regularization framework is also established. They then propose the Dr. k-NN algorithm and show that it can be seamlessly used jointly with learning neural features. The experiments show that the proposed algorithm can beat the existing baselines as well as other neural-network-based approaches.
This is a solid and well-written paper. The mathematical formulation and the technical results are well motivated and very elegant, based on convex optimization theory. The presentation is also very clear, except that the notation is a bit heavy. The experiments are well designed and executed to corroborate the power of the theoretical framework. Weaknesses are hard to find. It is indicated in the Checklist that limitations are mentioned in Section 6, but I cannot find any. Is there any limitation of this framework?
<doc-sep>
The authors consider a generalization of the k-NN method for the multi-label classification problem, which lifts the samples to feature spaces and replaces the distance weights with more general weight functions. The distributionally robust formulation of this well-known generalization is defined and shown to be equivalent to a much simpler problem when the ambiguity sets comprise Wasserstein balls. Thanks to this equivalence, the authors show that the worst-case distributions are characterized by the solution of a convex optimization problem. A solution algorithm is further proposed, and thanks to this, the authors compare the performance of the Wasserstein-DRO-weighted k-NN with benchmark algorithms on well-known classification datasets.
Strengths: The paper is written extremely well. It is very easy (and fun) to follow. The motivation is clear. The proofs are correct and they follow a modern set of techniques. The numerical experiments are very thorough and interesting.
Weaknesses:
- There are some missing discussions about the Wasserstein DRO side of the paper. In particular, there has recently been a strong focus on the structure of the worst-case distributions, finite-sample guarantees, and asymptotic consistency. These are not mentioned in this paper, and except for defining and solving the problem, there is not much focus on the properties that come thanks to the Wasserstein formulation.
- The ambiguity sets (that said, the authors call those uncertainty sets, which I believe should be named ambiguity sets) are restricted to distributions supported on training points. I have never seen this, and this may be a dangerous approach. I would like to see more discussions on this. If I am wrong, then seeing further references would be great. Further details are in the "Limitations" section.
Overall, I am positive about the paper. I would like to clarify the questions I asked above, as well as the weaknesses mentioned. My biggest concern (or question that I would like to clarify) is that the authors constrain the ambiguity sets to include distributions that are supported only on the training instances. In general, in most Wasserstein classification settings, the most useful results are thanks to the fact that we do *not* have such constraints. It can be seen from the literature that the worst-case distributions (typically supported on at most $n+1$ atoms -- please also check if this holds here) are characterized by a weighted mixture of training points, as well as a point that is extremely far away from the training points, though with a negligibly small weight. This is how the Wasserstein methods coincide with regularization techniques. Would it be possible for the authors to compare their method with a brute-force method that solves the Wasserstein DRO problem where the ball's support points are unconstrained? I am also wondering whether the equivalence between (6) and (8) works because of such an assumption. If my concern is not valid, I would appreciate an explanation from the authors.
<doc-sep>
This paper aims to develop a distributionally robust k-NN classifier for the multiclass few-shot scenario, mitigating the weaknesses of existing similar methods. It essentially learns class-dependent metrics to build corresponding optimal weighted k-NN classifiers; the resulting algorithm, Dr. k-NN, is able to hedge against feature uncertainties. The reported comparison results show relatively favorable performance against SOTA methods.
Strengths:
1. For multiclass few-shot metric learning, the authors develop optimal weighted k-NN classifiers by using the proposed Dr. k-NN algorithm to optimize a distributionally robust formulation, including class-dependent weights in classification.
2. Theoretically, the formulation is proven equivalent to a Lipschitz norm regularization problem, and a few properties are analyzed to justify the algorithm.
3. Empirically, the proposed algorithm is confirmed to have competitive performance compared to the SOTAs in the same setting on various real datasets.
Weaknesses:
1. The proposed formulation lacks sufficient clarification of its uniqueness, including its essential differences in principle from existing DRO formulations.
2. The problem under study involves both distributional robustness and metric learning, so the authors should not overlook existing works in these two areas; at a minimum, they should be mentioned and the differences clarified, especially works that appeared in 2021 and 2022.
3. The assumption among classes is not practical.
Though the formulation or definition in this manuscript is somewhat trivial, its highlight lies in the optimization and theoretical property analysis, from which some conclusions and insights can be gained.
The reviewers conclude that this is an interesting paper (especially PjNq) with substantial results that justify its acceptance. I can only recommend including all of the discussion in the camera-ready version.
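For readers less familiar with the object being robustified in this block of reviews, a minimal sketch of a class-weighted k-NN decision rule is shown below. The weights are simply passed in here; in the paper they would come from the least-favorable distribution, and the exact rule shown is an assumption of this illustration.

```python
import numpy as np

def weighted_knn_predict(x, X_train, y_train, class_weights, k=5):
    """x: (d,) query; X_train: (n, d); y_train: (n,) integer labels;
    class_weights[c]: weight contributed by each neighbor of class c."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nn_idx = np.argsort(dists)[:k]
    scores = {}
    for i in nn_idx:
        c = int(y_train[i])
        scores[c] = scores.get(c, 0.0) + class_weights[c]
    return max(scores, key=scores.get)
```

Setting all class weights equal recovers the standard majority-vote k-NN, which is the baseline the distributionally robust weighting is meant to improve in the small-sample regime.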
The paper presents a maximally expressive parameter-sharing scheme for hypergraphs and, in general, for modeling higher-order interactions between elements of a set. This setting is further generalized to multiple sets. The paper shows that the number of free parameters in invariant and equivariant layers corresponds to the number of different partitionings of the index set of the input and output tensors. Experimental results suggest that the proposed layer can outperform existing methods in supervised learning with graphs.
The paper presents a comprehensive generalization of a recently proposed model for interaction across sets, to the setting where some of these sets are identical. This is particularly useful and important due to its applications to graphs and hyper-graphs, as demonstrated in experiments. Overall, I enjoyed reading the paper. My only concern is the experiments:
1) Some of the benchmark datasets for the proposed task as well as some well-known methods (see Battaglia et al.'18 and references therein) are missing.
2) Applying the model of Hartford et al.'18 to problems where the interacting sets are identical is similar to applying a convolution layer to a feature vector that is not equivariant to translation. (In both cases the equivariance group of the data is a strict subgroup of the equivariance of the layer.) Do you agree that for this reason all the experiments on the synthetic dataset are flawed?
<doc-sep>
Given a graph G of n vertices, the activations at each level of a graph neural network (G-NN) for G can be arranged in an n^k tensor T for some k. A fundamental criterion is that this tensor must be equivariant to permutations of the vertices of G, in the sense of each index of T being permuted simultaneously. This paper enumerates the set of all linear maps that satisfy this criterion, i.e., all linear maps which (the authors claim) can serve as the analog of convolution in equivariant G-NNs. The authors find that for invariant neural networks such maps span a space of dimension just b(k), whereas for equivariant neural networks they span a space of dimension b(2k). The proof of this result is simple, but elegant. It hinges on the fact that the set of tensor elements of the same equality type is both closed and transitive under the permutation action. Therefore, the dimensionality of the subspace in question is just the number of different equality types, i.e., partitions of either {1,...,k} or {1,...,2k}, depending on whether we are talking about invariance or equivariance.
My problem with the paper is that the authors' model of G-NNs doesn't actually map to what is used in practice or what is interesting and useful. Let me list my reservations in increasing order of significance.
1. The authors claim that they give a ``full characterization'' of equivariant layers. This is not true. Equivariance means that there is *some* action of the symmetric group S_n on each layer, and wrt these actions the network is equivariant. Collecting all the activations of a given layer together into a single object L, this means that L is transformed according to some representation of S_n. Such a representation can always be reduced into a direct sum of the irreducible representations of S_n. The authors only consider the case when the representation is the k'th power of the permutation representation (technically called the defining representation of S_n). This corresponds to a specific choice of irreducibles and is not the most general case.
In fact, this is not an unnatural choice, and all G-NNs that I know follow this route. Nonetheless, technically, saying that they consider all possible equivariant networks is not correct.
2. The paper does not discuss what happens when the input tensor is symmetric. On the surface this might seem like a strength, since it just means that they can consider the more general case of undirected graphs (although they should really say so). In reality, when considering higher order activations it is very misleading because it leads to a massive overcounting of the dimensionality of the space of convolutions. In the case of k=2, for example, the dimensionality for undirected graphs is probably closer to 5 than 15 (I didn't count).
3. Finally, and critically, in actual G-NNs, the aggregation operation in each layer is *not* linear, in the sense that it involves a product of the activations of the previous layer with the adjacency matrix (messages might be linear but they are only propagated along the edges of the graph). In most cases this is motivated by making some reference to the geometric meaning of convolution, the Weisfeiler-Lehman algorithm or message passing in graphical models. In any case, it is critical that the graph topology be reintroduced into the network at each layer. The algebraic way to see it is that each layer must mix the information from the vertices, edges, hyperedges, etc. The model in this paper could only aggregate edge information at the vertices. Vertex information could not be broadcast to neighboring vertices again. The elementary step of ``collecting vertex information from the neighbors but only the neighbors'' cannot be realized in this model. Therefore, I feel that the model used in this paper is rather uninteresting and irrelevant for practical purposes. If the authors disagree, I would encourage them to explicitly write down how they think the model can replicate one of the standard message passing networks. It is apparent from the 15 operations listed on page 11 that they have nothing to do with the graph topology at all.
Minor gripes:
- I wouldn't call (3) and (4) fixed point equations; that term is usually used in dynamical systems. Here there is an entire subspace fixed by *all* permutations.
- Below (1), they probably mean that ``up to permutation vec(L)=vec(L^T)''.
<doc-sep>
This paper explores maximally expressive linear layers for jointly exchangeable data and in doing so presents a surprisingly expressive model. I have given it a strong accept because the paper takes a very well-studied area (convolutions on graphs) and manages to find a far more expressive model (in terms of number of parameters) than what was previously known, by carefully exploring the implications of the equivariance assumptions implied by graph data. The result is particularly interesting because the same question was asked about exchangeable matrices (instead of *jointly* exchangeable matrices) by Hartford et al. [2018], which led to a model with 4 bases instead of the 15 bases in this model, so the additional assumption of joint exchangeability (i.e. that any permutations applied to rows of a matrix must also be applied to columns - or equivalently, the indices of the rows and columns of a matrix refer to the same items / nodes) gives far more flexibility without losing anything with respect to the Hartford et al. result (because it can be recovered using a bipartite graph construction - described below).
So we have a case where an additional assumption is both useful (in that it allows for the definition of a more flexible model) and benign (because it doesn't prevent the layer from being used on the data explored in Hartford et al.). I only have a couple of concerns: 1 - I would have liked to see more discussion about why the two results differ to give readers intuition about where the extra flexibility comes from. The additional parameters of this paper come from having parameters associated with the diagonal (intuitively: self edges get treated differently to other edges) and having parameters for the transpose of the matrix (intuitively: incoming edges are different to outgoing edges). Neither of these assumptions applies in the exchangeable setting (where the matrix may not be square, so the diagonal and transpose can't be used). Because these differences aren't explained, the synthetic tasks in the experimental section make this approach look artificially good in comparison to Hartford et al. The tasks are explicitly designed to exploit these additional parameters - so framing the synthetic experiments as, "here are some simple functions for which we would need the additional parameters that we define" makes sense; but arguing that Hartford et al. "fail approximating rather simple functions" (page 7) is misleading because the functions are precisely the functions on which you would expect Hartford et al. to fail (because their model is designed for a different setting). 2 - Those coming from the graph convolution literature will be more familiar with GCN [Kipf et al. 2016] / GraphSAGE [Hamilton et al. 2017] / Monti et al. [2017] / etc. Most of these approaches are more restricted versions of this work / Hartford et al., so we wouldn't expect them to perform any differently from the Hartford et al. baseline on the synthetic dataset, but including them would strengthen the authors' argument in favour of the work. I would have also liked to see a comparison to these methods in the classification results. 3 - Appendix A - the 6 parameters for the symmetric case with zero diagonal reduce to the same 4 parameters from Hartford et al. if we constrain the diagonal to be zero in the output as well as the input. This is the case when you map an exchangeable matrix into a jointly exchangeable matrix by representing it as a bipartite graph [0, X; X^T, 0]. So the two results coincide for the exchangeable case. Might be worth pointing this out.
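As a small footnote for readers less familiar with the counting argument above: the quoted basis dimensions are just Bell numbers and are easy to tabulate. A quick sketch (my own code, not taken from the paper; the b(2)*b(2) = 4 count for the separately exchangeable case is my own back-of-the-envelope accounting):

```python
def bell_numbers(m):
    """Bell numbers B(0), ..., B(m) computed via the Bell triangle."""
    row, bells = [1], [1]
    for _ in range(m):
        new_row = [row[-1]]
        for entry in row:
            new_row.append(new_row[-1] + entry)
        bells.append(new_row[0])
        row = new_row
    return bells

b = bell_numbers(6)
# Invariant layers on order-k tensors have b(k) free parameters;
# equivariant (order-k to order-k) layers have b(2k).
for k in (1, 2, 3):
    print(f"k={k}: invariant b({k}) = {b[k]},  equivariant b({2 * k}) = {b[2 * k]}")
# k=2 gives b(4) = 15, i.e. the 15 operations listed on page 11, while the
# separately exchangeable setting of Hartford et al. gives b(2) * b(2) = 4 bases,
# which is exactly the 4 vs. 15 gap discussed above.
```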
The paper provides a comprehensive study and generalisations of previous results on linear permutation invariant and equivariant operators / layers for the case of hypergraph data on multiple node sets. Reviewers indicate that the paper makes a particularly interesting and important contribution, with applications to graphs and hyper-graphs, as demonstrated in experiments. A concern was raised that the paper could be overstating its scope: in particular, the model might not actually give a complete characterization, since the analysis considers the permutation action only. The authors have rephrased the claim. Following the reviewer's comments, the authors have also revised the paper to include a discussion of how the model is capable of approximating message passing networks. Two referees give the paper strong support. One referee considers the paper ok, but not good enough. The authors have made convincing efforts to address the concerns and improve the paper.
### Summary: This submission proposes an ensemble framework to improve the learning of disentangled representations with Variational Autoencoders (VAEs). The approach builds on the assumption that entangled latent representations learned by VAEs show some “uniqueness” in their latent space structure, while disentangled representations exhibit some “similarity”, an assumption corroborated by recent studies. On that basis, a VAE ensemble approach is proposed where several VAEs are connected through linear mappings between the individual latent spaces to encourage alignment of latent representations and thus disentanglement. A formal derivation of the framework is provided and the formal validity of the underlying assumption demonstrated. Furthermore, an empirical evaluation of the proposed approach in comparison to the standard VAE, beta-VAE and FactorVAE on the datasets dSprites (main results, main text) and CelebA (appendix) is performed, yielding improved results on the FactorVAE disentanglement metric (all baseline methods considered) as well as the Distance to Orthogonality (DtO) metric (only standard VAE considered). ### Strengths: - Significance / Novelty: The proposed approach builds on recent work by Rolinek et al. and Duan et al., which show PCA-like behaviour in VAEs and leverage these results to develop disentanglement scores for model selection. This submission uses these insights for training an ensemble of VAEs in order to improve learning of disentangled representations. The claim is validated both formally as well as empirically on a benchmark dataset (dSprites) and state-of-the-art methods like FactorVAE, where the proposed framework performs favourably. To my knowledge the proposed idea is novel and simple yet potentially quite powerful. This approach could be relevant for other disentanglement methods and a wider audience employing VAE approaches. - Technical Quality: An important contribution of this paper is the thorough formal derivation and theoretical justification of the approach, which to me appears sound. The experimental evaluation is well-designed and mostly succeeds in justifying the claims, with some exceptions outlined below. I believe that all the relevant details to reproduce the results are provided. - In particular, the results that the DtO comes close to 0 (fig. 2) for the ensemble approach illustrate that the latent representations of the different VAEs in the ensemble converge (question 1), i.e. the linear transformations between latent spaces converge to (signed) permutations. This means that it should not matter which latent representation in the ensemble is studied (in the paper the first model in the ensemble is chosen; lines 274-275). However, I am curious whether the authors considered the results (“polarisation” and FactorVAE scores) for other latent representations (i.e. not the first model) and how much the results agreed. - Clarity: I consider this paper well-written and well-structured. Relevant details and formal justifications are provided in an appropriate manner, resulting in a self-contained paper. ### Weaknesses: - The ensemble approach comes at a cost, which is probably the reason why only up to 5 parallel models were used. Can the authors comment on the running time and memory requirements compared to the competing methods? I think the quality of the paper could be improved if these details and the restrictions of the ensemble approach were provided. - The results in table 1 (comparison of baseline methods and ensemble approach w.r.t.
FactorVAE metric) show that an ensemble of size >=3 can outperform state-of-the-art methods like FactorVAE on the considered FactorVAE metric. However, they also show that it might not always be beneficial to put more weight onto enforcing aligned latent representations for the same ensemble size (gamma > 1). This is a bit at odds with the premise of the paper. As the discussion points out (question 3, lines 285-289), this could be due to balancing different contributions in the more extensive objective function. However, this could also hint at potential optimisation problems for more challenging tasks. - The examples for the latent traversal (in the appendix) are slightly less convincing and a comparison is only done w.r.t. a standard VAE. However, it would be much more insightful to compare the ensemble approach to beta-VAE and FactorVAE latent traversal results. - Similar to the last point, in figure 2, it would be quite insightful to see the DtO results for the beta-VAE and especially the FactorVAE. In my opinion, this is a crucial aspect which so far is missing and could justify the approach even more. Isn’t the whole motivation that beta-VAE and FactorVAE should perform slightly worse w.r.t DtO? ### Additional Feedback: - Figure 1: I like the illustration, however I do not understand the bar plot (“VAE, BetaVAE, FactorVAE, VAE Ensemble”). Maybe an additional annotation could help? - Line 8: *“sometime”* -> *”sometimes”* - Line 24: *”state-of-the-arts”* -> *”state-of-the-art”* - Line 25: *”[…] deploy Variational Autoencoder […]”* -> *”[…] deploy the Variational Autoencoder […]”* or *”[…] deploy Variational Autoencoders […]”* - Line 37, line 190, line 221 : *”On contrary, […]”* -> *”On the contrary, […]”* - Line 74: *”[…] closely approximate prior […]”* -> *”[…] closely approximate the prior […]”* - Line 127: *”[…] models […]”* -> *”[…] model […]”* - Line 164: *”[…] decomposition L2 term […]”* -> *”[…] decomposition, the L2 term […]”* - Line 224: *”Such gap […]”* -> *”Such a gap […]”* - Line 225: *”[…] such case […]”* -> *”[…] such a case […]”* - Line 233: *”Does VAE ensemble improves […]”* -> *”Does the VAE ensemble improve […]”* ### Recommendation: This submission was an enjoyable read, it provides some new insights and I believe this paper can pose an important contribution in areas which are concerned with learning disentangled representations and VAE models. In my opinion, the claims of the paper are justified both theoretically and empirically. However, there are certain aspects and concerns outlined above which need to be addressed adequately to warrant a publication. At the moment, I am inclined to accept the paper, but I would like the authors to clarify the concerns and questions above. ### Post-Rebuttal: I would like to thank the authors for the insightful rebuttal! The authors were able to address my concerns adequately and I believe that the revision improved the quality of the paper quite a bit. Therefore, I stand with my initial recommendation and due to the reasons stated above, I endorse accepting this paper. ### References: - Rolinek et al., “Variational autoencoders pursue pca directions (by accident)”, CVPR 2019. - Duan et al., “Unsupervised model selection for variational disentangled representation learning”, ICLR 2019.<doc-sep>This paper proposes a simple and effective technique to improve disentanglement by coupling the latent spaces of different VAE models. It builds on Duan et al. (2019)’s proposed method to rank the representations of different models. 
By learning a VAE ensemble with linear transformations between the latent spaces and an additional “cross-model” reconstruction loss, the authors show that they can achieve significantly better disentangling. Strengths: - The theoretical justification seems reasonable and builds on previous work. - The experiments are organized to answer three meaningful questions. The results do suggest the VAE ensemble learns better latent representations which can be converted between models with simple, orthogonal linear transformations. Questions: - Regarding the last term of the loss in equation (2): for a fixed i and j, the loss is E_{q(z_ij|x)} ||z_jj - z_ij|| = E_{q(z_ij|x)} ||z_jj - M_ji z_ii||. This loss term can be optimized by tuning the parameters of VAE i, VAE j, and M_ji. Do you backprop through all of these? Or is there a stopgradient on z_ii when used in computing this loss term (i.e. no gradients through VAE i from this loss term)? - What would be the effect of training the VAE models in two stages: independently first and then jointly in the ensemble? Would it help or hurt disentangling? - How would you express the total information cost of representing an image across the VAEs in the ensemble (say if you wanted to compare the information rate to a single VAE)? It doesn't make sense to add up the KL costs linearly. Suggestions: - It would help enormously to strengthen the findings and assertions regarding the effect of ensemble size and the gamma hyperparameter. - Consider adding another disentanglement metric, e.g. MIG. - Figure 5 in the Appendix shows a larger effect on DtO from the number of dims than from the gamma hyperparameter. This result (and the other results on CelebA) is perhaps worth describing in the main paper. Minor: - In Figure 2(a) I assume the curves are overlapping? Does it help to use a log scale for the y-axis? - How are the latent dimensions sorted in Figure 3? - Are the scores in Table 2 across different training runs?<doc-sep># Summary The authors introduce a novel VAE-based approach for unsupervised learning of disentangled representations of image data. The approach trains an ensemble of VAEs along with pair-wise linear transformations between their latent spaces. The objective includes the ELBO objectives for each VAE as well as two additional pressures: (i) an L2 similarity objective that pressures samples from each VAE's latent space, after a linear transformation, to match samples from the other VAEs' latent spaces, and (ii) a cross-model decoding objective that encourages decoding accuracy of the linearly transformed latent samples. The authors provide a theoretical argument that the linear transformations should learn to be orthogonal, and show some experimental results indicating that their model performs well compared to baselines when evaluated with an established disentangling metric. # Pros * The theoretical analysis in section 4.1 is clear and provides good mathematical intuition for the authors’ results. * The introduction and related work sections are clear and include a thorough set of references. # Cons: * The authors’ baseline results give unexpectedly low metric scores. The authors report FactorVAE metric values of 0.665 for beta-VAE and 0.764 for FactorVAE on the dSprites dataset. However, the values reported in the FactorVAE paper (and corroborated by others) on the same dataset are significantly higher.
This makes me suspicious that something went wrong with the authors’ training --- perhaps they didn’t train those baseline models to completion or something else went wrong. Having baseline results that are inconsistent with the existing literature makes me uneasy. * The traversals in Figure 8 from the authors’ model are much less disentangled than other models in the literature. For example, they are much less disentangled than the traversals shown in the beta-VAE paper and the FactorVAE paper on the same dataset. Thus from these traversals, it seems that the authors’ model is performing worse than existing models in the literature (the authors’ metrics indicate the opposite, but as mentioned above I’m uncertain about the validity of those metric results). Figure 3-A also suggests that the authors’ model is using too many informative latents, i.e. not disentangling well. * I am not convinced by the authors’ intuitive justification in lines 216-225 (and appendix C) that the cross-model objective encourages entangled models to align to disentangled models. Specifically, in that argument the authors seem to assume that orthogonal linear transformations are orthonormal. However, there is nothing to enforce normality of the transformations in the model, hence the cross-model encoding variance from an entangled to a disentangled model could be quite small. * The purpose of the cross-model reconstructions is not clear, particularly given that I’m not convinced by the authors’ intuitive justification of them. The L2 regularization between the transformed encodings should pressure the cross-model reconstructions to be good, so I do not see the reason to include them in the model objective. It would be good if the authors could do an ablation study without the cross-model reconstructions. * The authors do not mention the computational complexity of their model, yet computational complexity seems to be a significant drawback of it. Ensemble training is very computationally expensive, so the authors should include some discussion about it as well as runtimes and memory requirements for their model. Furthermore, with the cross-model reconstructions the computational complexity of the authors’ model scales with the square of the number of ensemble elements, which is quite a steep scaling. * The authors only compare to a couple (relatively old) baselines, betaVAE and FactorVAE, which are no longer state-of-the-art. However, more recently a number of other VAE models have been published that perform better. In order to support their claims about state-of-the-art performance, the authors should compare to newer baselines. Here are a few examples: DIP-VAE (Variational inference of disentangled latent concepts from unlabeled observations. Kumar et al., 2017) TCVAE (Isolating sources of disentanglement in variational autoencoders. Chen et al., 2018) Spatial Broadcast VAE (Spatial Broadcast Decoder: A Simple Architecture for Learning Disentangled Representations in VAEs. Watters et al., 2019) * The authors also don’t include many metrics or datasets. dSprites and CelebA were used in the original betaVAE paper, but more recently it has become the norm to test on a larger set of datasets and with a number of different metrics to convincingly show disentangling. By the way, a number of models, datasets, and metrics have been open-sourced in DistLib (https://github.com/google-research/disentanglement_lib), which may be useful for comparing to more models with more metrics on more datasets. 
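To make the ablation I am requesting above concrete, this is how I read the two coupling terms, written as a PyTorch-style sketch (all names are mine and I may be misreading equation (2), e.g. with respect to stop-gradients); the ablation would keep `align` but drop `cross_recon`:

```python
import itertools
import torch.nn.functional as F

def coupling_losses(x, encoders, decoders, maps):
    """encoders[i](x) -> z_i; maps[(i, j)] is the linear map M_ji from latent space i to j."""
    z = [enc(x) for enc in encoders]
    align, cross_recon = 0.0, 0.0
    for i, j in itertools.permutations(range(len(encoders)), 2):
        z_ij = maps[(i, j)](z[i])                      # encode with model i, map into space j
        align = align + F.mse_loss(z_ij, z[j])         # L2 latent-alignment term
        cross_recon = cross_recon + F.mse_loss(decoders[j](z_ij), x)  # cross-model reconstruction
    return align, cross_recon

# My reading of the full objective: sum of per-VAE ELBOs + gamma * align + cross_recon.
# Requested ablation: train with the ELBOs and gamma * align only, and report the FactorVAE
# score and DtO, to see what the cross-model decoding term adds beyond the L2 alignment.
```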
# Summary I do not recommend accepting this paper. Baseline results are inconsistent with prior work, the model seems to disentangle less well than existing methods, and the authors don’t do ablation experiments to justify the high computational complexity of the model.
This paper proposes to use an ensemble of VAEs to learn better disentangled representations by aligning their representations through additional losses. This training method is based on recent work by Rolinek et al. (2019) and Duan et al. (2020), which suggests that VAEs tend to approximate PCA-like behaviour when they are trained to disentangle. The method is well justified from the theoretical perspective, and the quantitative results are good. That said, the reviewers raised concerns about the qualitative nature of the learnt representations, which do not look as disentangled as the quantitative measures might suggest. There was a large range of scores given to this paper by the reviewers, which has generated a long discussion. I have also personally looked at the paper. Unfortunately I have to agree that the latent traversal plots do not look as disentangled as the metric scores would suggest, and as one might hope to see on such toy datasets as dSprites. The traversals are certainly subpar compared to even the most basic approaches to disentanglement, like beta-VAE. For this reason, and given the reviewer scores, I unfortunately have to recommend rejecting the paper this time around. However, I hope that the authors are able to address the reviewers' concerns and find the source of disagreement between their qualitative and quantitative results in future revisions of this work.
The paper tackles the problem of restricted class unavailability after a deep learning model has already been trained on such restricted classes; the aim is to remove any information pertaining to the restricted classes from the model parameters so that the model will not be able to correctly classify the restricted classes in the future. The approach presented includes identifying the model parameters that are most relevant to the restricted classes and removing the restricted class information from these parameters (via gradient ascent) while ensuring that these parameters can still be used for accurately classifying the other, non-restricted classes. To correctly assess the utility of the proposed approach, several baseline methods have been proposed. Empirical results on the CIFAR-100 and ImageNet-1K datasets illustrate how the proposed approach can be used. Positives: 1. The paper studies the important problem of handling restricted classes. 2. The presented approach displays an ability to remove restricted class information from model parameters. Negatives: 1. The paper is not very clearly written, with concepts repeated several times and no clear description of some others, as mentioned below. 2. While the problem is indeed interesting, the motivation for the proposed solution is not clearly presented. Instead of repeating the ideas, it would be helpful to have a few clear examples that illustrate the need to solve this problem, as well as a clear description of the behavior of the said approach. While an example about the company logo is stated, it would be helpful to have a few more clear examples from real-world settings to help the reader. One such example: a model trained to predict which treatment would be beneficial for a patient would need to be altered if the treatment cannot be offered in the future due to ethical or resource constraints. 3. While empirical results on the CIFAR-100 and ImageNet-1K datasets seem promising, it would be helpful to study this on a real-world dataset. Issues such as generalizability under future distribution shifts and fairness considerations when certain labels are dropped are potential directions. Additional comments: 1. The notation for the excluded and non-excluded classes is a bit confusing, as $C_e, C_r$ can both mean excluded or restricted. I would suggest changing this. The paper studies an important problem. However, there are some challenges with respect to the writing and the motivation of the solution, and several potentially important directions that could be addressed. <doc-sep>This paper proposes a new learning setting, fine-tuning a pretrained model to forget some specific categories, motivated by class-level privacy. The solution to this challenge first detects the most related model parameters, i.e., those that significantly affect model performance on the restricted classes, and then tunes the model on a small number of examples with losses that preserve the desired classification capability.
The proposed method is experimentally demonstrated to be more effective than possible baselines. I mainly have the following concerns. 1. The motivation for the new setting is not strong. In the introduction, class-level privacy is motivated by privacy concerns and corresponding examples. However, they are not quite convincing to me, and I feel more practical instances are needed to clarify the significance of studying class-level privacy. In particular, in what situation would there be only a few training examples available when removing restricted-class information from a model for privacy reasons? 2. In related work, individual data deletion (Ginart et al. 2019) is cited but not properly evaluated. Following the data deletion work, I feel there is also an important problem that is ignored: making a model forget some examples or some classes does not mean zero classification accuracy or random classification accuracy (i.e., 1/N) on them. In the data deletion work, Ginart et al. tune the pretrained model by only compensating for the impact of the deleted samples instead of forcing the model to have a large error on them. As a result, the tuned model behaves as if it had never seen the deleted examples. This work obviously cannot guarantee this, given the loss shown in Eq. 1. For example, consider a 3-way classifier on dog, cat, and leopard, where leopard is the restricted class. A classifier trained on dog and cat only would be expected to classify leopard as cat because of their natural similarity. Thus, a careful clarification of this point is required in this paper, especially from the view of class-level privacy. 3. The process of identifying parameters related to the restricted classes seems quite empirical, as a transformation component based on some prior knowledge is needed. The authors have mentioned it for images. However, much privacy-related data is also tabular. In this case, how would one apply a proper transformation? If this component is tied to the data format, is there any workaround for this issue? 4. From Figure 3, KD is defined for the remaining classes only, but the KD loss also includes the restricted classes. 5. It would be interesting to see a comparison with the original training in terms of the remaining classes only (also related to concern 2: the performance of a model trained on the original training data of the remaining classes only may be a good reference point for evaluation), although the original training data might be inaccessible in the proposed setting. The paper has a weak motivation for the new setting. The proposed method seems too heuristic and the evaluation for the new setting is not appropriate. <doc-sep>In this paper, the authors present a new method to remove information about specific classes from a trained model without reducing the performance on the remaining classes. After the information is removed, the model should not be able to identify the class anymore. Instead of retraining the complete model from scratch without the restricted classes, the presented method only needs a few examples of the restricted classes and the remaining classes. In terms of speed, the presented method is ~200 times faster on ImageNet than training a new model without the restricted classes. Furthermore, they present a method for identifying model parameters that are mainly relevant to the restricted classes. The evaluation of the model is performed on the CIFAR-100, ImageNet-1k, and CUB-200 datasets. For a detailed comparison, eight baseline methods were designed and evaluated.
An ablation study is performed on the class-relevant parameters and the number of classes that are excluded. The presented method achieves an accuracy on the remaining classes that is close to that of the original model. Also, the forgetting prototype accuracy is close to that of the model trained only on the remaining classes. Introduction: The paper is well written, and the evaluation is very detailed. It is an interesting idea to remove class information from the model with a limited amount of data. However, from the description in the paper, it is not clear why this is a real-world problem. It would be easier to rate the importance of this application if the authors provided sources for such cases or a more detailed description of a specific scenario. Method: The re-training procedure of the model with only a limited amount of training data is described in detail. The description of the identification of the relevant parameters for the restricted classes is missing some details. For example, it is not defined what other transformation besides the grayscale transformation is used. If other transformations are optional, it would be good to know what type of transformations are used in the experiments. Furthermore, it is not clear how the parameters with the highest gradients are selected. Is a fixed threshold used? What is the minimum number of parameters of each layer that are selected? Is this a fixed number for each class? Does it depend on the number of excluded classes? Evaluation: The evaluation is very detailed, with eight baseline methods to show the performance of the presented method. However, the results of the FDR model are not shown in Table 1, only mentioned in the text. Is there a reason for this? Adding the results of the FDR to Table 1 would be beneficial. It would be very interesting to see how the parameter selection influences the accuracy of the model. Unfortunately this is not part of the evaluation. The presented method of re-training a model to forget a specific class is very interesting. However, the part on identifying the most relevant model parameters is missing some essential details, for example, how the parameters are selected (manually or automatically) and the number of selected parameters. This information is essential to understand the method. Moreover, the influence of the parameter selection method is not studied in the evaluation part. <doc-sep>This paper proposes a novel and practical problem called RCRMR-LD, aiming to remove restricted categories from model representations with limited data. They first give some direct solutions and analyze their weaknesses. Then, they propose their own solution to discard the restricted class information from the restricted-class-relevant parameters. Experiments verify that this approach not only performs similarly to FDR but is also faster. Pros: 1. The problem RCRMR-LD seems interesting and practical; it addresses the specific class-level restriction by removing the corresponding model representations. This setting also saves time and computational resources for large-scale datasets. 2. Experiments in this paper are solid and convincing enough. They design 5 basic baselines and perform a corresponding ablation study. Considering that RCRMR-LD is a new problem, comparing to related works, if any exist, would be better. ################################################ Cons: 1. From my point of view, the transformation $f$ plays a key role in identifying the parameters that are highly relevant to the restricted classes.
However, they seem to only try the grayscale transformation and do not discuss $f$ further. If the model is trained on grayscale images in the first place, will this method fail? For natural language tasks, what transformation are you going to use? I suggest that the authors provide more discussion and comparison of various transformations. 2. From Table 1, I find that all the FPA$_e$ values of ERwP are relatively high, indicating that the feature representations of the model still contain much restricted-category information. Although they indeed remove the restricted categories at the class level, attackers can still use model inversion techniques (e.g., [1]) to restore the restricted-class data from the few examples they own, leading to privacy leakage. 3. Identifying the parameters that are relevant to the restricted classes through ERwP is still heuristic. I admit that ERwP seems to make sense, but some verification of this claim needs to be included. 4. Except for "Related work", I do not find any references in this paper. At least in the "Introduction", you should cite some related works to support your claims. 5. Colloquial expressions and grammar issues are common, so the writing needs further improvement. ######################################################### Typo: 1. "Baseline 4 - Training of Original model on Limited Non-Restricted Class data with (TOLNRC):" -- missing words? 2. "$N_e$ and $N_r$ refer to the number of excluded classes, respectively." -- missing words? and so on... ############################################################ Questions during rebuttal period: Please address and clarify the cons above. ############################################################## References: [1] Zhang Y, Jia R, Pei H, et al. The secret revealer: Generative model-inversion attacks against deep neural networks. CVPR, 2020. The setting proposed by this paper is novel and practical. However, there exist some technical flaws that need to be further addressed. Please see the "Main Review" for details.
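As a postscript, to make points 1 and 3 concrete: my rough guess at the overall shape of the identification step is something like the sketch below. Every name, the per-tensor selection fraction, and the use of a single grayscale transform are my own placeholders, and it is exactly these unspecified pieces (the choice of $f$, the scoring rule, the threshold) that I would like the authors to clarify and verify.

```python
import torch
import torchvision.transforms.functional as TF

def restricted_relevant_params(model, restricted_loader, loss_fn, top_frac=0.01):
    """Score each parameter by its accumulated gradient magnitude on transformed
    restricted-class examples, then keep the highest-scoring fraction per tensor."""
    scores = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x, y in restricted_loader:
        x_t = TF.rgb_to_grayscale(x, num_output_channels=3)  # the grayscale transform f
        model.zero_grad()
        loss_fn(model(x_t), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                scores[n] += p.grad.abs()
    masks = {}
    for n, s in scores.items():
        k = max(1, int(top_frac * s.numel()))
        thresh = s.flatten().topk(k).values.min()
        masks[n] = s >= thresh   # which entries count as "restricted-class relevant"
    return masks
```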
The paper proposes a technique to efficiently retrain a model when a small number of classes are required to be removed. Reviewers in general like the paper, but the key issue is the motivation for the problem. The motivating examples in the rebuttal are not very good because (a) the authors do not provide any evidence that such situations are critical or commonplace, and (b) the data points that are available for retraining might be very biased. A more careful grounding of the work would be important to motivate the ICLR community and the ML community in general to further study this problem. But for now, unfortunately the paper does not seem ready for publication at ICLR.
This paper demonstrates a rank diminishing behavior of deep neural networks, considering the mapping from the input space to the feature space of an increasingly deeper intermediate layer. Theoretically, it proves that the rank does not increase as the layer depth increases. Experimentally, it demonstrates a general decreasing trend of rank on various NN architectures. This work also empirically demonstrates that the number of major PCA components at the final feature layer is much less than its ambient dimension, which leads to spurious correlations between very different categories. Strengths: 1. This work systematically studies the evolution of function rank throughout the layer computation and provides theoretical justification for the empirically observed rank diminishing behavior. 2. The finding about the independence deficit of final feature manifolds is very interesting and provides insight into the lack of robustness of DNNs. Weaknesses: 1. The classification dimension estimated by the number of major PCA components in this work is not a good indicator of the feature dimension. In fact, a very low dimensional manifold can have high classification dimension. Therefore, the main results about rank diminishing cannot explain the interesting finding about the low classification dimension of final feature manifolds. The statement in the abstract that the "independence deficit [is] caused by the rank deficiency of deep networks" is misleading. 2. It seems that the definition of the rank of a function and Lemma 1 implicitly assume that the Jacobian of neural network functions has a constant rank over the entire input space of R^n. This is a strong assumption that does not hold in general. When this assumption holds for neural networks should be carefully discussed. The authors adequately addressed the limitations. <doc-sep>This work aims to study the rank of hidden layer representations of neural networks in relation to how deep the layer is in the network. In particular, they note that the rank of the hidden layers diminishes monotonically as we observe deeper layers. Numerical measures of rank are proposed and motivated. The primary theoretical concerns are the rank of the Jacobian from the input to the i-th layer of the network (essentially a linear approximation of the network mapping to that hidden layer) and the dimension of the feature space for a hidden layer. The paper further investigates the tolerance of the final hidden layer to dimensionality reduction by applying PCA to the feature space and projecting onto a decreasing number of eigenvectors. The number of eigenvectors remaining when a significant drop in performance is observed from the dimensionality reduction provides an approximation of the intrinsic dimensionality of the hidden layer. Finally, the authors explore the idea that it is possible to use the logits of different categories to classify another category in a dataset. One example is that by merely using -0.923 as a weight on the logit for the "triumphal arch" category it is possible to predict the "junco" category without loss of accuracy. # Strengths ## Originality The paper is fairly original, with the primary novelty being the rank metrics used and their justification. Additionally, the paper touches on some possible connections between symmetry and rank which to my knowledge have not been explored; however, these connections are mainly pointed out but not discussed or treated theoretically.
## Quality The need for the numerical tools to measure rank is well motivated, and the numerical tools themselves make sense and are justified. The claims that are made appear correct and in line with the evidence presented. ## Clarity There is some variance in the clarity of the paper across sections. The writing is clear and understandable and the mathematical notation is consistent and intuitive, which helps the clarity of the earlier sections greatly. Sections 3.2 and 3.3 are examples where the notation made potentially tricky sections more manageable. Figure 4 stands out as a very helpful figure. The effort on that is definitely worth it. ## Significance The paper touches on some significant points, like the point linking symmetries to lower ranks. The PCA experiment and the experiment on using categories as predictors for others may be of general interest to the ML community. # Weaknesses ## Originality A primary concern of this work is the fact that $\mathrm{Rank}(AB) \leq \min\{\mathrm{Rank}(A), \mathrm{Rank}(B)\}$. This is even mentioned in the paper below equation 8 and is one of the primary tools for the work. This, however, is a well-established principle and quite intuitive. Thus, the finding that the rank of the network decreases with layer depth is not surprising. Two potentially interesting points, noise increasing the network rank and structure preventing the rank from staying the same across layers, are mentioned but do not form part of the analysis. The noise aspect is ignored in the theory and removed through the noise-tolerant rank measures. The point on strict decrease versus equality of rank due to structure is only discussed briefly. ## Quality The various sections of the paper feel quite loosely connected. Up to Section 4 the work considers whether the rank of the network decreases monotonically. Section 5 considers PCA just on the final feature space, which is then used in Section 6 to point out that low dimensional feature spaces do not hold semantically meaningful features for each category. These sections are all related to rank; however, the connections do not seem to go deeper than that. Finally, there are some points where unjustified claims are made (or the phrasing makes these claims appear unjustified). Two examples are "Theorem 5 that investigates the behaviour of all singular values of deep neural networks", when Theorem 5 requires hidden layers of the same size and assumes the Jacobians have Gaussian elements (which appears to be unrealistic in its own right), and "The principle of rank diminishing describes the behavior of general neural networks with almost everywhere smooth components", where it is not clear that ReLU networks would even fit this requirement. ## Clarity Theorem 4 and Theorem 5, which are the most technical aspects of this paper, are not given enough space. The clarity of the paper could benefit greatly from a more in-depth treatment of this section. In addition, how the theory of these sections relates to Figure 1 could also be explained more. For example, I acknowledge that the shape of the bottom row of Figure 1 is non-linear, but to call it exponential (which Theorems 4 and 5 predict) might also be a stretch. Understanding Theorems 4 and 5 would help with interpreting Figure 1. Figure 1 could also use different colours, especially for the bottom row where distinguishing between Jacobian and Feature rank/dimension is not easy.
Finally, the notation of Section 5 is not easy to follow, particularly the meaning of the $i_j$ double subscript, where it is not immediately clear what $i$ and $j$ each refer to. Figure 4 does help clarify this a lot, and with space constraints fully explaining the new notation may not be feasible. ## Significance This work appears generally significant; however, its significance is hindered by the same issues noted under the originality section. I feel that this work might spend too much time on the potentially quite obvious points of rank diminishing and on introducing the PartialRank, and not enough time on the potentially significant points such as Theorems 4 and 5. My primary recommendation would be to rephrase the work more in line with those theorems. I suggest that the authors be clearer on the conditions required for their theory to hold. For example, the statement "The principle of rank diminishing describes the behavior of general neural networks with almost everywhere smooth components" is unclear, since this does not seem to include ReLU networks but is described as general. <doc-sep>The paper studies the dynamics of the rank evolution of the feature maps of a neural network as a function of its depth. By leveraging the abstract definition of the rank of a function as the rank of the corresponding Jacobian matrix, the authors can study the rank dynamics in full generality (i.e. without assuming any specific architecture). This results in Theorem 1 (Principle of Rank Diminishing), which finds that the rank of a neural network should never increase with depth due to its compositional nature (a neural network can be seen as a composition of $L$ functions, where $L$ is the depth). Then, the authors analyze conditions under which the rank strictly diminishes (Theorem 3) and convergence of the rank to specific constants (Theorems 4-5). Finally, the authors apply their low-rank findings to the study of the dependence and correlations between different output classes. They find that the output of some classes of ImageNet (e.g. hamster) can be predicted with a linear combination of the output for irrelevant classes (e.g. broccoli and mouse trap). The authors attribute this problem to the low-rank representations of very deep networks, as shown by their theory. **Strengths** 1. **Generality and Importance of the Results**: the theoretical results are very general and remarkable, abstracting away from the specific architecture. The only assumption is the compositional nature of the layers, which includes most architectures but excludes residual networks (as the authors mention in the supplementary material). 2. **Paper Organization**: the paper is very clear in explaining the abstract concepts of the first part. Until Theorem 2 (page 4), the theory is easy to digest. At first read, Theorem 1 seems trivial if one thinks about linear networks (i.e. a simple product of matrices) and the famous property $\text{rank}(AB) \leq \min(\text{rank}(A), \text{rank}(B))$, but the authors do a great job generalizing it to any composition of functions through ideas from topology. The other two theorems delve deep into the rank diminishing properties of function compositions, showing an exponential decay of the rank with depth. 3. **Independence Deficit of Feature Manifolds**: Section 5 provides a nice application of the theory, and would probably spur follow-up work trying to understand how one can reduce this undesirable effect of strong dependencies between semantically different classes.
**Weaknesses** 1. **Inconsistency with Residual Networks**: Skip connections are proposed as a tool to (partially) prevent the rank deficiency problem, and the authors give a brief theoretical argument in the supplementary material. However, this seems to be in contradiction with Figure 1, where an exponential decay of the rank is observed for ResNets, MLP-Mixers and Transformers, all architectures that adopt skip connections. This could be due to the fact that during training the magnitude of $\text{Res}(x^i)$ becomes large, hence lowering the rank. At initialization, the magnitude of $\text{Res}(x^i)$ can be controlled, e.g. with an appropriate factor inversely proportional to the depth (see for instance [1] for this scaling and [2] for its consequences on the rank). In any case, I found it confusing that skip connections are adopted in almost all the architectures used to exemplify the theory (skip connections that, according to the authors, should have the opposite effect). 2. (minor) **Presentation Style of Structural and Implicit Impetus**: After brilliantly explaining the principle of rank diminishing, in my view the concepts of "Structural Impetus" (due to the specific architectural modules) and "Implicit Impetus" (due to the very composition of many modules) of rank diminishing could be better explained. In particular, I would invest some extra lines to better explain why normalization layers prevent rank diminishing, and maybe better introduce some concepts (for instance, "moving along directions" in Theorem 3 is not properly introduced, and in general the current version of the theorem fails to convey a simple and intuitive explanation). [1] Hanin, Boris, and David Rolnick. "How to start training: The effect of initialization and architecture." Advances in Neural Information Processing Systems 31 (2018). [2] Noci, Lorenzo, et al. "Signal Propagation in Transformers: Theoretical Perspectives and the Role of Rank Collapse." arXiv preprint arXiv:2206.03126 (2022). I do not see a negative societal impact of this theoretical work. <doc-sep>This work presents some theoretical results that imply that the rank of the Jacobian between the inputs and features of deep networks is non-increasing with depth. They predict that in some settings it should in fact decrease exponentially with depth to some fixed value. They also develop efficient methods to estimate the Jacobian rank of real networks and show empirically that it indeed decreases with depth across a number of different architectures. The effects of depth on the learned representations in deep networks and their geometric structure are an important area of study. While this work contains an interesting combination of theoretical and empirical results, I believe the connection between the two would have to be made more concrete. The result about non-increasing rank follows from the basic compositional structure of the network, as the authors suggest, yet it is unclear that the rank must strictly decrease. In fact, there is a vast literature on signal propagation in deep networks that approaches this question from a different angle (by studying the covariance between hidden features as a function of depth, in which case convergence to certain fixed points should essentially be equivalent to the rank of the representation collapsing [1, 2]). This literature also highlights ways to avoid this phenomenon with a careful choice of initialization, and relies on modeling the dynamics of the correlations as a function of initialization hyperparameters.
This allows one for example to train convnets of depth 10000 [2]. In the simplest case of a network with orthogonal weights and no non-linearities, it is clear for example that there is no decrease in rank, so there are clearly ways that it can be avoided. Another related issue is that the results are vague in the sense that the behavior of the rank is not connected in a quantitative way with the structure of the network (i.e. the choice of nonlinearity, initialization, etc). I think the submission would be much more compelling if the results could take these into account and make predictions about their effects on the rank. For example, how is the rank one converges to or the speed of the rank decay related to properties of the network? An additional, related concern is the connection between the experiments and the theory. The experiments that attempt to show exponential decay of the rank are not plotted on a logarithmic scale, which makes it hard to understand whether the decay there is indeed exponential or follows some other law. In addition, it appears that the rank decay in the case of resnets may be influenced more by the pooling layers or changes in width than any other operation, yet no mention of this is made in the text. [1] Poole, Ben, et al. "Exponential expressivity in deep neural networks through transient chaos." Advances in neural information processing systems 29 (2016). [2] Xiao, Lechao, et al. "Dynamical isometry and a mean field theory of cnns: How to train 10,000-layer vanilla convolutional neural networks." International Conference on Machine Learning. PMLR, 2018. Limitations have been addressed
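To illustrate the orthogonal-weights remark concretely, here is the kind of toy check I have in mind (my own sketch, using a plain singular-value cutoff rather than the paper's noise-tolerant rank measures):

```python
import torch
import torch.nn as nn

def numerical_rank(J, eps=1e-5):
    s = torch.linalg.svdvals(J)
    return int((s > eps * s[0]).sum())

def jacobian_ranks(layers, x):
    """Numerical rank of the input-to-layer Jacobian for every prefix of the network."""
    ranks = []
    for depth in range(1, len(layers) + 1):
        prefix = nn.Sequential(*layers[:depth])
        J = torch.autograd.functional.jacobian(prefix, x)
        ranks.append(numerical_rank(J))
    return ranks

n, depth = 32, 10
x = torch.randn(n)

# (a) Square orthogonal linear layers: the Jacobian stays full rank at every depth.
ortho = []
for _ in range(depth):
    lin = nn.Linear(n, n, bias=False)
    nn.init.orthogonal_(lin.weight)
    ortho.append(lin)
print(jacobian_ranks(ortho, x))

# (b) Same widths with ReLU at random initialization: the pointwise Jacobian rank
# typically drops with depth, since each layer multiplies in a 0/1 activation mask.
relu = []
for _ in range(depth):
    relu += [nn.Linear(n, n), nn.ReLU()]
print(jacobian_ranks(relu, x))
```

Plotting such ranks (or the paper's measures) on a log scale against depth for the trained architectures would also make it much easier to judge whether the decay is genuinely exponential.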
This paper studied the "rank" of neural networks and showed that deeper networks in general will have lower rank. The paper presents a detailed empirical study of network rank, as well as some theoretical insights into why rank is likely to decrease as the network becomes deeper and how the rank decrease can change with or without normalization layers. The paper also demonstrated an "independence deficit" phenomenon, which happens when the rank of the output layer is too low. Overall the reviewers feel that the paper gives interesting observations and nice intuitive explanations.
This paper is based on the "sign agnostic learning" (SAL) method for capturing signed distance functions with neural networks. It extends this method by incorporating derivative information, which interestingly can likewise be handled in a sign agnostic manner. (Maybe I missed this somewhere, but if the derivatives are sign agnostic, couldn't it happen that the inside is positive? Did the authors encounter that in some cases?) The paper presents and motivates this extension together with an additional theoretical insight about the minimal surface property of SAL and SALD. In line with SAL, the paper presents a nice variety of results for shapes from different shape databases. The quantitative results are also convincing. It's interesting to see the substantial difference between the VAE and AD architectures. For the comparison with SAL it's good to see the direct improvements from the derivative loss with a VAE. The paper leans heavily on SAL, and the change in terms of the overall method seems to be fairly small. Nonetheless, I think it's an interesting insight that the sign agnostic derivatives can be included in this way, and I found it interesting to see how much they improve the results. Given that learning signed distance functions is a very active topic, and a very useful building block for a variety of adjacent works that use learned SDFs, the proposed SALD approach seems like a very nice advancement of the state of the art. So, overall, I really liked the paper. Figure 2 alone is impressive, and makes a good case for the method. Together with the nice presentation and set of results I think this paper makes for a very good addition to ICLR. <doc-sep> This paper presents SALD, a new type of implicit shape representation that, in addition to predicting the signed distance function, aligns the gradients of the neural distance field with those of the distance function. The resulting algorithm, for example, has improved approximation power and better preserves sharp features than its ancestor SAL (sign agnostic learning). The formulation is such that the architecture can consume raw point clouds. STRENGTHS This paper certainly speaks to me. First of all, learning implicit representations directly from raw point clouds can allow for interesting applications such as better generative models or efficient 3D reconstruction networks. The approach is very sensible. In fact, aligning gradients of the implicit surface with those of the data is not a new idea and has been done for instance in quadric fitting: * Birdal, T., Busam, B., Navab, N., Ilic, S., & Sturm, P. (2019). Generic primitive detection in point clouds using novel minimal quadric fits. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(6), 1333-1347. * Tasdizen, T., Tarel, J. P., & Cooper, D. B. (1999, June). Algebraic curves that work better. In Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149) (Vol. 2, pp. 35-41). IEEE. [The paper might benefit from including those, especially because it has related work sections called 'primitives' and 'implicit representations'.] This is not a drawback but just the opposite: there is strong prior evidence that such approaches are useful. I also like that the authors spend a reasonable amount of effort on theoretical analysis. That said, I believe that this can be extended to more realistic scenarios (as the authors aptly explained in the limitations).
WEAKNESSES / ISSUES - In addition to aligning the gradients, many works benefit from constraining the gradient norm of the implicit function to be $|\nabla f| = 1$. See for instance: * Slavcheva, Miroslava, et al. "KillingFusion: Non-rigid 3D reconstruction without correspondences." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017. Can we think of a similar approach here? Could the paper show some ablations with regularizers concerning the gradient norm? - Nowadays, the use of implicit 3D representations is omnipresent. In the evaluations, would it be possible to compare against the variants of DeepSDF (e.g. Curriculum DeepSDF or MetaSDF)? With that, it might also be nice to include some more qualitative results in the supplementary. - Would it be possible to include additional real objects that are non-humans? This might involve for instance cars in an autonomous driving scenario. - Some discussion of the following aspects could be valuable for the reader: (i) What would be a good suggestion to handle thin structures? This seems to be a common issue among many SDF-like methods. (ii) The use of raw point sets is good, but such data usually come partially observed. Could this method support partial observations? If not, could there be a workaround? - The Chamfer distance and the variations thereon are obviously not well suited to assess the accuracy of deep implicit representations. This creates an urge for better quantitative metrics, maybe data-driven ones. For the future, I would strongly suggest thinking about those to enable more meaningful evaluation. - Some minor remarks: * Can we already compare D and D' and give an intuition about what they might refer to at the place they are first defined? * "they strives to" -> they strive to * "tested SALD ability" -> tested SALD's ability * "the surfaces produces" -> "the surfaces produced"<doc-sep>This paper studies how to generate meshes from raw point clouds. In particular, this paper proposes a framework which is built on top of the recent "sign agnostic learning" (SAL) work. Compared to SAL, this work adds a gradient penalty term, which encourages derivative consistency. The problem studied in this paper is important; however, the proposed method is very incremental and has several motivation issues. I summarize the pros and cons as follows. Pros: 1. The idea of using a gradient penalty to learn a "sharp" signed distance function seems convincing. In Figure 4, the proposed method preserves sharp features compared to its counterpart SAL. 2. This paper presents a theoretical intuition for why SALD works -- under a uniform distribution assumption, SALD finds the global minimum. Cons: 1. My biggest concern is the motivation for learning a signed distance function from its unsigned observations. For the data (ShapeNet and FAUST) used in this paper, signed distances are immediately available -- one can easily convert a mesh to its implicit representation. To me, learning the signed distance function (as DeepSDF does) is more convincing since direct supervision is available. So why does this method bother to learn the proxy objective (unsigned distance function)? 2. Following 1, the most obvious application of this paper would be learning a signed distance function when the distances are not available -- the input is either a LiDAR scan or a depth image. In that case, if the paper can reconstruct realistic 3D models, it will be much stronger. 3. To some extent, this paper uses neural networks to learn sign priors from data.
There are multiple existing works in this direction which this paper doesn't mention (or briefly mentions but doesn't compare to), e.g., "Deep geometric prior for surface reconstruction" and "Point2Mesh: A Self-Prior for Deformable Meshes". The paper should at least explain the differences between the tasks if it doesn't compare to them. 4. In the implementation details, the paper says it uses a similar architecture to DeepSDF in the auto-decoding case. However, the method shows improvements over DeepSDF. This seems impossible given that DeepSDF learns from direct signed distance supervision. So I am wondering if this is due to a model size difference. I'd like to see more comparisons to DeepSDF under exactly the same model capacity. <doc-sep>## Summary of paper and contributions SALD extends prior work on Sign Agnostic neural implicit shape representations to include a loss term on the derivative of the implicit function. The authors justify the benefits of derivatives in 2 ways: (a) by citing prior work [1] which shows empirically that derivatives decrease the sample complexity of deep ReLU networks, and (b) by showing qualitative improvements over SAL without derivatives. The authors show qualitative evidence that global minimizers of sign agnostic losses (with and without derivatives) satisfy the *minimal surface property*, a desirable property of solutions commonly discussed in the surface reconstruction literature. They demonstrate this property via 2D experiments and via a motivating theoretical example. Finally, the authors show their loss function can be integrated into existing generative shape modelling pipelines, comparing results on ShapeNet and D-FAUST against DeepSDF, which requires pre-computed SDF data, and SAL, which can operate on raw inputs. ## On the benefit of using derivatives The authors cite [1] to motivate the benefit of including derivative terms in the loss. In the case of deep ReLU networks such as the one used by the authors, this prior work shows an empirical reduction in sample complexity when regressing low dimensional functions (Section 4.1), motivated by a theoretical intuition (Section 3). While the neural implicit functions learned by SALD are indeed low dimensional, the shape-space learning problem is not: it learns a map from a point set (consisting of many points) or a high dimensional (256 in the SALD case) latent code to an implicit function. Given this, I don't believe the authors can simply claim a reduction in sample complexity by citing [1] without demonstrating further experimental evidence, especially given the fact that the experiments in the paper do not show SALD drastically improving over SAL. In particular I would be more convinced by an experiment showing the degradation of SAL vs SALD as the number of available samples for a shape is decreased when (a) regressing a single shape directly from data (such as in IGR [2] Section 6), and (b) regressing a shape using an auto-decoder. ## Minimal surface property Showing that global minima of SAL may satisfy the minimal surface property is indeed quite interesting. I do feel however that the claim in the paper regarding this is a bit oversold. In particular "We prove that SAL enjoys a minimal length property in 2D" (Abstract) and "Identifying and providing a theoretical justification for the minimal surface property of [sal]." (end of Section 1). The minimal surface property is well known in the surface reconstruction literature (e.g.
[3] cited by the authors in Section 3) and the theorem shown by the authors appears to be for a specific case in 2D unless I am missing something. While these results are not trivial, I feel the contribution should be rephrased to something along the lines of "We give empirical evidence and theoretical motivation that minimizers of SAL-type losses produce solutions satisfying the minimal surface property [citation]" ## Experimental Evidence I feel like the choices of datasets and baselines are sufficient to show the effectiveness of SALD. There are two experiments however which I feel are missing from the paper: 1. The sample complexity experiment described above. 2. Some kind of performance evaluation. I imagine that computing losses on gradients of networks is quite expensive. How much is the increase in runtime compared to the gains in accuracy? ## Summary of review Generalizing SAL to include derivative quantities is a natural next step for this line of work. The authors show that SALD improves performance over the state of the art on Shapenet and performs comparably on D-FAUST. While these results are great, I feel the paper is missing a few key experiments described above, and that the claims around the minimal surface property are a bit overblown. I am rating this paper as marginally below the acceptance threshold but am more than willing to increase my score if the authors make the requested revisions or give a strong justification as to why they are unnecessary in their rebuttal. ## References [1] Czarnecki et. al. - Sobolev Training for Neural Networks [2] Gropp e.t. al. - Implicit Geometric Regularization for Learning Shapes [3] Zhao et. al. - Fast surface reconstruction using the level set method
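To make the gradient-norm suggestion from the first review concrete (the |\\nabla f| = 1 constraint used in eikonal-style losses such as IGR [2] above), here is a minimal PyTorch sketch of a regularizer that could be ablated alongside the SAL/SALD objectives. This is only an illustration under my assumptions: `implicit_fn`, `query_points`, and the weight `lam` are placeholders, not the authors' implementation.

```python
import torch

def eikonal_regularizer(implicit_fn, points):
    """Penalize deviation of the implicit function's gradient norm from 1.

    implicit_fn: callable mapping (N, 3) points to (N,) implicit values.
    points: (N, 3) tensor of query points (e.g., samples near the input
            point cloud plus uniform samples in the bounding box).
    """
    points = points.clone().requires_grad_(True)
    values = implicit_fn(points)
    # df/dx at every query point; keep the graph so the penalty is trainable.
    grads = torch.autograd.grad(
        outputs=values,
        inputs=points,
        grad_outputs=torch.ones_like(values),
        create_graph=True,
    )[0]
    return ((grads.norm(dim=-1) - 1.0) ** 2).mean()

# Hypothetical usage inside a training step:
# loss = sald_loss(...) + lam * eikonal_regularizer(model, query_points)
```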
Congratulations! The reviewers unanimously viewed this work positively and were in favor of acceptance to ICLR. While the current revision already addresses many reviewer concerns, it may be worth adding some of the datasets pointed out by R3 or comparing to some of the papers suggested by R1.
The paper presents a method to predict when and which two nodes in a graph community will be linked (i.e., when and what event will happen). Rather than taking the whole graph into account, the paper first leverages community detection algorithms to divide the network into subgroups and then performs event prediction within each group. More specifically, a GCN or message passing is utilized to capture the topological information, and a temporal point process is utilized to capture the temporal information. Strong points: 1. The paper studies an interesting problem; it is of practical significance to predict events within a community of a graph. 2. The paper is well motivated and clearly shows the difference between it and existing methods. 3. It is not hard to understand the main idea of this work, and the proposed model, albeit simple, indeed makes sense. 4. The paper presents many ablation studies to verify the effectiveness of the proposed model. 5. Code enclosed. Weak points: 1. The paper repeatedly claims that it is the first to jointly predict the next event's incident nodes and timestamp within a certain community. However, existing methods can already achieve this goal (e.g., the Transformer Hawkes process and its follow-up work [1]). 2. Although the main idea of this work is clear, the description of the model (Sec. 3) is a bit hard to follow due to the large amount of notation used. Moreover, many symbols are not clearly explained, some notations are in bold and some are not, and the subscripts and superscripts are also confusing (e.g., Eqs (2) and (3)). Is Eq. (5) correct? 3. The hierarchical probability-chain forecaster is straightforward and I cannot learn too much from this design. Besides, one concern is that in practice some links are undirected; in such cases, will the order of the factorized terms matter? 4. In Table 3, it is shown that the proposed model significantly underperforms the variant of CEP3 that does not use hierarchical factorization. Although CEP3 is faster, the big drop in performance makes the model less attractive. 5. Stronger baselines (e.g. the Transformer Hawkes Process) and the effects of key parameters (e.g., L) should be included. It is also super important to use different community detection algorithms to divide the network into subgroups. Questions: 1. Why is AR forecasting not useful sometimes? It looks like using AR forecasting should do no harm to the model performance. [1] Zuo, Simiao, et al. "Transformer hawkes process." International conference on machine learning. PMLR, 2020.<doc-sep>The paper describes a model for predicting events in a dynamic graph. Unlike most previous work, the model predicts both the incident nodes and the time step of the event jointly, rather than only predicting one given the other. The model formulation and training are based on temporal point processes. * Suggestions: 1. The paper essentially proposes a new task, a new model, new baselines, and a new evaluation metric, which makes it difficult for the reader to judge how effective the model actually is. I think carefully designed, easy-to-interpret baselines are therefore crucial. Currently the authors only compare against neural network baselines, which are all fairly similar to the proposed model. I would strongly suggest comparing to a more straightforward baseline, for example simply predicting past events with the same time interval as before (or an average time interval).
In my experience, dynamic graph data sets largely consist of repeated events meaning even such a simplistic baseline might perform fairly well. The examples in Figure 3 are encouraging but it is hard to judge how representative they are. 2. In a similar vein as my comment above, it would be informative to add the percentage of unique edges to the data set statistics and compare it to the predictions of the model(s). Again, this would provide more context for the results. 3. Section 3, in particular subsection 3.1 could benefit from some major improvements in terms of clarity. The paper claims that the model does not require unrolling because it uses a pure attention mechanism. However, at the same time, it makes repeated references to recurrent states (eg. l. 139-140, 151 among others). It is unclear whether this refers specifically to the version of the model with an RNN (“CEP w RNN”) or whether it is common to all models. Furthermore, subsection 3.3 refers to auto-regressive message passing, which seems to also require a hidden state. A more structured, clear exposition would be helpful. Minor comments: * “Exponential” in Eq. 7 is typeset incorrectly I can see a number of positive aspects about this work: * The task is carefully and elegantly designed and appears more practically useful than the formulations addressed in prior work. I believe having an effective, well-motivated model for this task would be a great addition to the literature in this field and the proposed model looks like a step in the right direction. * It is commendable that the authors consider the scalability of their approach. In my experience, a lot of prior work is computationally expensive and this task is particularly interesting for large, production-grade graphs. While the paper primarily combines existing neural network components I believe the novelty of the work is sufficient for publication given it addresses a relevant task that is of interest to the wider community. However, in its current form I am reluctant to recommend acceptance simply because the merit of the paper hinges a lot on the effectiveness of the proposed model, which to me remains unclear. If the authors could incorporate some of my suggestions, above all an additional, easy to interpret baselines, I am happy to raise my score.<doc-sep>This paper mainly studies the forecasting problem on continuous-time dynamic graphs. The main motivation is to jointly forecast multiple link events and their timestamps over dynamic graphs. For this aim, the authors propose a united model composed of graph neural networks and marked temporal point process. For scalability, the authors further propose to factorize the joint prediction problem into three easier conditional probability modeling problems. Experiments are conducted to show the improved performance in effectiveness and efficiency. Strengths: 1. The paper is easy to read; the organization of this paper is clear. 2. It is well-motivated for the studied problem. It's interesting to jointly consider forecasting link events and timestamps on dynamic graphs. 3. The experiments part seems to be convincing with the new benchmark for the community event forecasting task. Weakness: 1. It is an incremental work of existing forecasting methods on dynamic graphs. 2. In the method part, some proposed architectures are not explained very well. For example, why design hierarchical probability-chain architectures as forecaster? Is it better for performance? It will be better if some intuitions are given. 3. 
In the experiments part, some recent baseline algorithms are missing for comparison.<doc-sep>########################################################################## Summary: The paper looks at the task of event prediction within communities of a Continuous Temporal Dynamic Graph (CTDG). It aims at jointly predicting the event time and the two nodes involved in the event with the CEP3 method. CEP3 combines a GNN encoder, an MTPP forecaster, and an auto-regressive message passing component to break the joint probability over event type and event time into conditional probabilities, which is more scalable w.r.t. the number of nodes. The paper also proposes evaluation experiments to measure the quality of entity and timestamp predictions. ########################################################################## Reasons for score: Overall, I vote for weak reject. The event prediction task is clearly introduced and well formalized. My major concerns are about the model presentation and the experimental setup (see cons below). Hopefully the authors can address my concerns in the rebuttal period. ########################################################################## Pros: 1. The presentation of the event prediction task on CTDG communities is clear and mathematically well formalized. 2. The CEP3 model combines different techniques to solve a new task in a fairly scalable fashion. 3. This paper provides experiments which evaluate different parts of CEP3. It includes an ablation study and the evaluation of both entity and event time predictions. ########################################################################## Cons: 1. Related Work: The related work description is spread over the paper and the appendix. This sometimes makes it redundant or harder to identify relevant related works. More specifically, l. 50-71 sound a bit redundant with e.g. section 2.1. Further, the TPP related work cites only two related works while the literature is quite rich in this field, as described in [28]. Some related works for TPPs are only mentioned in the appendix. Action suggestion: I feel that concentrating the related work description in one place would improve the paper. I would also extend the TPP related work by e.g. using the survey [28] and partly moving appendix B to the main paper. 2. Model: 1. The model description (sec. 3) is sometimes hard to follow. The paper introduces a very large number of mathematical notations. Eq. (3) would need some explanation even if it relies on previous works. What is the meaning of each variable in Eq. (3)? The meaning of bold variables was unclear to me. Should bold variables be used for all vectors/matrices? What is the difference between the bold and non-bold $z_i^{(l), (t)}$ in (3)? Should the vectors be denoted with arrows like in Eq. (5)? The notations are sometimes not consistent (e.g. the neighborhood of $v$ in Eq. (2) and l. 147, probably a typo). Action suggestion: Only present necessary equations in the main text to reduce the number of notations. Make mathematical notations more consistent. 2. It is not clear to me why the forecaster is “Hierarchical”, and it is not explained in sec. 3.2. Action suggestion: Explain the "Hierarchy" aspect. 3. “Specifically, we initialize $ \\hat{G}_0$ with the candidate node set C as its nodes. Two candidate nodes are connected in $ \\hat{G}_0$ if their distance is within L hops. The resulting graph encompasses the dependency between candidate nodes during the encoding stage”. This sentence was unclear to me. 3. Experiments: 1.
The evaluation is very dependent on the pre-defined communities. The communities are computed with only the Louvain algorithm, which heavily suffers from the resolution limit, especially for large graphs (Resolution limit in community detection, Fortunato et al.). It would be interesting to report results for different community sizes and numbers of communities for each dataset. Action suggestion: Use other clustering algorithms (e.g. linkage algorithms, spectral clustering) to define communities in the experiments. Perform the experiments as the number of communities changes. This is also possible with the Louvain algorithm by changing the resolution parameter (a short sketch of such a sweep is appended at the end of this review). 2. Eq. (12) is supposed to evaluate the predictions $(\\hat{u}_i, \\hat{v}_i, \\hat{t}_i)$, but these prediction notations do not appear in the PP formula. 3. Eq. (13) compares $t_i$ and $\\hat{t}_i$, while the true event might be different from the predicted event. Thus, it is possible to achieve good MAE while the model is completely wrong in terms of entities. Since a key contribution is the joint prediction, it would be more convincing to provide an experimental evaluation of the joint predictions. Action suggestion: Explicitly mention the limitations of this evaluation. Propose an experimental evaluation of the joint predictions. 4. I appreciate the effort to visualize results as in Fig. 3. However, Fig. 3 did not convince me that CEP3 is better than DyRep in this specific case. DyRep does not look less similar to the ground truth than CEP3 does. Action suggestion: Maybe another color scheme would show a better visualization. Another idea is to complement the plots with a quantitative metric measuring the distance to the GT next to each plot. Others: - I feel that it would be appropriate to cite the work(s) that introduced the CTDG framework in line 30. Without these citations, it is hard to understand where this common representation comes from. - typo "the/a" l. 82. I am happy to improve my score if a majority of the above points are addressed (e.g. with the action suggestions). #### Post Rebuttal I believe that the authors improved the paper by providing clarifications and discussing the limitations of the work. Therefore, I increased my score from 5 to 6.
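To make the resolution-parameter suggestion above concrete, here is a minimal sketch of the kind of sweep I have in mind. It assumes networkx >= 2.8 (where `louvain_communities` exposes a `resolution` argument); `edge_list` and the evaluation hook are placeholders, not the authors' pipeline.

```python
import networkx as nx
from networkx.algorithms.community import louvain_communities

# Placeholder: build an undirected static projection of the temporal edge list,
# since Louvain operates on a static graph.
G = nx.Graph()
G.add_edges_from(edge_list)  # edge_list: iterable of (u, v) pairs

for resolution in [0.5, 1.0, 2.0, 4.0]:
    communities = louvain_communities(G, resolution=resolution, seed=0)
    sizes = sorted(len(c) for c in communities)
    print(f"resolution={resolution}: {len(communities)} communities, "
          f"median size={sizes[len(sizes) // 2]}")
    # Re-run the per-community event forecasting evaluation here to check how
    # sensitive the reported metrics are to the community definition.
```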
We agree with the AC that this paper is ready for publication. We encourage authors to incorporate suggestions for clarity improvements, in particular those mentioned by reviewer `XRhC`.
This paper presents new convolutional and pooling operators for protein structures. These components are used to design an architecture that shows strong performance on several downstream tasks. The main strength of the paper is the presentation of new ideas for modeling protein structures. The proposed operators leverage the intuition behind convolutional networks but extend them for the protein case, e.g. by introducing rotational invariance in addition to translational invariance. The ideas themselves are interesting to machine learning researchers and useful to those working proteins. Due to the complexity of the model, I recommend that the authors release their code so that other researchers could evaluate these ideas on additional problems. The writing and presentation is clear. Weaknesses: - More updated baselines should be used. For example, for the sequence-only baselines, the authors should compare to ProTrans [Elnaggar, et al. 2020] or [Rives, et al. 2019] which show better results than the baselines used here. On the structural side, the authors should compare to the architectures proposed by [Du, et al. 2019] or [Anand, et al. 2020]. - A key sequence baseline is missing: multiple sequence alignments. - The only tasks considered are classification tasks. The paper could be improved by evaluating on more practical tasks, such as protein design, e.g. the tasks in [Du, et al. 2019] or [Anand, et al. 2020]. The architecture described here could be very useful in those settings. - The authors compare to Bepler, et al. (2019) which is a great baseline since it uses both sequence and structural information. However, it appears from the text that the authors used the version of this model provided by Rao, et al. However, Rao et al. simply used the architecture from Bepler, et al. and re-trained it on sequence data only. Therefore, I recommend that the authors retrieve the weights from Bepler, et al. directly. - On the fold classification task, the hardest test set considered is "Fold, in which proteins from the same superfamily are not present during training." It would be interesting to evaluate the model on a harder generalization setting in which proteins from the same fold are also not present during training. The delta between this model and DeepSF decreases when the sets go from family -> superfamily -> fold. To complete the picture, it would be important to go one step further. - Relatedly, the authors have not demonstrated that the models can generalize to novel folds. Without demonstrating this, the model cannot be used for important tasks, such as protein design. The paper would be much more compelling if the authors could show that their architecture generalizes better than prior work. To accomplish this, the authors would need to move beyond a classification framework toward a clustering framework because it's impossible for a classifier to predict novel folds. - The names of the test splits on the fold classification task is non-standard. Generally, "fold split" means that proteins from the same fold are not included in the same set; "superfamily split" means that proteins from the same superfamily are not included in the same set, etc. What the authors call the family split ("in which proteins of the same family are present during training") is usually not included as overfitting to / memorizing the training set could still result in good performance (perhaps this is why the proposed model scores 98.9% here). 
To summarize the weaknesses: more work is needed on the baselines and metrics. Additional evidence is also needed to support that the model can generalize to unseen folds. Overall, this paper is a great start and the proposed model architecture could be interesting to ML researchers and practitioners in the biology space. In its current state, this is a borderline paper because it is missing a critical component of generalization of novel folds, which is necessary for this model to have significant impact in the field. If the authors can resolve my concerns during the rebuttal period, I am willing to raise my score. Update: The authors have included an additional experiment around fold generation in Sec 6.6. However, no baselines are included, so it is difficult to understand the result in context and understand how this method generalizes compared to existing methods. The authors have also included two additional baselines: Bepler, et al. and MSAs. More analysis is needed to compare this with SOTA in representation learning. The authors compare to "Elnaggar et al. (2020)" but it isn't clear which model was used. Elnaggar et al. (2020) have released a series of different models. The authors should clarify this in the camera-ready and ensure they used the best models released by Elnaggar et al. I have increased my score.<doc-sep>This paper describes a deep learning architecture for representing and performing classifications on protein structures. The representation involves three different distances: Euclidean distance and the shorted path between two atoms, where edges are either along covalent bonds or also include hydrogen bonds. Each atom has a vector of associated features, and convolution is accomplished by defining a kernel on all three distances and then summing the features of each neighboring atom, weighted by the kernel value. The paper also proposes three protein-specific pooling operations to cope with the large input size when representing all atoms in a protein. Overall, this is an extremely clear paper, and the core ideas appear to be sound. Furthermore, the experimental validation is quite extensive, and the results are impressively good. Some positive points are that the authors consider several different tasks, and numerous state-of-the-art methods are included in the comparison. I particularly appreciated the careful ablation study, demonstrating not just that the entire system works end-to-end but that the various pieces each contribute to its behavior. The experimental setup appears to be valid. There is always the chance that these results could be optimistic due to (presumably unintentional) model selection happening during development of the proposed method, or because of a mismatch between the training data used for the published models and the test set used here. But I can't see how the authors could have done a better job to guard against such issues, other than the obvious step of making their code and trained models publicly available. It is unfortunate that the manuscript makes no mention of this. One drawback to this work is its focus on recent literature. I found it strange that the earliest citation in the related work section is from 2013. The tasks being solved here have been the focus of extensive research going back 25 years or more. The manuscript is up front about the fact that a drawback of the method is its requirement that the input proteins have known 3D structure. 
However, another potential drawback is that the input does not take into account homology information drawn from, e.g., a sequence similarity search over a large protein database. This information is typically represented as a PSSM column for each observed amino acid. I would like to have seen this acknowledged, since it seems like a potentially valuable source of additional information. A minor point: the introduction states that the model captures primary, secondary and tertiary structure, and then says that "As chain bindings affect the tertiary structure, the quaternary structure is captured implicitly." But of course, this argument could apply to any of the other levels: amino acid sequence implicitly captures secondary and tertiary structure. Incidentally, the Murzin cite has an incorrect year (1955). <doc-sep>This paper proposes a graph neural network architecture that operates on the atoms in a protein structure. It proposes a specific multigraph and pooling model structure, constructed using Euclidean distance and 3 types of edges (Euclidean, covalent, and hydrogen+covalent). There are three consecutive levels of granularity, with nodes corresponding to: (1) atoms, (2) amino acids, and (3) grouped amino acids. The model is used to make a global prediction for a protein, and results are presented on the tasks of Fold Classification and Reaction Classification. I recommend rejection for the paper in its current form, based on the concerns about the relevance of this method for fold classification (1/2 experiments), its framing as representation learning, and its framing as convolution vs graph neural networks. ## Strengths: * The two key model choices feel like powerful choices for a graph neural network with an interesting domain-motivated set of architectural choices: - construction of the hypergraph with shortest-distance edges of 3 types - custom pooling of the graph from atom-level nodes to groups of amino acids * The paper has helpful visualizations and is well written; however, see below for concerns around framing. * Excellent ablation study in Table 2. ## Weaknesses: * The key weakness is that the protein structure has to be provided as input to the network - ~~therefore fold classification is a flawed experiment as the full atomic coordinates is all that's needed for perfect assignment to the folds. Specifically comparisons to sequence-only classification (TAPE, Unirep, etc) are misleading.~~ - as the authors point out, the amount of available data is tremendously less than for sequence-only models. In fact the framing as "representation learning" is odd in this context, as there is no way to leverage unlabeled data, and no self-supervised objective is proposed. * I find the framing of the method somewhat misleading, on a few counts: a) ~~representation learning~~ (see remarks above, no self-supervised + transfer of features) b) naming the core layer of the model a "convolution on 3D protein structures" is off. A crucial element of standard convolution is the regularity of the domain, while this is intrinsically graph-structured data. Furthermore, I believe the method still fits in the "neural message passing" framework (see bullet below). Therefore the proposed architecture seems to be much better summarized as "message passing graph neural network on a hierarchy of multi-graphs (hierarchy through protein pooling), with 3 types of graph edges defined by bonds and Euclidean distances".
c) after pooling, when vertices don't correspond to atoms but to clusters, the proposed convolution/GCN does not directly apply anymore. What are the edge connections at this stage? * I disagree with the phrase "Although this operation could appear similar to message passing algorithms, they differ significantly". I believe the method fits in the MPNN framework, roughly as follows (notation following Gilmer et al 2017, renaming x, xi to v, w): - hidden state (per node) F_v with v the node (either atom, group of atoms, amino acid, or cluster of amino acids, depending on the stage in the hierarchy) - edge introduced if Euclidean distance d(vw) < m_e - edge features: 3 distances clamped to [0,1] - learned message function $M = \\kappa(e_{vw}) \\cdot F_w$ - $h_v^{t+1} = m_v^{t+1}$ - or possibly including batchnorm and relu. The above re-formulation is quite close to GCN from Kipf & Welling (2016) but with a more complex learned message function of the 3 distances. * Writing clarity: for eq (1) the notation needs to be introduced with dimensions (x, $\\kappa$, F). Specifically, for the kernel $\\kappa$ it needs to be made clear that $\\kappa_j$ is a function from $R^3 \\to R$ (?) * Comments on experimental results: - ~~as mentioned above, I think fold classification is not an appropriate benchmark for this model~~ - for enzyme reaction classification, a sota method on this problem should be included as a benchmark. Ryu et al 2019 (DeepEC) seems relevant here, or a method based on HMM profiles. ----- ### Edit: reply to author's response and updated paper (also see strikethroughs in the original review above) * Fold classification: let me withdraw my concern here, and I will defer to the other reviewers' & AC's judgement on whether this task makes sense with protein structure as input -- indeed it may not be a trivial task. * Framing as (a) representation learning: improved in the updated paper, (b) convolution: still stands - the point cloud convs are not a very good comparison, since there is no graph structure there. (c) pooled coarsened graph stages: thanks for the pointer to the end of Sec 5. * Positioning wrt message passing: the paragraph is a big improvement, removing some claims about over-smoothing. However, re: "the message passing function is learned": this is still very much within the default MPNN framework from Gilmer et al. Altogether, the whole method would still be much better framed as a graph-based network, rather than shoehorning this into a description of a single "convolutional operator". This will allow a proper discussion of what is currently the end of Sec 5, where the graphs do not correspond to an atom-level graph anymore; rather they now correspond to amino acid or coarser level graphs - it is confusing that these coarser graph stages are so briefly glossed over. -- The citation to "can also be understood in a message passing framework (Kipf & Welling, 2017)" is off, should be "Gilmer et al., 2017" https://arxiv.org/abs/1704.01212 In conclusion, I am raising my score from 4->5, leaning towards 6. There is a lot of good work in this paper, and I would consider the paper a clear accept with the same method and same results, if it were thoroughly rewritten based on graph neural networks. Requiring full atomic structure as input to the method is the major limitation to the application and impact of the method.<doc-sep>Pros: - I think the paper is exceptionally well-written and the figures are very carefully designed. Applaud! - Thank you for proper train/test/validation splits!
Glad there are varying degrees of difficulty with proper held-out sequences. - I very much appreciate the proper comparison to other methods. Very thorough. - Less important, but the model also performs better at these two tasks than any other approach. (I say this because I believe the field shouldn't always require SoA if there is a significant technical advancement.) Cons: - The authors cite "over-smoothing" for why their convolution operator performs better, but provide no direct evidence that this is the case. It either needs to be noted that this is a hypothesis, or a more concrete evaluation needs to be performed to make this claim. - Are there any replicates for standard error and ablation studies? - The Table 3 BLAST comparison is weak. JackHMMER or HMMER-based tools are more appropriate than BLAST. Neutral: - What defines a hydrogen bond? This definition is clear to me in secondary structure, but seems looser in tertiary structure. - In your figures, it looks like only carbons, oxygens, and nitrogens are defined. What about hydrogens? If hydrogens aren't parameterized, how do you define hydrogen bonds? This may be good to clarify. - In Table 2, does the modification of the architecture change the number of parameters? - A definition of a "ball query" might be helpful. - Are there any sequences with post-translational modifications in the dataset? If so, how are those handled?<doc-sep>__Summary__ The authors describe a method to transform 3D protein structures for supervised machine learning. Their method introduces a convolution operation that considers both the intrinsic distances between atoms as defined by their bond structure and the extrinsic distances as defined by 3D proximity (a small sketch of how I read this operation is appended after these comments). They also introduce interpretable pooling operations developed using known biology of the amino acids. Overall, the method is effective and straightforward to follow due to having avoided unnecessary complexity. The figures greatly aid the reader. The authors’ method outperforms a variety of competitive alternatives on protein fold and function classification tasks. These are important problems for which the authors’ model has achieved a significant performance boost. I don’t see why this model wouldn’t work well for any 3D protein structure labels that can be collected. They also perform a thorough ablation analysis to establish the contribution of the various components of their method. __Major comments__ * I wasn’t able to understand what the “neighborhood” ablations represent and how they differ from “convolution” ablations. Are the neighbors used for anything other than the convolutions? For example, “CovNeigh” uses only the intrinsic distances, similarly to “InConvC”. What makes these different? __Minor comments__ * On page 7, a Table 4 is mentioned that doesn’t appear to exist. I think they mean Table 3.
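For concreteness, here is how I read the convolution summarized above: a kernel evaluated on the three distances produces the weight used to sum neighbor features. This is a hedged sketch of Equation (1) under my own simplifications (the MLP kernel parameterization, tensor names, and clamping are mine, not the authors' code).

```python
import torch
import torch.nn as nn

class DistanceKernelConv(nn.Module):
    """Sum neighbor atom features weighted by a learned kernel of three
    distances (Euclidean, covalent-path, covalent+hydrogen-path)."""

    def __init__(self, in_dim, out_dim, hidden=32):
        super().__init__()
        # kappa: R^3 -> R^(out_dim * in_dim), one small MLP shared over edges.
        self.kappa = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(), nn.Linear(hidden, out_dim * in_dim)
        )
        self.in_dim, self.out_dim = in_dim, out_dim

    def forward(self, feats, edge_index, edge_dists):
        # feats: (N, in_dim) node features; edge_index: (2, E) long tensor of
        # source/destination indices; edge_dists: (E, 3) distances in [0, 1].
        src, dst = edge_index
        weights = self.kappa(edge_dists).view(-1, self.out_dim, self.in_dim)
        messages = torch.einsum("eoi,ei->eo", weights, feats[src])
        out = feats.new_zeros(feats.size(0), self.out_dim)
        out.index_add_(0, dst, messages)  # sum messages into destination nodes
        return out
```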
Protein molecule structure analysis is an important problem in biology that has recently become of increasing interest in the ML field. The paper proposes a new architecture using a new type of convolution and pooling both on Euclidean as well as intrinsic representations of the proteins, and applies it to several standard tasks in the field. Overall the reviews were strong, with the reviewers commending the authors for an important result on the intersection of biology and ML. The reviewers raised the points of: - weak baselines (The authors responded with adding suggested comparison, which were not completely satisfactory) - focus mostly on recent protein literature - the reliance of the method on the 3D structure. The AC however does not find this as a weakness, as there are multiple problems that rely on 3D structure, which with recent methods can be predicted computationally rather than experimentally. We believe this to be an important paper and thus our recommendation is Accept. As the AC happens to have expertise in both 3D geometric ML and structural biology, he/she would strongly encourage the authors to better do their homework as there have been multiple recent works on convolutional operators on point clouds, as well as intrinsic representation-based ML methods for proteins.
Summary: ======== The paper presents rates of convergence for estimating nonparametric functions in Besov spaces using deep NNs with ReLU activations. The authors show that deep ReLU networks, unlike linear smoothers, can achieve minimax optimality. Moreover, they show that in a restricted class of functions called mixed Besov spaces, there is significantly milder dependence on dimensionality. Even more interestingly, the ReLU network is able to adapt to the smoothness of the problem. While I am not too well versed in the background material, my educated guess is that the results are interesting and relevant, and that the analysis is technically sound. Detailed Comments: ================== My main criticism is that the total rate of convergence (estimation error + approximation error) has not been presented in a transparent way. The estimation error takes the form of many similar results in nonparametric statistics, but the approximation error is given in terms of the parameters of the network, which depend opaquely on the dimension and other smoothness parameters. It is not clear which of these terms dominates, and consequently, how the parameters W, L, etc. should be chosen so as to balance them. While the mixed Besov spaces enable better bounds, the condition appears quite strong. In fact, the lower bound is better than for traditional Holder/Sobolev classes. Can you please comment on how the m-Besov space compares to Holder/Sobolev classes? Also, can you similarly define mixed Holder/Sobolev spaces where traditional linear smoothers might achieve minimax optimal results? Minor: - Defn of Holder class: you can make this hold for integral beta if you define m to be the smallest integer less than beta (e.g. beta=7, m=6). Imo, this is standard in most texts I have seen. - The authors' claim that the approximation error does not depend on the dimensionality needs clarification, since N clearly depends on the dimension. If I understand correctly, the approximation error is in fact becoming smaller with d for m-Besov spaces (since N is increasing with d), and what the authors meant was that the exponential dependence on d has now been eliminated. Is this correct? Other - On page 4, what does the curly arrow notation mean? - Given the technical nature of the paper, the authors have done a good job with the presentation. However, in some places the discussion is very equation driven. For example, in the 2nd half of page 4, it might help to explain many of the quantities presented in plain words. Confidence: I am reasonably familiar with the nonparametric regression literature, but not very versed in the deep learning theory literature. I did not read the proofs in detail. <doc-sep>This paper makes two contributions: * First, the authors show that function approximation over Besov spaces for the family of deep ReLU networks of a given architecture provides better approximation rates than linear models with the same number of parameters. * Second, for this family and this function class they show minimax optimal sample complexity rates for the generalization error incurred by optimizing the empirical squared error loss. Clarity: Very dense; could benefit from considerably more exposition. Originality: afaik original. Techniques seem to be inspired by a recent paper by Montanelli and Du (2017). Significance: unclear. Pros and cons: This is a theory paper that focuses solely on approximation properties of deep networks.
Since there is no discussion of any learning procedure involved, I would suggest that the use of the phrase "deep learning" throughout the paper be revised. The paper is dense and somewhat inaccessible. Presentation could be improved by adding more exposition and comparisons with existing results. The generalization bounds in Section 4 are given for an ideal estimator which is probably impossible to compute.<doc-sep>This paper describes approximation and estimation error bounds for functions in Besov spaces using estimators corresponding to deep ReLU networks. The general idea of connecting network parameters such as depth, width, and sparsity to classical function spaces is interesting and could lead to novel insights into how and why these networks work and under what settings. The authors carefully define Besov spaces and related literature, and overall the paper is clearly written. Despite these strengths, I'm left with several questions about the results. The most critical is this: piecewise polynomials are members of the Besov spaces of interest, and ReLU networks produce piecewise linear functions. How can piecewise linear approximations of piecewise polynomial functions lead to minimax optimal rates? The authors' analysis is based on cardinal B-spline approximations, which generally makes sense, but it seems like you would need more terms in a superposition of B-splines of order 2 (piecewise linear) than higher orders to approximate a piecewise polynomial to within a given accuracy. The larger number of terms should lead to worse estimation errors, which is contrary to the main result of the paper. I don't see how to reconcile these ideas. A second question is about the context of some broad claims, such as that the rates achieved by neural networks cannot be attained by any linear or nonadaptive method. Regarding linear methods, I agree with the author, but I feel like this aspect is given undue emphasis. The key paper cited for rates for linear methods is the Donoho and Johnstone Wavelet Shrinkage paper, in which they clearly show that nonlinear, nonadaptive wavelet shrinkage estimators do indeed achieve minimax rates (within a log factor) for Besov spaces. Given this, how should I interpret claims like "any linear/non-linear approximator with fixed N -bases does not achieve the approximation error ... in some parameter settings such as 0 < p < 2 < r "? Wavelets provide a fixed N-basis and achieve optimal rates for Besov spaces. Is the constraint on p and r a setting in which wavelet optimality breaks down? If not, then I don't think the claim is correct. If so, then it would be helpful to understand how relevant this regime for p and r is to practical settings (as opposed to being an edge case). The work on mixed Besov spaces (e.g. tensor product space of 1-d Besov spaces) is a fine result but not surprising. A minor note: some of the references are strange, like citing a 2015 paper for minimax rates for Besov spaces that have been known for far longer or a 2003 paper that describes interpolation spaces that were beautifully described in DeVore '98. It would be appropriate to cite these earlier sources.
The paper extends the results in Yarotsky (2017) from Sobolev spaces to Besov spaces, stating that once the target function lies in certain Besov spaces, there exist deep neural networks with ReLU activations that approximate the target at the minimax optimal rates. Such adaptive networks can be found by empirical risk minimization, which, however, is not yet known to be achievable by SGD etc. This gap is the key weakness of applying approximation theory to the study of constructive deep neural networks for certain approximation spaces, as it lacks algorithmic guarantees. The gap will hopefully be filled in future studies. Despite the incompleteness of approximation theory, this paper is still good, solid work. Based on the fact that the majority of reviewers suggest acceptance (6, 8, 6), with some concerns about clarity, the paper is proposed as a probable accept.
This paper introduces a method, Time Control, to enhance the global coherence of text generated from language models. Under the assumption that, in the latent space of sentence embeddings, incoherent text can be seen as "Brownian motion", and that fixing a start and an end to this Brownian motion lets the process of text generation be modeled as a Brownian bridge, the authors derive a method that consists of three steps: (1) training an encoder to map sentences to a latent plan defined as a Brownian bridge; (2) training a decoder to reconstruct sentences from the given context plus the true encoded vector of the target sentence in the planning latent space, using the trained encoder; (3) at inference time, given a start and end point, sampling a target trajectory of vectors $z_0, ..., z_t, ..., z_T$ and using the decoder to generate sentences based on this bridge (a small sketch of this sampling step is appended at the end of this review). The authors run several experiments to (1) evaluate the hypothesis that the encoder can capture local text dynamics using a sentence order prediction task; (2) evaluate the decoder's ability to generate locally coherent text using the text-infilling task; (3) capture global text statistics by measuring statistics of the generated text (length of Wikipedia sections for city articles) and comparing them to the ground truth; (4) evaluate the overall coherence of the long generated text. Overall the results look convincing except for some caveats (see the areas of enhancement). **Pros** - The paper is well structured and easy to follow; the idea of mapping sentences to a Brownian bridge latent space is neat and generic enough that it (1) allows for noise given its stochasticity and (2) doesn't require explicit domain knowledge for planning. - Well-structured experiments section with 4 RQs and results that confirm each of the hypotheses. - Reproducibility and transparency in the reporting of experiments in terms of available source code, dataset information, details about human evaluation, and generation examples. **Areas of Enhancement & Questions to authors** - The information about each of the ablations (ID, BM) could be explained better, namely in the section "Ablations". - There's a clear inconsistency in the best TC method between different latent dimensions (8, 16, 32); in most of the experiments there's at least one of the 3 that performs drastically worse than the other baselines, while there's overall no clear winner. I wonder if you have thoughts about this. - In Table 5 the VAE (32) method performs the best overall in "Wiki section", although the TC (16) method has been highlighted as the best. Is there a reason behind this? - During the training of the decoder, how do you make sure that the decoder uses the information given by the latent plan? - Overall the paper would have benefited from an intrinsic visualization of the latent space, to make sure, for example, that there's no information collapse of the embeddings when dealing with long sentences. This could be done by visualizing the difference in planning trajectories between coherent and incoherent text. The paper introduces a simple method for preserving coherence in language modeling; it builds on previous work that tried to implicitly model planning dynamics. The introduced solution is effective and general enough to not need domain-specific planning information. It is a good paper to accept overall. I advise the authors to clarify the information about the used baselines in a clearer manner.
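To make step (3) concrete, here is a small sketch of how a trajectory pinned at two latents can be drawn from a Brownian bridge, using the standard bridge conditionals. The noise scale `sigma`, the horizon `T`, and the latent shapes are placeholders, and this reflects my reading of the inference procedure rather than the authors' exact code.

```python
import numpy as np

def sample_bridge(z0, zT, T, sigma=1.0, seed=0):
    """Sample a Brownian bridge path z_0, ..., z_T pinned at z0 and zT.

    Each step draws z_t given z_{t-1} and z_T, which for a bridge with
    diffusion sigma^2 is Gaussian with
      mean = z_{t-1} + (zT - z_{t-1}) / (T - t + 1)
      var  = sigma^2 * (T - t) / (T - t + 1).
    """
    rng = np.random.default_rng(seed)
    z0, zT = np.asarray(z0, dtype=float), np.asarray(zT, dtype=float)
    path = [z0]
    for t in range(1, T):
        prev = path[-1]
        remaining = T - t + 1
        mean = prev + (zT - prev) / remaining
        std = sigma * np.sqrt((T - t) / remaining)
        path.append(mean + std * rng.standard_normal(z0.shape))
    path.append(zT)
    return np.stack(path)  # (T + 1, latent_dim); each z_t conditions the decoder
```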
<doc-sep>This paper proposes generation from a language model not only from an initial state, but also using a goal state. Instead of Brownian motion, the authors employ a draw from a Brownian bridge by designating initial and end states, called Time Control. Experimental results show the proposed generation from a Brownian bridge is more natural and coherent for the text-infilling task, and also preserves text structure, both by automatic evaluation and human evaluation. Using a Brownian bridge is a very simple and effective idea for text generation. My only concern is the range of its applicability: while it is far more natural than a simple random walk, Time Control only allows designating the first and last states for generation. However, in actual situations, it is not always the case that the first (and sometimes the last) sentence should have a designated state. The first few sentences might constitute just an ice-breaker, and the actual content might start after that. More generally, it is more desirable that we can condition the generation at arbitrary times. In fact, I think that this can be done by a conditional draw from a Gaussian process. Since Brownian motion corresponds to using an exponential kernel of a GP, sentence generation from a conditional GP would be the way for a future extension of this work. Anyway, this work will surely pave the way for such principled generations. Minor - Some tables are located within the main text. Tables and Figures should be placed at the top or bottom of the page for readability: please use \\begin{figure}[t] for something like that. - Numerical results in Tables can be rendered in a smaller font (i.e. \\small). Also I recommend condensing the line spacing for Tables for readability, using \\usepackage{setspace} and begin{spacing}{0.9} ... end{spacing}, for example. Nice attempt at random generation from neural language models using the idea of a Brownian bridge. This work will pave the way for more principled random generation from language models. <doc-sep>This paper proposes to model the evolution of sentences in a document via a stochastic process; specifically, a Brownian bridge process. The paper starts off by assuming that the sequences generated by autoregressive models like GPT-2 follow Brownian motion, in that they tend to get incoherent and "meander" in the semantic space. This paper aims to reduce this random behavior by pinning the endpoints of the trajectory and modeling the generation with a Brownian bridge process instead. The key intuition behind this process is that given two endpoints z0 and zT, the evolution of z along time t is a Gaussian with a mean that is a linear combination of z0 and zT. This paper models text by training an encoder that maps sentences x to embeddings z over triplets (x0, xt, xT) with 0<t<T, encouraging zt to follow Brownian bridge dynamics and using a contrastive loss with a negatively sampled x't for training (a rough sketch of this objective is given below).
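As a rough illustration of the triplet objective just described (my reading only, since the exact form and normalization of the paper's Eq. (2) may differ, and `encoder`, `x_negs`, and the batching are placeholders):

```python
import torch
import torch.nn.functional as F

def bridge_score(z0, zt, zT, t, T, sigma=1.0):
    """Higher when z_t is close to the Brownian bridge mean between z0 and zT
    (an unnormalized Gaussian log-density with the bridge variance)."""
    alpha = t / T
    mean = (1.0 - alpha) * z0 + alpha * zT
    var = sigma ** 2 * t * (T - t) / T + 1e-8
    return -((zt - mean) ** 2).sum(-1) / (2.0 * var)

def contrastive_bridge_loss(encoder, x0, xt, xT, x_negs, t, T):
    """x0, xt, xT: batched positive triplet; x_negs: list of batches of
    mismatched middle sentences used as negatives."""
    z0, zt, zT = encoder(x0), encoder(xt), encoder(xT)          # each (B, d)
    pos = bridge_score(z0, zt, zT, t, T)                        # (B,)
    negs = torch.stack([bridge_score(z0, encoder(xn), zT, t, T)
                        for xn in x_negs])                      # (K, B)
    logits = torch.cat([pos.unsqueeze(0), negs], dim=0)         # (1 + K, B)
    targets = torch.zeros(logits.shape[1], dtype=torch.long)    # true z_t = class 0
    return F.cross_entropy(logits.t(), targets)
```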
The approach is tested for local coherence, long-range order sensitivity, and generation of long sequences, and is compared against ablative and external baselines. The proposed approach does lead to learning embeddings that are obtainable via linear combination, and this leads to improved performance on sensitivity to sentence order in documents and on document generation. This paper has an interesting approach and tackles the important problem of streamlining sequence generation from autoregressive models. The experiments show the value of learning a manifold over the latents that have a linear relationship with some stochastic perturbation. They provide evidence that learning in such a manner is promising for maintaining coherence over long text generation. However, the setting is fairly limited because this approach requires two contextual endpoints, the start and finish. This is especially underwhelming given that the introduction states that this approach aims to perform \\emph{controllable goal-oriented} generation. In my view, the setting described and experimented with doesn't reflect this goal. For example, there are limited experiments with regard to controllable generation, or goal-oriented generation tasks. Secondly, the assumption that autoregressive generation follows a Brownian motion is strong and I would like to see some empirical evidence or a theoretical argument supporting this. One simple experiment could be to actually try to fit a Brownian motion model to a bunch of sequences generated from GPT-2, and show that this fitted model is not suitable for naturally occurring text. Experiment-wise, my biggest concern is the VAE baseline. The point of this baseline is to show that for the same setup of the Brownian bridge process, contrastive learning is better than the VAE objective, but I feel that the VAE implementation as described in the appendix does not make the comparison fair. Due to the lack of details in the paper, I am assuming that the priors p(z0) and p(zT) are standard Gaussians. If this is not true, then a clarification would ease this concern of mine. But assuming this is true, the loss basically tries to match the encoder distributions q(z0) and q(zT) obtained by f_{\\theta}(x0) and f_{\\theta}(xT) to the standard Gaussian. What this means is that there is a pressure to make the 0 and T embeddings similar, which is not at all what we want from this bridge process model. A more careful instantiation of the prior for the VAE, or even learning a time-sensitive prior, would be a better implementation of the VAE baseline. Table 1 is another concern. This experiment basically trains a linear classifier over the encodings to identify if they are in or out of order. The proposed approach is naturally suited for this metric/classifier because the encodings at different times are more or less linearly related with some stochasticity. However, this is not true for the other baselines, so I am not sure what the takeaway message from this experiment is. Also, more exposition on the Brownian motion baseline would be helpful. The current description is not enough to get an idea about what exactly was done for generation and the other experiments with this baseline. On a related point, I don't get why BM for Table 2 would be the same as the Brownian bridge. Isn't it the case that the Brownian motion baseline doesn't get to see \\emph{both} the endpoints? If I am mistaken about this, then more exposition is required here because I checked both the paper and the appendix carefully for this.
Table 5 shows mixed results. More discussion and analysis here would be helpful. For clarification: please make explicit whether the triplets have a notion of distance or not, i.e., whether the loss is sensitive to different values of t depending on which middle sentence was sampled. From the context, I am assuming this is the case, but clarification would be helpful. Also, the notation in equation 2's denominator is confusing. Are you summing over all the negative x_{t'}? Overall, I think this paper is well motivated and proposes a reasonable solution to improve the coherence of model-generated text. This is supported by ample experiments, but I have serious concerns about some of the crucial experiments and baselines that I have detailed in my main review. Also, I think that the paper could be clearer about its contributions and implementation details. ----- Post rebuttal: thanks to the authors for the detailed response addressing many of my concerns. My biggest concern about the prior in the VAE baseline is somewhat alleviated given that the authors used different fixed priors for the two settings. While this could be improved by having learnable priors/better priors, I think the current setting makes the experiments reasonably sound. I have raised my score. <doc-sep>The authors propose to use a Brownian bridge process to model the global coherence of a long piece of text. They show how to train such a model in an encoder-decoder style setup, using a contrastive loss to model the Brownian bridge dynamics. The authors then verify aspects of their model with a series of experiments to show that their model with an underlying generative process outperforms competing approaches on a variety of local and global coherence and generation tasks. I really like the main modelling contribution of this paper. It is this reviewer's personal opinion that to do long-form text generation, it is not enough to generate token-by-token, but that some high-level planning is required, and the Brownian bridge process model (Time Control; TC) the authors propose is definitely a good candidate to model the latent drift of discourse (indeed, papers like [1] already used random walk-style models to explain properties of word vectors). There are some prior works on using structured probabilistic models, such as switching latent dynamical systems, for text generation [2], which should also be cited. The motivation of the presented model is clear, and the description of how the model is trained is generally clear enough to reimplement. It wasn't immediately clear that training the model on triples only is enough to guarantee general Brownian bridge dynamics for the entire text trajectory; I feel a note should be added to clarify this. My other quibble here is with how the model is presented: although the general probabilistic model is written down in Equation 1, the likelihood function (i.e. the functional form of p(z_t | x_0, x_t, x_T)) is not explicitly written down anywhere, which leads to confusing things like the variance of the process \\sigma^2 being used in Equation 3 without prior introduction. I feel like explicitly writing down the likelihood would make the equations in the paper flow much better. I feel the major weakness of this paper is with the experimental sections. For various reasons, I have objections to each of the experiments, which I will go through below: - The first experiment attempts to show that TC is a better model of local discourse coherence.
The authors take two sentences from a document k steps apart, embed them, and then attempt to predict the sentence ordering from the embeddings. They say that for k=1 all models considered perform at chance level on all datasets, and only show results for k=5 and k=10. However, models trained using the k=1 objective (such as ALBERT [3] and StructBERT [4]) seem to be able to perform the task better than chance, so theoretically this should be possible. Therefore, I think the baselines should at least include an ALBERT model to show the performance upper bound on this problem. Further, at k=5 (or even 10) the sentences start becoming very far apart (10 dialogue turns is more than enough to complete some of the simple dialogue agent tasks!), so it's questionable whether the model is really modelling 'local' dynamics at this point. - The second experiment looks at text infilling on the ROCStories dataset, and uses BLEU and BLEURT to automatically evaluate their models (although the BLEURT results do not appear to be anywhere in the paper). The reported BLEU results are really low, to the extent that it's unclear whether an improvement from 2 to 5 BLEU is really meaningful. Part of the issue is that BLEU measures precision, which penalises text generation where there are a variety of possible outputs; for this reason, [2] report ROUGE results on ROCStories, which are much better. The missing BLEURT results would help contextualise model performance here. The human evaluation shows the model performs about as well as the ILM baseline from [5], which is ok I guess? In addition, the table ordering is incredibly confusing. Table 6, which shows the human evaluation for experiment 2, appears much later in the text, after tables for the later experiments. It took me a long time to find it. Can you group the tables a bit better, in thematic order? - The third experiment attempts to measure 'global text dynamics' by measuring length mismatch per section on Wikisections. It's unclear what notion of 'global text dynamics' the authors are referring to - there are many theories on discourse coherence of long text, and none of them easily map onto a simple measure of section length. If the authors simply mean whether the model has learnt a notion of document structure, I think it would be better to be more explicit about this: showing that fine-tuned GPT2 can't even replicate the structure of a homogeneous document corpus is an interesting negative result. - The fourth experiment forces models to generate beyond the expected document length by suppressing generation of the EOD token. I'm really not a fan of this experiment, because I don't even expect TC to perform well on it. Do the authors just keep on conditioning the decoder on z_T, and force the model to generate from this? At this point, the model is just a standard autoregressive model, so the modelling contribution should have no effect. Alternatively, do the authors resample z_{t+1} each time the model finishes generating a sentence? In which case, how do the authors preserve the Brownian bridge dynamics, conditioning on hitting a target state z_T? There are a few methodological issues with this experiment. A better experiment to run would be to simply ask the human annotators to score texts freely generated from GPT2 and TC for coherence, as a measure of how well TC can generate coherent text.
Overall, while the experimental section is weak, I really believe the core idea of directed Brownian dynamics for planning is a cool one, and deserves to be shared more widely. This is why I recommend acceptance. References: - [1]: RAND-WALK: A latent variable model approach to word embeddings, Sanjeev Arora et al. 2015 - [2]: Generating Narrative Text in a Switching Dynamical System, Noah Weber et al. 2020 - [3]: ALBERT: A Lite BERT for Self-supervised Learning of Language Representations, Zhenzhong Lan et al. 2021 - [4]: StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding, Wei Wang et al. 2021 - [5]: Enabling Language Models to Fill in the Blanks, Chris Donahue et al. 2020 ================== Post author response: > Nonetheless, we think these observations fit well with the intuition our work proposes: Neighboring sentences are close to each other and act like Brownian motion where ordering is difficult to infer, and goal-orientedness / discourse structure emerges on longer stretches of sentences in a document. I like this framing - currently it's implicit in the paper, but maybe it can be made more explicit that we expect the larger k results to be better, and that this verifies the Brownian bridge approach towards modelling text dynamics. > Nonetheless, the end arbiter of this task is a human (how coherent do the generations sound to a human?) and we care about at least matching ILM, a method developed specifically for text-infilling. So, it’s promising that our method performs better and/or competitively with ILM on human-based metrics (BLEURT and Human evaluations in Table 6). I think it should be made explicit then that ILM is in effect an upper bound for model performance, as it is a model trained specifically to do the task, and that matching the performance of ILM is actually a strong result for the TC model. > So, to directly answer the reviewer’s question: we do not condition the decoder on z_T and do not resample new latents during generation. The model is thus primed to generate much longer text than it was typically exposed to? Thank you for the clarification. > We in fact do already ask human annotators to score the generation (rf. Table 7). In this setup, we remove the middle section of the generated output as the text is extremely long. See Figures 3-6 for examples of the full forced long text generation results. I believe the stronger (and more realistic) human evaluation is to not just evaluate the tail coherence on forced long text generation, but instead directly sample from the model naturalistically and evaluate that output using human annotators. If TC better captures global coherence, this should be visible even in this setting. Overall, I would like to thank the authors for their response. Many of my concerns have been addressed, and I am happy to increase my score. Interesting modelling contribution to ensure global coherence of generated text. The proposed modelling approach could have wide applicability, which is why I recommend acceptance.
All reviewers found that the proposed LM with Brownian motion is interesting and novel. Several reviewers raised (minor) concerns about experiments, but these have been generally resolved by the authors.
The authors propose a new explanation type called subgoal-based explanations in the setting of sub-optimal and non-robust Intelligent Decision Support Systems (IDS). The aim of the explanation type is to serve as a training benefit for users to (i) determine when to trust that a recommended action is optimal, and (ii) make better decisions in the absence of a recommended action through an enhanced understanding of the task due to previously provided explanations. The proposed explanation contributes to improved user task performance and is preferred over other explanation types by users in the study performed. The proposed explanation type is intuitively simple and straightforward. The objective of the explanation is to guide naive users rather than the domain experts that most of the current explanation types cater to. The proposed method does not assume that the IDS system is ideal or optimal. The approach is measured in four different dimensions: (i) Are users able to reject more suboptimal action recommendations? (ii) Are users able to make better decisions in case the IDS breaks down and becomes unavailable? (iii) Do users perform the task better (with lower plan cost) with subgoal explanations? (iv) Do users prefer the explanations over other explanation types? There are, however, a few points I'd like to highlight: 1. It would be helpful to the readers if the authors discuss how contrastive explanations relate to the proposed explanations. Would it help to have contrastive explanations as a baseline for comparison? It would also help to clarify why CLC explanations are most relevant. 2. Is rejecting a suboptimal action always a good decision for the users? It is possible that the user ends up making a decision that is worse than the suboptimal recommended action. It'll be nice to see a study of how many optimal and suboptimal decisions (different from the ones that the IDS suggests) a user makes after they reject the suboptimal action recommended by the IDS with subgoal explanations. 3. In the restaurant game planner description, I suggest adding the reasons why a horizon of 35 was fixed in advance. 4. From the explanations, it seems that replanning is needed to generate explanations if the user decides to reject the recommended action. Is that correct, or does it replan after every action performed by the user? Should replanning cost (maybe in terms of time) be added to the overall cost for the task? 5. A discussion on how rejected or accepted recommended actions by the user relate to the user's trust in the IDS would be interesting. 6. Did the participants involved in the study know the probability of the IDS recommendations being accurate? It would be helpful to conduct studies with IDS systems with different accuracies (currently only done with 85%). 7. In future work, it'll be interesting to see how subgoal explanations will perform in domains with dead-ends or reversible actions. Consider a situation where the user rejects the optimal action suggested by the IDS assuming it to be suboptimal and performs a worse sub-optimal action. This worse sub-optimal action may either be impossible to recover from or may require reversible actions or re-performing the actions in the correct order. Can subgoal explanations also output a confidence level that will help a user to identify such critical actions and trust the IDS recommended action more? There are minor grammatical mistakes that need to be corrected for easy readability and to avoid confusion: 1.
The notation for p-value in empirical evaluation coincides with the notation for probability p with which a recommended action is changed to a suboptimal action. 2. In figure 2, the meaning of asterisks needs to be added to the caption. 3. In the abstract, "a suboptimal actions" --> "suboptimal actions" 4. In the introduction, in the 3rd paragraph and 2nd line: "characters" --> "characteristics"? 5. In the introduction, in the 4th paragraph and 3rd line: the task performance is not negatively impacted by the sudden absence of the previously available recommendations --> the task performance is not negatively impacted by the sudden absence of the recommendations. "previously available" should be removed as the previous recommendations are available but it is the current recommendations that become unavailable. 6. In the paragraph above related work: "broadly applicable across" --> "are broadly applicable across" 7. In figure 1 caption: the statement that the planner will replan for a new plan gives the wrong notion that it replans as soon as the user rejects the recommended action. However, the replanning occurs after the user has performed an action (even though different from the one recommended by IDS). The statement can be modified to "the planner will replan for a new plan for subsequent action suggestions..". 8. In the planning problem definition: the notation for model M is different for the transition function (mathcal notation is not used). 9. In the paragraph below Hypothesis 1: "Specifically, that with the aid" --> "Specifically, we hypothesize that with the aid" 10. In the paragraph above Restaurant Game planner, in the last second line: Given a recommendation, the user can "choose either" to conform... 11. Figure 2 caption has errors: User optimal action conformance and "suboptimal" action avoidance percentages for participants that received the "three types of explanations" from suboptimal IDS systems. 12. I suggest changing the word "condition" to "study condition" throughout the paper.<doc-sep>The paper suggests explaining planning decisions by indicating the sub-goal that the action aims to satisfy. The approach has been implemented, and evaluated in a user-study, comparing both objective performance and participant preferences against baselines (prior approach and no explanation). The evaluation also explores settings where the plan is optimal/sub-optimal. Results show some benefits of the proposed approach. One limitation of the current approach (which is acknowledged in the paper) is that the sub-goals are predefined in the planning domain. An interesting avenue for future work would be to explore sub-goals at different granularity levels, and perhaps allow the user to explore the hierarchy interactively. The paper is well-written, and the work seems both novel and fairly mature. I think this work will make for an interesting discussion in the workshop and would be happy to see it presented.
The paper proposes a novel approach for generating explanations for sub-optimal IDS systems. Both reviewers agree that the methods described are technically sound and the paper is in a fairly mature stage. It would be a valuable addition to the workshop. As you move forward, we suggest you take into account the reviewers’ comments, especially those of reviewer 1 as they highlight some interesting points. Thank you for submitting to the workshop. We are looking forward to your presentation.
The authors introduce two strategies for predicting interatomic potentials. The first one is based on label augmentation. In this case, an auxiliary training is performed to classify the best-performing physics-based empirical interatomic potentials (EIP) for a given atomic configuration. If a given configuration (C) results in a reliable energy E (there is also a label for no good classification), then the pair C-E is appended to the DFT-E training set. This strategy yields a performance boost from 18% to 51%. The second method is based on transfer learning, where a NN is trained using EIP alone and then fine-tuned based on DFT energies. In this case, the improvements are from 18% to 26%. Strengths: The paper proposes a neural network potential based on physics-based EIP and a DFT-energy labelled dataset. The novelty of the manuscript is exploiting this multifidelity data using label augmentation, a novel approach for neural-network potentials. Also, the increase in performance is obtained with little cost in computational load (mainly the auxiliary training). Weaknesses: 1. There is no comment on how this potential generalizes outside the selected species. 2. The methodology is limited to only two single-species materials. 3. The paper is overall well-presented, but it lacks clarity in several aspects (see below for more details). The authors state that their work is limited to single-species materials (Si and Al in this case) and that future work will possibly include multi-species ones. I believe limitations are adequately addressed. <doc-sep>This paper proposes to "inject" the domain knowledge in empirical interatomic potentials (EIPs) into neural networks by using the data generated by EIPs. EIPs are much faster than DFT, and reasonably accurate. However, * multiple EIPs may be applicable (which EIP to trust?) * their accuracy varies in the configuration space (when should we trust EIPs?). Two strategies are presented: **LA**: label augmentation (semi-supervised learning). A classifier is trained to jointly handle the two issues above. * it predicts the best-performing EIP for a given configuration. (which EIP to trust?) * If none is sufficiently accurate, outputs a dummy indicator. (when should we trust EIPs?). This classifier is trained on configurations where DFT data is available (how much can it be generalized to unseen configurations?). This classifier is then used to augment data using the predicted best-performing EIP on each configuration. This builds an augmented dataset consisting of both EIP samples and DFT samples. As EIP labels are less accurate, a Tukey loss is used as it's less sensitive to outliers (as it's capped). MSE is used if the label comes from DFT. **MP**: multi-task pretraining (transfer learning). Instead of only using the label from the predicted best-performing EIP, this strategy uses all EIP labels in pretraining in a multi-task way. Then DFT data is used to finetune the model. The strategies are then demonstrated with two typical model backbones (SOAPNet as descriptors + MLP, and SchNet as xyz + GNN) on two material datasets (KIM-Si for silicon, and ANI-Al for aluminum). Both datasets are energy prediction tasks. Both strategies are shown to be able to reduce MAE, and combining MP and LA can achieve even more improvement. The main contribution is the two strategies proposed and demonstrated to help neural networks with EIP data, which is usually much cheaper than DFT.
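To make the LA objective described above concrete, here is a minimal sketch of the per-sample loss selection (Tukey biweight for the noisier EIP labels, MSE for DFT labels). The function names, threshold constant, and batch format are my own assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def tukey_biweight(residual, c=4.685):
    """Tukey biweight loss: quadratic-like near zero, capped at c**2 / 6 for
    large residuals, so outlier (noisy EIP) labels cannot dominate training."""
    r = np.abs(residual)
    inside = (c ** 2 / 6.0) * (1.0 - (1.0 - (r / c) ** 2) ** 3)
    return np.where(r <= c, inside, c ** 2 / 6.0)

def mixed_label_loss(pred, target, is_dft):
    """Per-sample loss: MSE when the energy label comes from DFT,
    Tukey biweight when it comes from the classifier-selected EIP."""
    residual = pred - target
    return np.mean(np.where(is_dft, residual ** 2, tukey_biweight(residual)))

# toy usage: three samples, the last one labelled by an EIP and clearly noisy
pred = np.array([1.2, -0.3, 0.8])
target = np.array([1.0, -0.1, 5.0])
is_dft = np.array([True, True, False])
print(mixed_label_loss(pred, target, is_dft))
```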
**Strengths**: * They propose two strategies shown to improve neural networks with EIP data, which is usually much cheaper than DFT. * They demonstrate the strategies with both "descriptors + MLP" and "xyz + GNN" models. These two cases are very representative and therefore support the significance of their results. * Several designs are reasonable (e.g., dummy EIP, Tukey loss). **Weaknesses**: * The sensitivity of the performance to the selected set of EIPs (8 for silicon and 10 for aluminum as in the paper) is not clearly addressed. How will the performance change if only 2 or 3 EIPs are selected? What's the trend of model performance as the number of EIPs increases? * The paper could be more sound if the authors compared the performance of the learned models against the EIPs used to build the dataset. This can be evaluated for configurations with DFT labels. Some typos: * line 15, "bolster" => "booster" * line 165, "DFT and EIP energies" => "DFT energies" * see weakness. The sensitivity of the performance to the selected set of EIPs is not clearly addressed. <doc-sep>In this manuscript, the authors proposed to incorporate domain knowledge into machine learning empirical interatomic potentials with two techniques, a weakly supervised learning based on auxiliary classifiers and a pretraining/fine-tuning mode based on transfer learning. Their experiment results have shown a comprehensive outperformance over the baseline methods on systems with a single atomic species. On the strengths side, the presented method attempts to solve an important problem by leveraging unlabeled training instances generated from EIPs. The paper is well written and easy to follow. The main ideas are clearly explained and the empirical evaluation protocols and results are well presented. However, both technical and theoretical contributions are inadequate. The auxiliary task modules and pre-training strategies have been widely applied to GNNs. Albeit effective, these techniques are quite straightforward and lack technical motivation and insights. Furthermore, the evaluation of the experiment is weak. The authors chose two old methods (SOAPNet in 2013 and SchNet in 2017) for validation and cannot show that these strategies are still valid in current SOTA frameworks (e.g., DimeNet++, arXiv:2003.03123 and GemNet, arXiv:2106.08903). The evaluation conducted only on single-species systems also raises serious concerns about its generalization ability. Overall, I think this work may be more appropriate for a journal in computational chemistry rather than a machine learning conference. I did not find any potential negative societal impact. <doc-sep>This paper proposes to improve the prediction performance of the expensively computed material energy from density functional theory (DFT) by making use of the cheaply computed material energy from empirical interatomic potentials (EIP). Two strategies are proposed to use EIP based labels, including label augmentation and multi-task pretraining. Experimental results show that these two strategies can both improve the prediction performance of two models. Strengths: (+) This work gives a very meaningful exploration of improving the neural network model for DFT computation prediction with large amounts of EIP labeled data given that DFT labeled data is limited. The success of the proposed method can motivate researchers working on developing machine learning models for DFT prediction to consider using cheap data sources to improve their models.
(+) The writing and organization of the paper are clear and easy to follow. Weaknesses: (-) In the experiments, SOAPNet and SchNet are used as prediction models. However, they are designed for molecules and are not the best models for material data. It would be better to use existing material property prediction models, such as CGCNN [1], in the experiments. (-) A description of how the label augmentation and multi-task pretraining strategies are combined in the experiments is lacking. [1] Xie, Tian, and Jeffrey C. Grossman. "Crystal graph convolutional neural networks for an accurate and interpretable prediction of material properties." Physical review letters 120.14 (2018): 145301. N/A
This paper proposes two strategies for injecting domain knowledge into neural networks for predicting material properties; these strategies lead to substantial accuracy gains. All reviewers had positive feedback on the paper, and their suggestions helped improve the paper and the experiments. Accept
This paper aims to facilitate feature learning in NN models by exploiting reliable examples more fully. This is very similar to self-paced learning, where the model learns from the easier samples at first and proceeds to learn from difficult and challenging samples. The authors should discuss their differences from self-paced learning. The method is positioned as a general one for feature learning. I do not know why the authors only apply it to object detection on a very specific dataset. It would be useful to see whether the proposed method is also effective for image classification. More datasets for evaluation are needed, even only for the object detection application. <doc-sep>OVERVIEW: The authors tackle the problem of detecting small/low resolution objects in an image. Their key idea is that detecting bigger objects is an easier task and can be used to guide the detection of smaller objects. This is done using the "Feature Intertwiner" which consists of two branches, one for the larger objects (more reliable set that is also easier to detect) and one for the smaller objects (less reliable set). The second branch contains a make-up layer learned during training (which acts as the guidance from the more reliable set) that helps compensate for details needed for detection. The authors define a class buffer that contains representative elements of object features from the reliable set for every category & scale and an intertwiner loss that computes the L2 loss between the features from the less reliable set & the class buffer. They also use an Optimal Transport procedure with a Sinkhorn divergence loss between object features from both sets. The overall loss of the system is now a sum of the detection loss, the intertwiner loss and the optimal transport loss. They evaluate their model on the COCO Object detection challenge showing state-of-the-art performance. They also provide thorough ablation analysis of various design choices. The qualitative result in Fig.1 showing well clustered features for both high & low resolution objects via t-SNE is a nice touch. COMMENTS: Clarity - The paper is well written and easy to follow. Originality & Significance - The paper tackles an important problem and provides a novel solution. Quality - The paper is complete in that it tackles an important problem, provides a novel solution and demonstrates via thorough experiments the improvement achieved using their approach. QUESTIONS: 1. The Class Buffer seems very restricted in having a single element per object category per scale to represent all features. The advantage of forcing such a representation is tight clustering in the feature space. But wouldn't a dictionary approach with multiple elements give more flexibility to the model and learn a richer feature representation at the cost of not-so-good clustering? 2. Any comment on why you drop performance for couch? (and baseball bat + bedroll) 3. In Table 4 of the Appendix, where you compare with more object detection results, I find it interesting that the updated Mask R-CNN results have a much higher AP_S (43.5) compared to you (27.2) and everyone else. I was expecting you to be the best under that metric due to the explicit design for small objects. They (the updated Mask R-CNN results) are also significantly better than the rest under AP_M but worse under AP_L. Can you explain this behavior? Is the ResNeXt backbone that much better for small objects?
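As a rough illustration of the overall objective described in the overview above (detection loss plus an intertwiner L2 loss against the class buffer plus an optimal-transport term), here is a minimal sketch. The buffer layout, loss weights, and the plain entropic-OT stand-in for the Sinkhorn divergence are my assumptions, not the paper's actual implementation.

```python
import numpy as np

def intertwiner_loss(feats_small, labels_small, class_buffer):
    """L2 distance between each less-reliable (small-object) feature and the
    buffer representative of its class; the buffer holds one vector per class."""
    diffs = feats_small - class_buffer[labels_small]                # (N, D)
    return np.mean(np.sum(diffs ** 2, axis=1))

def sinkhorn_ot(x, y, eps=0.1, n_iters=50):
    """Entropy-regularised OT cost between two feature sets (uniform weights);
    a bare-bones stand-in for the Sinkhorn divergence used in the paper."""
    cost = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)    # (N, M)
    K = np.exp(-cost / eps)
    a = np.full(len(x), 1.0 / len(x))
    b = np.full(len(y), 1.0 / len(y))
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    transport = u[:, None] * K * v[None, :]
    return np.sum(transport * cost)

def total_loss(det_loss, feats_small, labels_small, feats_big, class_buffer,
               w_inter=1.0, w_ot=0.1):
    """Overall objective: detection loss + intertwiner loss + OT loss."""
    return (det_loss
            + w_inter * intertwiner_loss(feats_small, labels_small, class_buffer)
            + w_ot * sinkhorn_ot(feats_small, feats_big))
```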
<doc-sep>This paper proposes a novel approach with the hypothesis that reliable features can guide the less reliable ones. This approach is applied to the object detection task and shows consistent performance improvements. pros) (+) This paper is well-written and easy to follow. (+) The base idea of dividing the learned features into two sets (the reliable feature set and the less reliable one) is very interesting and looks novel. Plus, the hypothesis that reliable features can guide the features in the less reliable set is also interesting. (+) The performance improvements are quite large. (+) Extensive ablative studies are provided to support the proposed method well. cons) (-) The method of obtaining the representative in buffer B is not clearly presented. (-) The overall training and inference procedures are not clearly presented. (-) Some notations and descriptions are vague and confusing. (-) More than two datasets are necessary to show the effectiveness of the method. comments) - What is the higher-level feature map P_m, and how did you choose the higher-level feature map at the m-th level in options (b) and (c) in Section 3.3? - What is the meaning of the "past" features in Section 3.2? - It would be better to show the exact architecture of the make-up module and the critic module. - Can this method be applied to other backbones such as VGG or ResNets without FPN? - The sentences at the bottom of p.4 starting with "Note that only~" look ambiguous. - f_critic^j may be the j-th element of F_critic; please state explicitly what f_critic^j stands for. Even though the paper needs to be revised for better readability, I think this paper is above the standard of ICLR because the idea is interesting and novel. Furthermore, the experimental studies are properly designed and well support the main idea. I am leaning toward acceptance, but I would like to see the other reviewers' comments.
The paper proposes an interesting idea (using "reliable" samples to guide the learning of "less reliable" samples). The experimental results and detailed analysis show clear improvement in object detection, especially small objects. On the weak side, the paper seems to focus quite heavily on the object detection problem, and how to divide the data into reliable/less-reliable samples is domain-specific (it makes sense for object detection tasks, but it's unclear how to do this for general scenarios). As the authors promise, it will make more sense to change the title to "Feature Intertwiner for Object Detection" to alleviate such criticisms. Given this said, I think this paper is over the acceptance threshold and would be of interest to many researchers.
Let me note that I have very little expertise in quantization and so cannot really judge the significance of such contributions. I am, however, very familiar with the GNN literature. Summary ------------- A method to train GNNs such that later quantization works well is presented. The authors first analyse the message passing definition to identify those computation steps whose results show the largest variance, and hence suffer most from the imprecision introduced by quantization. Consequently, they focus on the message aggregation phase of message passing. They then propose two improvements to more standard quantization-aware training (QAT): (1) applying quantization during the forward pass only on message aggregation outputs (and doing it more often on nodes that receive many messages); and (2) using percentile-based statistics for determining the ranges of values considered during quantization. Finally, experiments show that the resulting training procedure works well for GNNs on a number of datasets, matching or slightly improving the baseline performances. In most cases, the proposed Degree-Quant method also outperforms baseline QAT methods. Strong/Weak Points ------------- * (+) Empirical results show moderate gains over the baseline QAT methods for int8 quantization, and substantial gains for very coarse quantization to int4. * (+) Thoughtful experimental ablations study the effect of the two improvements separately, and further empirically verify the theoretical analysis of sources of errors. * (-) The paper is not self-contained and hence not easily readable for people without background knowledge in quantization. While GNNs are fully (though very densely) defined in Sect. 2.1, no technical details on quantization are provided in Sect. 2.2. I ended up skimming some of the cited papers to even understand how values are practically mapped between fp32 and int8 (a minimal sketch of such a mapping is included after the reviews below). Consequently, Sect. 3.2 is discussing extensions and alternatives to concepts that are simply not explained in the paper. Recommendation ------------- I think this paper can be accepted and would be useful for the very narrow segment of people interested and knowledgeable in GNNs and quantization. However, in the current form, it is inaccessible to a wider audience and I believe that it could be significantly improved in that regard. Questions ------------- (1) Message aggregation is identified as a key source of quantization error due to the variance in the number of messages. For graph-level tasks (such as MNIST, CIFAR and ZINC), the aggregation of node representations to a graph representation should lead to a similar problem. Do you have a deeper analysis of this aspect? Detail Feedback ------------- * Sect. 3.1, end: the mixing of GCN and GIN is somehow confusing and it would be worthwhile to restructure this. (i.e., $\mathbf{y}_{GIN}^{(i)}$ is defined before the equation it's used in, but $\mathbf{y}_{GCN}^{(i)}$ after, etc.) * Sect 3.2 / Alg. 1: I found the use of "mask" / "masking" here highly confusing, as I associate it with removing a value (as in masking of loss components, dropout masks, hiding a human face behind a cat mask, ...), but here the semantics is inverted: masks determine which values are "more visible" (by not applying the quantization to them). Unless this term is already in standard related use in the quantization literature, I would strongly recommend using a different term here (e.g. "preserved", "protected", ...)
* Fig 5/6 are not readable for colorblind people.<doc-sep>The authors propose a new technique for quantization aware training of neural networks that is specially suited for graph neural networks. They do a good job of motivating the problem by demonstrating that the large variation of input degree in GNNs can lead to unique challenges for numerical precision, forcing a compromise between truncation error and rounding error. The proposed technique incorporates stochastic masking and quantization proportional to the input degree to allow higher input-degree nodes to operate at higher resolution on average. The authors demonstrate strong improvements over quantization aware training that treats all nodes equally, achieving relatively small drops in accuracy for a large compression and speedup of GNN inference. The work is presented in a straightforward and clear manner, with clear applications to important problems. Two small things could improve the paper: * Percentile tracking is a component of the method, but relies on a reference for full explanation. A more precise statement of this part of the method in the paper itself would help clarify it for readers. * Minor nit, but some acronyms are used before they are defined (such as GCN). <doc-sep>(Edit: Sorry, the previous review was for a different paper that ended up in here due to a copy-paste issue) This paper uses quantization and Quantization Aware Training (QAT) to improve the speed performance of GNN inference for three types of GNN models: GIN, GCN and GAT. The paper identifies the aggregation step to be where quantization introduces the most numerical error, and uses stochastic masking and clipping of the top/bottom values to mitigate the issue. This topic is very relevant and interesting, and novel to the best of my knowledge, although I'm not familiar with the literature surrounding quantized neural networks. There are places where the writing can be more careful. For example, in the abstract the authors write: "little research exploring methods to make GNN more efficient at inference time". However, there has been research focusing on both hardware acceleration [1] and making GNN models smaller [2]. Quantization isn't the only approach to make GNN inference faster. Claims like "it is not possible to deploy this technique on smartphones" (from intro paragraph 2) should be supported, since it's difficult for a reader to verify such a claim. Some of the claims, like the one bolded in Table 1, should be in the abstract. I'm not sure if this is typical in the quantization literature, but a wallclock time comparison would be useful in Table 2 to compare the time speedup against the baseline. One other piece of presentation feedback: in figure 1, the x-axis is not continuous. A line chart is not appropriate since the slope of the line segments in the chart is meaningless. Removing the lines connecting the dots would make more sense. [1] Zeng and Prasanna. (2020) GraphACT: Accelerating GCN training on CPU-FPGA heterogeneous platform https://arxiv.org/abs/2001.02498 [2] Yan et al. (2020) TinyGNN: Learning Efficient Graph Neural Networks https://dl.acm.org/doi/abs/10.1145/3394486.3403236
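To supply the quantization background the first review asked for, here is a minimal sketch of a standard affine fp32-to-int8 mapping plus a toy version of degree-proportional protective masking in the spirit of Degree-Quant. The exact protection rule, percentile value, and function names are my assumptions, not the paper's implementation.

```python
import numpy as np

def affine_quantize(x, num_bits=8, pct=99.9):
    """Standard affine quantization: clip to a percentile-based range, then map
    fp32 values to integers via a scale and zero-point (returned dequantized)."""
    lo, hi = np.percentile(x, 100 - pct), np.percentile(x, pct)
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    scale = (hi - lo) / (qmax - qmin)
    zero_point = np.round(qmin - lo / scale)
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax)
    return (q - zero_point) * scale

def degree_quant_forward(aggregated, in_degree, max_degree, rng):
    """Toy Degree-Quant-style step: nodes with higher in-degree are more likely
    to be 'protected' (kept in full precision) during quantization-aware training."""
    protect_prob = in_degree / max_degree
    protected = rng.random(len(aggregated)) < protect_prob
    out = affine_quantize(aggregated)
    out[protected] = aggregated[protected]   # protected nodes bypass quantization
    return out

rng = np.random.default_rng(0)
agg = rng.normal(size=10)
deg = rng.integers(1, 50, size=10)
print(degree_quant_forward(agg, deg, max_degree=50, rng=rng))
```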
The paper presents a quantization aware training method for GNNs. The problem is very well motivated, the method is well-executed, and experiments are also well designed. The paper does seem relatively low on technical novelty. All the reviewers are positive about the paper, and the paper has certainly improved significantly over the rebuttal phase. So, we would like to see the paper accepted at ICLR.
The paper addresses a timely topic: the identification of samples that are out-of-distribution at test time. The approach is evaluated for a segmentation task and delivers promising results. The method is compared to other state-of-the-art OOD methods. 1) As far as I understood (e.g. Fig. 2), the proxy task is learned together with the task of interest (segmentation). I am wondering how this affects the performance of the segmentation network G itself, in comparison to isolated training of the segmentation network. Does the performance degrade when OOD detection is an additional goal? 2) It is known that segmentation networks for LV perform very well on center slices but have problems in the apex and base regions, because such regions look very different from the majority of slices. Have you used 2D or 3D segmentation networks? Is the OOD detection based on 2D or 3D samples? Your OOD detection might be too sensitive towards these regions. Please add additional result descriptions showcasing the performance of OOD detection in center slices vs. apex vs. base. 3) From the description in sect. 4, the evaluation strategy using these 3 data sets is not completely clear to me. It would be helpful to provide an illustration showing the cross-validation and the respective ID and OOD settings. 4) In $L_{ss}^C$, the segmentation loss is not mentioned. 5) The narrative of the paper is easy to follow; however, some statements are confusing and should be changed: - "require no modification in the network architecture or training procedure" (p.2) -> There might be no changes necessary to a segmentation network itself, yes, but the presented approach might degrade the performance of the target task (see point 1). I think changing the loss, such as in Eqn. 2, actually changes the training procedure. - "Self-supervised tasks do not require manual annotations, and so the performance in test samples can be assessed" (p.2): this statement is not clear to me - "Unlike current state-of-the-art, the proposed approach does not require the use of a specific proxy task, or training the model with the explicit goal of OOD detection [...] across three CMR datasets and for two different proxy tasks." (p.2) First you argue that you do not need a proxy task and then you are actually using 2 proxy tasks. I guess you want to highlight that you do not require additional labels for the proxy tasks? Similar statements can be found at the end of the related work. - Fig. 1: From which dataset is the ground truth segmentation? - I recommend using violin plots instead of boxplots for the results. Please indicate mean and median in the violin plots. 6) What is the difference between the M&M data sets and Sunnybrook? Does Sunnybrook data come from a different vendor than the two M&Ms, which would qualify it for such an OOD analysis? <doc-sep>The paper is well-structured and well-written. This suggests its maturity. The literature review is decent and points in the right direction. Methods are clearly explained and easy to understand. Experiments include 3 datasets with different manufacturers and sources. Promising results with potential developments for other domains. The segmentation networks here benefit from two auxiliary losses or self-supervision tasks. One is based on contrastive learning and the other on edge detection. My intuitive understanding is that edge detection provides a template for the segmentation branch to fill in.
The contrastive task, however, makes it robust to certain types of transformations while reinforcing the representational capabilities of the network. Considering the overarching goal of the study, which is detecting OoD examples, the contrastive task makes more sense to me. The other one also seems to work but maybe I like the contrastive one better. Anyway, the results show that the contrastive loss works better too. So, my question is along the lines of contrastive learning. Contrastive learning algorithms typically require large minibatches or memory banks to exploit the similarities between a good amount of data points. These, in practice, can make implementations tricky. However, there is no information regarding these aspects in the current work. Did you implement something new? Do you have an interesting design to share? If you used an existing solution, what was that? Were any modifications or special tricks used in this study? I can see that you have space limitations here. But 1-2 pages in the Appendix could help. In addition, training on images from Vendor A seems to allow for better generalization overall (I also checked Appendix B). What is special about Vendor A? What are the main differences from Vendor B? I am not asking you to give names but some comparison could help us understand what is going on better. For instance, old vs. new machine/technology, image resolution, technical capacities of operators, procedural differences, ... 3-fold cross validation: In general, DNNs exhibit a great deal of diversity in their function estimations due to various factors, such as optimisation trajectories, randomness in data (shuffling and augmentation), etc... Using a larger number of folds, e.g., 10, would allow for better-established results and improve the trust in the findings here. Finally, I need to ask an obvious question. Did you consider using both contrastive and edge detection tasks together? It seems like these could be combined into the same network architecture. These tasks could complement each other, possibly. Can you speculate a bit in this regard? What would happen if this was implemented? Did you have any concerns that led you not to implement it this way? I am just curious. <doc-sep>• The paper is easy to read and interesting • The paper addresses the very relevant topic of OOD sample detection • The authors provide an extensive literature review • The authors’ method description is detailed and mostly clear • The authors provide extensive results with comparisons to multiple other methods • The authors highlight both advantages and limitations of their methods • The authors’ description of data splitting and evaluation (4) is not entirely clear. Which dataset is used in which fold? Which datasets are used for hyperparameter tuning and which datasets are used for reporting results? The current description sounds like validation and test results are averaged, which the authors probably did not do • The authors should be moderate with their claim of novelty. The authors cite the paper Hendrycks 2019, which employs a self-supervised rotation loss that is also used for OOD scoring. I recommend explicitly stating that self-supervised losses have been used for OOD scoring and that the authors adopt this approach for CMR segmentation (see end of introduction). <doc-sep>Interesting use of an edge detector as a proxy task to detect outliers. Methods evaluated on multiple external datasets. ..............................................................................
Presentation unclear. The first time the proposed method is stated in detail is on page 4. Not clear how OoD is defined in the experiments. ..............................................................................
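To make the mechanism discussed across these reviews concrete, here is a minimal sketch of the general idea of using a self-supervised proxy task's loss at test time as an OOD score. The Sobel-based edge target, the stand-in prediction, and the threshold are my assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def sobel_edges(image):
    """Self-supervised target: an edge map computed from the input itself,
    so no manual annotation is needed (the 'edge detection' proxy task)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(image, 1, mode="edge")
    h, w = image.shape
    gx = sum(kx[i, j] * pad[i:i + h, j:j + w] for i in range(3) for j in range(3))
    gy = sum(ky[i, j] * pad[i:i + h, j:j + w] for i in range(3) for j in range(3))
    return np.hypot(gx, gy)

def proxy_ood_score(edge_pred, image):
    """OOD score = proxy-task loss at test time: if the network's edge-map
    prediction disagrees strongly with the self-derived target, flag the sample."""
    target = sobel_edges(image)
    return float(np.mean((edge_pred - target) ** 2))

# usage sketch (edge_pred would come from the proxy head of the segmentation net)
image = np.random.rand(64, 64)
edge_pred = sobel_edges(image) + 0.05 * np.random.randn(64, 64)  # stand-in prediction
score = proxy_ood_score(edge_pred, image)
is_ood = score > 0.1  # threshold chosen on in-distribution validation data (assumption)
print(score, is_ood)
```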
The paper proposes a method to indicate when a test sample differs from those in the training distribution. Thus, by detecting such OOD test cases, the proposed method aims to raise a flag that the learned method cannot be used on such OOD test data. As the reviewers have pointed out, this is an important limitation of current methods and needs addressing. While I am positive about the paper (after the rebuttal stage), some issues raised by the reviewers still remain. The methodology relies on existing methods, combining voxel-level uncertainty estimation with the value of the self-supervision loss. The empirical analysis doesn't employ existing methods for outlier/novelty detection, for which a lot of literature exists (both without and with deep learning). The employed baselines are standard DNN-based methods. For the self-supervised task of edge detection, the method relies on edge locations, which is a very laborious task to obtain.
This exhibit describes the rationale behind converting a PhD thesis to an HTML page, and presents a first-person account of the process. Both reviewers appreciated this idea and found the submission to be "clear". However, reviewers also note that there weren't any references to prior efforts that undertook a similar initiative. Furthermore, one reviewer sees limited value in this as solely an 'exhibit' if the associated code weren't released (or minimally, enough details to reproduce the exhibit weren't provided). Having carefully read through the submission, I agree with the reviewer assessment. I believe that presenting this exhibit at the workshop would a) expose the ML audience to the idea of hosting a thesis as a webpage, and b) allow the author(s) of the exhibit to elicit meaningful feedback in improving their exhibit (particularly from a pedagogy and accessibility standpoint). I strongly encourage the authors to include more details about the process (and code) when preparing the camera ready version. I also urge the authors to voice the scope to which this format is accessible. Further, the eventual submission must also discuss existing packages or efforts to convert papers / LaTeX to web formats. (e.g. https://learning-from-play.github.io/ is a CoRL 2020 paper that was converted to a webpage, https://github.com/latex2html/latex2html is a LaTeX to HTML translator; there may be more efforts that I might have missed out on). Note about double-blind violation: The program chairs deduce that the authors intended to submit an exhibit, but have inadvertently submitted to an incorrect track. In the eventual workshop proceedings, we will list this submission as an "exhibit / workflow".
I really liked this paper and believe it could be useful to many practitioners of NLP, conversational ML and sequential learning who may find themselves somewhat lost in the ever-expanding field of dynamic neural networks. Although the format of the paper is seemingly unusual (it may feel like reading a survey at first), the authors propose a concise and pedagogical presentation of Jordan Networks, LSTM, Neural Stacks and Neural RAMs while drawing connections between these different model families. The cornerstone of the analysis of the paper resides in the taxonomy presented in Figure 5 which, I believe, should be presented on the front page of the paper. The taxonomy is justified by a thorough theoretical analysis which may be found in the appendix. The authors put the taxonomy to use on synthetic and real data sets. Although the data set taxonomy is less novel, it is indeed insightful to go back to a classification of grammatical complexity and structure so as to enable clearer thinking about sequential learning tasks. An analysis of sentiment analysis and question answering tasks is conducted which relates the properties of sequences in those datasets to the neural network taxonomy the authors devised. In each experiment, the choice of NN recommended by the taxonomy gives the best performance among the architectures presented in the taxonomy. Strengths: o) The paper is thorough and the appendix presents all experiments in detail. o) The taxonomy is clearly a novel, valuable contribution. o) The survey aspect of the paper is also a strength as it consolidates the reader's understanding of the families of dynamic NNs under consideration. Weaknesses: o) The taxonomy presented in the paper relies on an analysis of what the architectures can do, not what they can learn. I believe the authors should acknowledge that the presence of Long Range Dependence in sequences is still hard to capture by dynamic neural networks (in particular RNNs) and that alternative analyses have been proposed to understand the impact of the presence of such Long Range Dependence in the data on sequential learning. I believe that mentioning this issue along with older (http://ai.dinfo.unifi.it/paolo/ps/tnn-94-gradient.pdf) and more recent (e.g. http://proceedings.mlr.press/v84/belletti18a/belletti18a.pdf and https://arxiv.org/pdf/1803.00144.pdf) papers on the topic is necessary for the paper to present a holistic view of the matter at hand. o) The arguments given in 5.2 are not the most convincing and could benefit from a more thorough exposition, in particular for the sentiment analysis task. It is not clear enough in my view that it is true that "since the goal is to classify the emotional tone as either 1 or 0, the specific contents of the text are not very important here". One could argue that a single word in a sentence can change its meaning and sentiment. o) The writing could be more polished. As a practitioner using RNNs daily I find this paper exciting as an attempt to conceptualize both data set properties and dynamic neural network families. I believe that the authors should address the shortcomings I think hinder the paper's arguments and exposition of pre-existing work on the analysis of dynamic neural networks.<doc-sep>Summary ========= The paper analyses a taxonomy of memory-based neural networks, in decreasing order of capacity: Neural RAM to Neural Stack, Neural Stack to LSTM and LSTM to vanilla RNN.
The experiments with synthetic and NLP datasets demonstrate the benefits of using models that fit with task types. Comment ======== Overall, the paper is well written and presents an interesting analysis of different memory architectures. However, the contribution is rather limited. The proposed taxonomy is not new. It is a little bit obvious and mentioned before in [1] (unfortunately, this was not cited in the manuscript). The theorems on inclusion relationships are also obvious, and the main contribution of the paper is to show them formally in mathematical form. The experiments on synthetic tasks give some insights into the models’ operations, yet similar analyses can be found in [2, 3]. To verify the models really learn the task, the authors should include tests on unseen sequence lengths. There remain unexplained questions in the NLP tasks, such as why multi-slot memory did not show more advantages in Movie Review and why Neural Stack performed worse than LSTM on bAbI data. Minor potential errors: In Eq. (6), r_{t-1} should be r_t. The LSTM presented in Section 3.2 is not the common one. Normally, there should be an x_t term in Eq. (3) and h_t = g_{o,t} * \tanh(r_t) in Eq. (6). The authors should follow the common LSTM formulas (which may lead to different proofs) or include a reference for their LSTM variant (the standard formulation is sketched after these reviews for comparison). [1] Yogatama et al. Memory Architectures in Recurrent Neural Network Language Models. ICLR’18 [2] Joulin et al. Inferring algorithmic patterns with stack-augmented recurrent nets. NIPS’15 [3] Graves et al. Neural Turing Machines. arXiv preprint arXiv:1410.5401 (2014). <doc-sep>The authors propose a review-style overview of memory systems within neural networks, from simple RNNs to stack-based memory architectures and NTM / MemNet-style architectures. They propose some reductions to imply how one model can be used (or modified) to simulate another. They then make predictions about which types of models should be best on different types of tasks. Unfortunately I did not find the paper particularly well written and the taxonomy was not illuminating for me. I actually felt that, in the endeavor of creating a simple taxonomy, the authors have created confusing simplifications, e.g. "LSTM: state memory and memory of a single external event" to me is misleading as we know an LSTM can compress many external events into its hidden units. Furthermore the taxonomy did not provide me with any new insights or display a prediction that was actually clairvoyant. I.e. it was clear from the outset that a memory network (say) will be much better at bAbI than a stack-augmented neural network. It would be more interesting to me, for example, if the paper could thus formalize why NTMs & DNCs (say) do not outperform LSTMs at language modeling. I found the reductions somewhat shady, e.g. the RAM simulation of a stack is possible, however the model could only learn the proposed reduction if the number of write heads was equal to the number of memory slots, or unless it had O(N) thinking steps per time step, where N is the number of memory slots, so it's not a very realistic reduction. You would never see a memory network, for example, simulating a stack due to the fixed write-one-slot-per-timestep interface. Nit: I'm not sure the authors should be saying they 'developed' four synthetic tasks, when many of these tasks have previously been proposed and published (counting, copy, reverse copy).
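For reference, the standard LSTM update that the second review calls "the common one" is usually written as follows (notation adapted to the review's $g$, $r$, $h$ symbols; this is the textbook formulation and an assumption about what the review has in mind, not necessarily the paper's):

$$
\begin{aligned}
g_{i,t} &= \sigma(W_i x_t + U_i h_{t-1} + b_i), \quad
g_{f,t} = \sigma(W_f x_t + U_f h_{t-1} + b_f), \quad
g_{o,t} = \sigma(W_o x_t + U_o h_{t-1} + b_o),\\
\tilde{r}_t &= \tanh(W_r x_t + U_r h_{t-1} + b_r), \quad
r_t = g_{f,t} \odot r_{t-1} + g_{i,t} \odot \tilde{r}_t, \quad
h_t = g_{o,t} \odot \tanh(r_t).
\end{aligned}
$$

Here $r_t$ plays the role of the cell state; the review's two points are precisely that $x_t$ should appear in the gate and candidate equations and that the output should read $h_t = g_{o,t} \odot \tanh(r_t)$.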
This paper presents a taxonomic study of neural network architectures, focussing on those which seek to map onto different parts of the hierarchy of models of computation (DFAs, PDAs, etc.). The paper splits between defining the taxonomy and comparing its elements on synthetic and "NLP" tasks (in fact, bAbI, which is also synthetic). I'm a fairly biased assessor of this sort of paper, as I generally like this topical area and think there is a need for more work of this nature in our field. I welcome, and believe the CFP calls for, papers like this ("learning representations of outputs or [structured] states", "theoretical issues in deep learning"). However, despite my personal enthusiasm, the reviews tell a different story. The scores for this paper are all over the place, and that's after some attempt at harmonisation! I am satisfied that the authors have had a fair shot at defending their paper and that the reviewers have engaged with the discussion process. I'm afraid the emerging consensus still seems to be in favour of rejection. Despite my own views, I'm not comfortable bumping it up into acceptance territory on the basis of this assessment. Reviewer 1 is the only enthusiastic proponent of the paper, but their statement of support for the paper has done little to sway the others. The arguments by reviewer 3 specifically are quite salient: it is important to seek informative and useful taxonomies of the sort presented in this work, but they must have practical utility. From reading the paper, I share some of this reviewer's concerns: while it is clear to me what use there is in producing studies of the sort presented in this paper, it is not immediately clear what the utility of *this* study is. Would I, practically speaking, be able to make an informed choice as to what model class to attempt for a problem that wouldn't be indistinguishable from common approaches (e.g. "start simple, add complexity")? I am afraid I agree with this reviewer that I would not. My conclusion is that there is not a strong consensus for accepting the paper. While I wouldn't mind seeing this work presented at the conference, due to the competitive nature of the paper selection process, I'm afraid the line must be drawn somewhere. I do look forward to re-reading this paper after the authors have had a chance to improve and expand upon it.
This paper aims to provide a theoretical understanding of contrastive learning, where "similar pairs" of points $x$ and $x^+$ are encouraged to have similar representations through an InfoNCE-inspired objective function. Some prior works show the benefit of learned representations for linearly classifying downstream classes, by making a conditional-independence-like assumption on the similar pairs or positive samples, i.e. $x$ and $x^+$ are (approximately) conditionally independent given the downstream label $y$. This work argues that these assumptions are quite strong for contrastive learning with data augmentations, and aims to show guarantees under the following weaker and more realistic assumption: the supports of the augmentation distributions of different inputs from the same class overlap to form a "connected graph" of inputs within a class, whereas the supports of augmentations of inputs from different classes do not overlap. Lower and upper bounds are shown using this and some other assumptions, connecting the downstream performance of the representation function to the contrastive loss. Some simulation experiments are presented to support some aspects of the theoretical analysis. Using the insights from the analysis, the paper proposes an "Average Confusion Ratio (ACR)" metric that can be used to predict the ranking of downstream performances of different augmentations **using only unlabeled data**. Experimental evidence is provided on CIFAR and STL datasets to verify the efficacy of this metric for some practical augmentations. While there are some interesting aspects in the paper (especially the ACR metric), the theoretical analysis seems to have raised many questions and concerns that I have summarized below (details in main review). - **Soundness of assumptions**: Assumption 4.6, which is crucial, seems questionable and may not be coherent or appropriate to make in this setting. More on this in point (W1) of the main review. - **Deeper dive into theoretical results**: There is a lack of discussion about the (non-)vacuousness of the bounds in the main results, Theorems 4.2 and 4.8, which puts the interpretation and significance of the results in question. More on this and related issues in point (W2) of the main review. - **Comparison to prior work**: The work of HaoChen et al. in particular is not adequately compared to, especially since some of the points being addressed here are covered through a different kind of analysis in that paper. More on this in point (W3) of the main review. **Strengths**: - (S1) The problem being addressed is very relevant. Contrastive learning has enjoyed a lot of empirical success, and various works on theoretical understanding fall short in one of many aspects when it comes to closeness to practice. This paper addresses issues with the theoretical assumptions and results in much prior work. - (S2) Theorem 4.2, which upper bounds the downstream classification loss without conditional independence, is new and interesting. The ACR metric that can select good augmentations using just unlabeled data is also an interesting finding. - (S3) Various parts of the paper are accompanied by experiments (simulations and on standard datasets) to relate the theoretical analysis to practice. - (S4) The paper is clearly written and easy to follow. **Weaknesses**: Here are several concerns about the theoretical assumptions and results that it would help to have addressed by the authors.
- (W1) Assumption 4.6: One of the main concerns is the perfect alignment assumption, which *assumes* that the optimal solution $f^*$ of the NCE loss will satisfy $f^*(x) = f^*(x^+)$ for all positive samples $x$ and $x^+$. This seems like an unnatural assumption to make directly on the optimal solution, and is implicitly an assumption on the distribution of positive samples $p(x, x^+)$, since the optimal unit-norm representation that minimizes the InfoNCE loss depends strongly on this distribution. While some arguments for perfect alignment have been made in prior work [3], it is not clear whether that can be coherently imported here as an assumption. In fact, it is quite likely that the optimal InfoNCE solution will not satisfy this assumption exactly for most joint distributions $p(x, x^+)$ (at the very least, a lot more justification is needed). This benign-looking assumption undercuts the point that results here are shown under "less restrictive assumptions" compared to prior work, and it kind of trivializes the result in Theorem 4.8. **Note that the concern here is not just that the assumption is too strong or unrealistic (which is often unavoidable and acceptable), but that it's not clear when the assumption can even be true and whether or not it is mathematically compatible with the rest of the setting.** - (W2) (Non-)vacuousness of bounds: I found Theorem 4.2 interesting since it can show a bound similar to the bound from [1] but without the conditional independence. One discussion I found missing is about how vacuous/non-vacuous the upper bound can be. Since the upper bound looks like $\mathcal{L}_{NCE}(f) - \log(M/K) + \sqrt{\text{Var}(f(x) | y)}$, it is not entirely clear whether this bound can ever be non-vacuous, i.e. are there cases where the sum of these terms can be very small? For example, in Theorem 4.8 where $\text{Var}(f(x) | y) = 0$, I can estimate a rough lower bound on this upper bound of $\log(M/K + M(1-1/K)/e^2) - \log(M/K) \approx \log(1 + (K-1)/e^2)$, which can be large for a large value of $K$ (here I used $\|f(x)\| = 1$). A discussion about the vacuousness (or not) of the bound can be critical in understanding whether the bound is indeed meaningful. As a side note, given Theorem 4.2 and Proposition 4.7, Theorem 4.8 just seems like a corollary rather than a theorem. - (W3) The result in [2] does not need a conditional-independence-type assumption and in fact does analyze a more general case, albeit for a different spectral version of the contrastive loss. In particular, Assumption 4.1 from this paper will lead to $\alpha=0$ from that paper, and Assumption 4.5 from this paper will lead to a reasonably high value for the Dirichlet conductance $\rho_{K}$ that shows up in their bound. Given that their results for spectral contrastive learning hold for the setting being considered in this paper, it is worth making a more detailed comparison to that paper. **Other comments and questions** - Section 5.1 seems to have a potentially interesting hypersphere example to demonstrate many of the points, but I thought it was not discussed enough in the main paper. It would help to give a short and clear summary of the results in Section B in the main paper. - Some statements made in the paper deserve much more justification or could be toned down. E.g. "the class collision terms that are incompressible in Saunshi et al.
(2019) now disappear in our bounds by adopting the InfoNCE loss, which also explains why InfoNCE performs better in practice": this does not really seem like an explanation for why InfoNCE performs better in practice, it is a weak justification at best. "increasing M indeed leads to a lower approximation error and helps close the gap" this is not clear since $\\mathcal{L}_{NCE}(f)$ also depends on $M$. - The setting for Proposition 3.1 is not described clearly, with regards to what kind of augmentation distributions (overlapping or not) does it hold for. I can only guess that it is for the case where they don't overlap for any pair of inputs, so it is not applicable when Assumption 4.5 is satisfied for example. Some clarification on this would be appreciated. - Assumption 4.1 says that the conditional label distribution $p(y|x) = p(y|x^+)$ matches for positive samples $x$ and $x^+$. However this assumption is invoked in many places to say that inputs from different classes do not have overlapping support of augmentations and that the label is deterministic given $x$ or $x^+$, e.g. "Besides, because proper data augmentation will not cause inter-class support overlap (Assumption 4.1)" on page 6. Perhaps this assumption needs to be modified appropriately, or may be a separate assumption is needed about augmentation distributions not overlapping between inputs from different classes. - The ACR metric makes sense and it is interesting that it helps in practice. but connection to theory is weaker than it is made out to be. After all the theory only talks about the overlap between augmentation distributions, but nothing about the nearest neighbors w.r.t. randomly initialized network features or the learned features. - Will help to explain what $f_j(x)$ means in Theorem 4.2; seems like it means that $j^{th}$ coordinate of $f(x)$. - Proposition 4.7 should only be true for $f^*$ and not all $f$. - Is "chaos" used as a technical term? If so, any citation for its prior usage would be useful to include. - Missing citations: [4,5] theoretically analyze contrastive learning for downstream task, [6] reports inverted-U shaped curves as in Figures 7 and 8 in this paper. [1] Arora et al., A theoretical analysis of contrastive unsupervised representation learning. 2019. [2] HaoChen et al., Provable guarantees for self-supervised deep learning with spectral contrastive loss. 2021. [3] Wang et al., Understanding contrastive representation learning through alignment and uniformity on the hypersphere. 2020. [4] Tosh et al., Contrastive estimation reveals topic posterior information to linear models. 2020. [5] Tosh et al., Contrastive learning, multi-view redundancy, and linear models. 2020. [6] Tian et al., What Makes for Good Views for Contrastive Learning? 2020. The paper aims to provide some theoretical analysis for contrastive representation learning under weaker assumptions than prior work (like conditional independence) and has some interesting empirical findings about how performance of augmentations can be ranked using a metric that depends just on unlabeled data. While the general idea is nice, there are issues with the theoretical setup (as described in the main review), raising questions about the meaningful-ness of the assumptions and results. Furthermore the comparison to prior very relevant work is also inadequate. This leads me to assign a score of reject for the current version. 
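To spell out the rough estimate referred to in (W2), here is my own back-of-the-envelope reconstruction (assuming unit-norm features, perfect intra-class alignment of the optimal $f^*$, $K$ balanced classes, and $M$ negatives drawn from the marginal; none of this is quoted from the paper):

```latex
\begin{aligned}
\mathcal{L}_{NCE}(f^*)
  &= \mathbb{E}\left[-\log\frac{e^{f^*(x)^\top f^*(x^+)}}
       {\sum_{i=1}^{M} e^{f^*(x)^\top f^*(x_i^-)}}\right]
   \;\gtrsim\; \log\!\Big(\tfrac{M}{K} + M\big(1-\tfrac{1}{K}\big)e^{-2}\Big), \\
\mathcal{L}_{NCE}(f^*) - \log\tfrac{M}{K}
  &\;\gtrsim\; \log\!\Big(1 + \tfrac{K-1}{e^{2}}\Big),
\end{aligned}
```

since the numerator is $e^{+1}$, the roughly $M/K$ same-class negatives each contribute $e^{+1}$ to the denominator, and the $M(1-1/K)$ cross-class negatives each contribute at least $e^{-1}$. The right-hand side grows roughly like $\log K$, which is the sense in which the Theorem 4.8 upper bound can stay far from zero when the number of classes is large.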
<doc-sep>The current leading theory of what contrastive losses are doing and why they work interprets contrastive learning as balancing alignment with uniformity, as proposed in [2]. This paper seeks to augment that understanding of contrastive learning using a new perspective, focusing on the role of data augmentation. It is well-known that contrastive learning techniques are highly sensitive to the data augmentation schemes used, most notably discussed in [1]. In this work, the authors interpret augmentation as a way to connect different intra-class images together. Then, the contrastive loss is seen as a way to gradually cluster intra-class samples together by aligning augmented views, producing representations that are class-separated even in feature space. On top of introducing a new lens with which to understand contrastive learning, the authors also provide proofs on performance guarantees, as well as a new evaluation metric. The metric is inspired by their augmentation-oriented understanding, and was also found to align well with downstream performance. The authors provide a scenario where alignment and uniformity are satisfied, but fails to translate well to downstream classification accuracy. This suggests to them that the instance discrimination task alone cannot guarantee the learning of class-discriminative features that would enable better downstream classification, and directs their attention to the other important component of contrastive-learning to help explain the story: augmentation. They then build off the analytical work of [3] to prove guarantees for the downstream performance with a relaxed assumption. [1] Chen et al., A Simple Framework for Contrastive Learning of Visual Representations, 2021. [2] Wang and Isola., Understanding Contrastive Representation Learning through Alignment and Uniformity on the Hypersphere, 2020. [3] Saunshi et al., A theoretical analysis of contrastive unsupervised representation learning, 2019. In this exploration of data augmentation, much emphasis has been placed in the concept of "augmentation strength" - but what about the choices of augmentations themselves? Can we perhaps use the ARC metric to evaluate, compare, and select data augmentation schemes themselves? Separately, can the ARC metric be used to guide the selection of data augmentation parameters? For example, for an arbitrary given augmentation scheme we can calculate the parameters that would maximize the ARC metric in an unsupervised way - then would applying those augmentations lead to comparable performance across different *choices* of augmentation strategies? In other words, is the ARC metric a strong-enough metric that supercedes the selection of data augmentation strategies? I would like to see more thought, analysis, and application regarding this new metric to fully convince me of its value and uses. Additionally, to bridge the synthetic scenario and real data, I would like to see an augmentation graph of real augmented images, drawn with T-connections (where T can even be 1), and perhaps varied over different strength parameters. I think there is definitely a gap between the authors' theoretical proposals/scenarios and that of actual natural data that can be closed with extra effort. For example, the authors only mention one augmentation scheme to measure augmentation strength in real-world datasets, the RandomResizedCrop operator, and only evaluate it using their proposed metric. 
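To make the kind of unsupervised selection loop I am asking about concrete, here is a minimal sketch (the `encode` and `augment` callables and the exact scoring rule are placeholders of my own, not the paper's API; the score simply measures how often an augmented view's nearest neighbour in feature space is the other view of the same source image, which is one way to instantiate an ACR/ARC-style quantity):

```python
import numpy as np

def positive_pair_nn_ratio(feats_a, feats_b):
    """Fraction of rows in feats_a whose nearest neighbour (by cosine
    similarity) in feats_b is the view of the same source image, i.e.
    the row with the same index.  Inputs: (n, d) L2-normalised arrays."""
    sims = feats_a @ feats_b.T                    # (n, n) similarities
    nearest = sims.argmax(axis=1)                 # nearest neighbour per row
    return float((nearest == np.arange(len(feats_a))).mean())

def rank_strengths(encode, augment, images, strengths, rng):
    """Score each candidate augmentation strength using unlabeled data only.
    `encode` maps a batch of images to features; `augment` produces one
    random view per image at the given strength.  Both are user-supplied."""
    scores = {}
    for s in strengths:
        a = encode(augment(images, strength=s, rng=rng))
        b = encode(augment(images, strength=s, rng=rng))
        a = a / np.linalg.norm(a, axis=1, keepdims=True)
        b = b / np.linalg.norm(b, axis=1, keepdims=True)
        scores[s] = positive_pair_nn_ratio(a, b)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

Whether ranking by such a score (or by its change during training, as the paper does) is stable across different *families* of augmentations, rather than just across strengths of a single one, is exactly the question above.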
Lastly, the reference section appears rather sparse, given the massive catalogue of work (including theoretical work) surrounding contrastive learning. Some typos: - "alone cannot guarantee to learn class-discriminative..." should be "alone cannot guarantee the learning of class-discriminative..." - "Comparing to Saunshi..., while ours only..." should be "Compared to Saunshi..., ours only" (page 6, Section 4.2). - "and the surrogate could complete its mission..." should be "and the surrogate can complete its mission..." (page 7, Section 4.3) - "different augmentation strength affects" should be "different augmentation strengths affect" (page 9, Section 6). - "We take 500 sample as...For the encoder class , we..." should be "We take 500 samples as...For the encoder class, we" (page 16, Section D.1) The authors expand our understanding of contrastive learning on top of the existing alignment and uniformity perspective, by studying the role of data augmentation. They provide theoretical guarantees on downstream performance, and propose an interesting new metric that can be evaluated using only the given unsupervised data. Overall I think this is a strong submission, and would recommend an accept. <doc-sep>The paper proposes a new theory for understanding contrastive representation learning. The novelty is the focus on the interplay between alignment and augmentation. Prior work has identified alignment as one of the factors of contrastive learning, but has not investigated how different types of augmentations may affect the learned embeddings. This work adds that missing piece. The results intuitively make sense, showing that a proper amount of augmentation (one that connects samples of the same class) has a positive effect on downstream classification. Empirically, the authors verify that too weak or too strong augmentation harms performance. Based on these observations, the authors define a metric on the ratio of positive pairs among nearest (embedding) neighbors, and found that the change of this metric throughout training positively correlates with performance. Strengths: + Theory considering both augmentation and alignment, without making too many assumptions. + Empirical verification of the niceness of a proper amount of augmentation. + The ACR and ARC metrics characterize the interplay between augmentation and alignment, and are indicative of task performance. Weaknesses: + The theoretical results are a bit weak. E.g., as pointed out in the paper, Thm 4.8 only talks about the minimizer of the contrastive loss. Maybe this is unavoidable with the current set of augmentations. But can there be a version with the perfect alignment assumption relaxed into approximate alignment? If so, it might be possible to talk about non-minimizers. + Proposition 3.1 is incorrect (but fixable, I think). No finite set of samples can attain uniformity, because perfect alignment $\\implies$ features are concentrated among a finite number of vectors $\\implies$ not a uniform distribution. The exact stated form is wrong, but I think some variant of it is true. + Figure 6. What is the experiment setting for this? + Sec. 5.1 "... And when $r$ is too large ($r=3$), ... " Is $r$ the geodesic distance on the sphere or the Euclidean distance in the ambient space? In either case, it is really large... (almost) containing the entire sphere! Is there not a milder augmentation that can also show the difference? The paper provides a theoretical analysis of the interplay between alignment and augmentations.
Empirical experiments nicely complement the theory, and lead to interesting metrics that reveal the properties of this interplay. Overall, the paper is also nicely written. While there is one slightly incorrect claim (which I think is fixable) and some places that would need clarification, I think the findings in this paper are valuable to the field. Thus, I recommend acceptance. <doc-sep>The authors provided a new understanding of contrastive learning from the perspective of data augmentation for intra-class samples. In particular, the authors proposed to understand the role of data augmentation as creating a certain ``chaos'' between intra-class samples so as to encourage the clustering of intra-class samples and also the learning of class-separated representations. Additionally, a new metric, ARC, is proposed to evaluate the downstream performance. The conclusion is validated via both synthetic and real-world datasets. Strengths: The authors provided a new understanding of contrastive learning from the perspective of data augmentation for intra-class samples. Moreover, to evaluate the effect of data augmentation, a quantitative analysis is provided along with a new metric. Weaknesses: - Theorem 4.2: For the downstream classification, the loss is upper and lower bounded in terms of the L_NCE loss. The authors provided a comparison with Saunshi et al. (2019) from the technical perspective. Is there any intuitive explanation of how to evaluate the classification performance in terms of the contrastive learning (loss)? - Assumption 4.5 (intra-class connectivity): This assumption is strong. Without the label information, it seems impossible to derive such an augmentation set. Please add discussion on the practicality of this assumption, and show an example on some datasets if possible. - Proposition 4.7: Based on the proof provided in the appendix, the conclusion relies not only on the existence of such an augmentation set (Assumption 4.5), but also on such an augmentation being applied to intra-class samples, i.e., t_i(x_i) = t_j(x_j). This kind of operation is impractical without the label information. Please comment on that. - In the experiments, RandomResizedCrop is used to illustrate the relationship between Aug Strength and ACC (ARC). The best performance for the different datasets is all achieved at Aug Strength = 0.92. Any comments on that, e.g., in terms of data augmentation for intra-class samples at Aug Strength = 0.92? - In practice, there are different kinds of data augmentation, e.g., flipping, rotation, and scaling. The authors only showed results on RandomResizedCrop. Can you show results for other data augmentation types? Do you reach a similar conclusion as for RandomResizedCrop? - Different data augmentation types are often used together in practice (e.g., randomly pick two augmentations from the augmentation set for the raw image). Then how should the proposed analysis be applied in such a practical case? In particular, how should the Aug Strength be measured? - The authors emphasized the importance of the data augmentation design for intra-class samples (i.e., perfect overlapping). 1) The study on applying the analysis to existing contrastive learning algorithms is, however, preliminary (only with RandomResizedCrop). 2) Based on the proposed analysis, how to find the sweet spot of data augmentation for contrastive learning is crucial, but this is not discussed. The idea of understanding contrastive learning from the perspective of data augmentation for intra-class samples is interesting.
However, 1) some key assumptions for the analysis are too strong; 2) the analysis of the existing contrastive learning algorithms is preliminary and needs more work; and 3) the authors emphasized the importance of finding the sweet spot of data augmentation (i.e., perfect overlapping), but how to achieve that in practice is not discussed.
The paper under review provides a theoretical analysis for contrastive representation learning. The paper proposes a guarantee on the performance (specifically upper and lower bounds) without resorting to previously used conditional independence assumptions. Throughout, the theoretical results and assumptions are supported by experiments. After a lively discussion, and after changes made to the paper in the revision stage, all four reviewers recommend this paper for acceptance. - Reviewer tWSB appreciates that the paper makes weaker assumptions than prior work (i.e., not assuming conditional independence), but raises a number of serious concerns on the theoretical results: the review questions whether assumption 4.6 used in the theory can be true, and whether the bound is vacuous. The authors argue that this assumption was used in prior work, point out that only some of their results rely on this assumption, and that the assumption is compatible with the theory. The response of the authors partly resolved the reviewer's concerns and the reviewer raised their score. - Reviewer bTLa finds the idea of understanding contrastive learning for intra-class samples interesting, but finds some key assumptions too strong, a critique similar to that raised by reviewer tWSB. The authors responded and the reviewer increased their score, and mentioned that most concerns were addressed. The response partially resolved the reviewer's concerns, and the reviewer now also recommends acceptance. I recommend accepting the paper. Understanding contrastive learning better is an important problem, and based on my own reading, I agree with the reviewers that the paper contributes to the understanding of contrastive learning. Two reviewers had concerns about unrealistic assumptions, but those have been largely resolved in the discussion.
#### Summary: In this paper, the authors propose to replace commonly-used shooting-based methods for action sequence planning in learned latent-space dynamics models by a collocation-based method. They argue that shooting-based methods exhibit problematic behavior especially for sparse-reward and long-horizon tasks, as shooting methods do not allow for planning trajectories which (slightly) violate the learned dynamics. The authors propose a collocation method based on Levenberg-Marquardt optimization with a scheduled Lagrange multiplier, which outperforms two shooting methods (CEM and gradient-based) on a set of robotic tasks. #### Pros: - The paper is clearly written and the experiments demonstrated improved performance over CEM and gradient-descent optimization of actions. #### Weaknesses: - The experiments are limited to sparse-reward tasks; it may be interesting to compare the performance of LatCo and CEM on DeepMind Control Suite tasks (same as PlaNet), and also to see how LatCo performs on dense-reward tasks. - It is unclear why collocation should find goals better than CEM or gradient descent for sparse rewards. If the reward function network learns this sparse reward, there is no meaningful gradient towards the goal for an optimization-based method. CEM seems to have a better chance of finding the goal due to the randomization of actions. If no reward shaping has been used, why is the reward learned by the PlaNet network useful for collocation? - The conclusion claims that the approach would be "removing the need for reward shaping"; however, the task is simplified by the oracle agent for training data collection, which uses reward shaping. The manual interaction is shifted from reward shaping to training data augmentation. Please clarify. #### Recommendation: The main concern about the paper is that optimization-based collocation might not be appropriate for the sparse-reward case for a method that learns to predict rewards for states. Hence the experimental results are questionable. The rebuttal should carefully address this issue. The idea is evaluated in a sufficient range of experiments, although further experiments on standardized benchmarks (DeepMind Control Suite) would significantly improve the paper. The points raised in the weaknesses above should be addressed. #### Questions for rebuttal: - See "weaknesses". - Why not use gradient descent to update the Lagrange multipliers? - What is the role of $\\epsilon$ in the Lagrangian in algorithm 1 / l5? - How do the terms in the Lagrangian relate to the residual terms? Especially, why does the quadratic action objective in the Lagrangian relate to the residual $\\max(0, |a_t| - a_\\mathrm{max})$? - In 6.3, you write "To provide a fair comparison that isolates the effects of different planning methods, we use the same dynamics model architecture for all agents". Is it only the same architecture, or the same dynamics model (at least for the models trained only on the oracle data)? - What is the task in Sec. 6.4 used to generate the plots in Fig. 5? - Why do the returns become negative if the reward is sparse and positive? #### Further comments: - Rename $\\lambda_t$ in eq. 6 to $\\lambda_t^\\mathrm{dyn}$, to match l5 of algorithm 1 - What is the value of $\\lambda_t^\\mathrm{act}$? - "For the reward objective, we found it convenient to map the reward to the negative part of the real line with the softplus operation" sounds confusing to me; I associate negative numbers with the negative part of the real line.
Maybe phrase it like "For the reward objective, we form residuals by squashing the negated reward through a softplus function". - Algorithm 1: $T_\\mathrm{rep}$ is not defined - Algorithm 1 / l13: The ELBO is maximized -> gradient *ascent* (with some learning rate) $\\theta := \\theta + \\alpha \\nabla ...$ - \\emph{} seems to give underlined instead of italic characters (see the references section); this is probably not intended - Please plot Lagrange multiplier values in Fig 5 #### Post-rebuttal comments - The paper should further elaborate on the smooth reward predictions and how online learning in the sparse reward setting can be possible with LatCo. It seems the method requires a specific initialization/implementation of the reward predictor, for instance, to overestimate rewards so that the method has to explore the areas where reward is overestimated and pull down the predicted reward. The paper should explain how this was implemented. This kind of exploration would be prone to the curse of dimensionality if the state representation of the environment is high-dimensional. The authors should discuss this limitation thoroughly. This might also explain why the tasks in the experiments are limited to 2-dimensional states. - I wonder about the discretization of the colors in Fig 8. Higher quantization of color should be provided so gradients of the reward landscape can be assessed. - The paper still does not detail the update rule for \\lambda_act. Overall, the author response has addressed some of my technical concerns, but the main challenges are only addressed partially. The paper is still borderline and might need another thorough round of improvement and resubmission to another venue. <doc-sep>## summary The paper proposes to transpose collocation methods to solve planning problems in a learned latent state space. This can then be used as a replacement for shooting methods in model-based RL, particularly suitable for image-based tasks, where planning in the observation space is impractical. ## pros - Basic shooting methods are a primitive planning technique; we should be able to do much better. Using collocation methods in learned latent state spaces makes sense. This paper is one of the first to provide a working realization of this. ## cons - The problem is only difficult because of the attempt to learn the task directly from visual inputs. From a practical robotics and planning perspective, the task problems are very dated, e.g., from 30 years ago. In this sense, the tasks are "straw man" problems that are uninspiring. - Shooting methods provide exploration that the gradient-driven collocation methods do not allow for. The tradeoffs are not as simple as portrayed. ## recommendations I currently lean marginally in favor of acceptance, purely on the grounds that transposing collocation methods to latent spaces does have future potential. However, the given examples are uninteresting. ## questions - How would the results compare to simply using the latent state to estimate a traditional compact state descriptor and then using that with a classical motion planner? For the given example tasks, that seems very feasible. - Can planning methods like CHOMP also be realized in the latent space? What are the general constraints or restrictions, if any, on transposing the many known planning methods into the latent space? - What is the impact of choosing a time horizon T that is too short or too long? - What is stochastic about the dynamics, if anything, for the chosen experimental tasks?
- What is the action space for the given tasks? What is a-max for the tasks? ## feedback The output is a trajectory, not a policy. To make it actionable would require using the optimized trajectories to learn a policy or to use MPC. This aspect is missing from the paper. Similarly, the exploration issue is avoided (cf sec 6.1). Thus, overall, the paper is not really solving an RL problem. The title could more directly address the contribution, i.e., motion planning via latent-space collocation. "To this, collocation methods" (sic) Figure 2: the text refers to a decoder, but this is missing in the figure. The dynamics model is left unlabeled. It is worthwhile briefly discussing the broader space of collocation methods, and where your method fits within that taxonomy. Section 5, Constrained optimization: "balance between the strength of the dynamics constraint." missing: "and the objective" ? <doc-sep>## Paper summary This paper introduces a vision-based motion planning approach using collocation. Many existing approaches to vision-based control rely on computationally expensive planning approaches using shooting to perform model-based control, which is often only useful in simple control tasks. Collocation approaches are effective in settings with difficult path constraints, and thus exploited by this work to dramatically improve model-based reinforcement learning. I like the idea, but it is a relatively small extension to existing work, so I am inclined to rate this paper as marginally below the acceptance threshold. I would be willing to revise my score if the paper was revised to - better clarify the algorithm to align with the methods used in experiments - better justify the reasons why ilQR trajectory optimisation with locally linear dynamics models was not used as a baseline (or even better, include this as a baseline) ### Pros - The paper is well written and clearly laid out. - Solving a collocation problem in the latent space is a sensible approach, and a much better idea than using CEM planning or shooting. ### Cons - It's a reasonably straightforward application of collocation in a learned latent space. While I have not seen this done previously, it is a relatively obvious improvement. - The paper motivates the need for collocation in the context of *long horizon tasks*, where shooting performs poorly. However, none of the tasks (pushing and reaching in free space) considered in this work are long horizon tasks, or particularly challenging. ### General recommendations for improvement and queries - I'd recommend replacing the term *long horizon tasks* with something more suitable, along the lines of what is actually demonstrated in the experimental results, eg. *vision-based motion planning*. - Page 2 - Latent Planning. The paper mentions work on structured latent dynamical systems (Watter et al. 15), but disregards these "*However, these approaches relied on locally-linear predictive models, which may be difficult to design.*" No design is required for latent dynamical systems with local linear latent dynamics (eg. Watter et al. 15, [Fraccaro et al. 17](https://arxiv.org/pdf/1710.05741.pdf)) - all transition matrices and parameters are learned, using a slightly different ELBO. The benefit of this approach is that it allows for standard trajectory optimisation approaches like iLQR to be applied directly. 
I would like to see a comparison against trajectory optimisation using a dynamical system with learned locally linear models, which arguably allows for simpler planning and control. - Along the lines above, there is a recent body of work looking at imposing more structure in the latent dynamical system to simplify and improve downstream control (eg. embedding for proportionality - [Jaques et al.](https://arxiv.org/abs/2006.01959), koopman embeddings for open loop control with QP [Li et al.](https://openreview.net/forum?id=H1ldzA4tPr) In contrast, this work seems to advocate the opposite approach - ignoring the latent dynamical system learned, and focusing on better methods to solve a more challenging optimisation problem. I believe that more discussion on the contrasts between these ideas would be a useful addition to this paper. - Algorithm 1. The algorithm and training approaches lack clarity and cause some confusion, which needs to be improved. The algorithm seems to indicate that dynamics model learning and planning happen jointly, which doesn't really make sense - we shouldn't need to re-learn a dynamics model at planning time. Unless the intention was to imply that this is an online learning approach? I assume that this is not the case, as experimental methods seem to indicate that dynamics and reward models are pre-trained, separately from trajectory optimisation using collocation. Please clarify, and ensure that the methodology lines up with what was demonstrated in the experiments section. <doc-sep>Summary: The paper studies the problem of planning in domains with sparse rewards where observations are in the form of images. It focuses on solving this problem using model-based RL with emphasis on better trajectory optimization. The proposed solution uses latent models to extract latent representations of the planning problem that is optimized using the Levenberg-Marquardt algorithm (over a horizon). The experimental results show improvements over a) zeroth-order CEM optimization, b) PlaNet (Hafner et al., 2019) and c) gradient-based method that optimizes the objective in Eq. 1. Strengths: i) The motivation, organization and the overall writing of the paper are clear. ii) The tested experimental domains are good representatives of the realistic planning setting identified in the paper. Weaknesses: i) Discussion of literature on planning in latent spaces [1,2,3,4,5] is left out and should be included. Namely, [1,2] performs (classical) planning from images, and [3,4,5] perform planning with learned neural models. Here, space can be saved by removing Figure 4 since all of its subfigures look identical given their (visual) quality. ii) Have you tried solving Eq. 2. directly similar to [4]? It seems more appropriate baseline compared to c) (i.e., as labeled above). iii) How do you reason about the length of the horizon T? For example [1,2] use heuristic search. iv) There does not seem to be any presentation of hyperparameter selection/optimization, runtime results or quality of solutions. Table 1 is too high-level to provide any meaningful insight into understanding how each method compares. Similarly, Figure 5 is very hard to read and not clear what each axis represents. Overall, I would say this is the weakest part of the paper. References: [1] Classical Planning in Deep Latent Space: Bridging the Subsymbolic-Symbolic Boundary, Asai and Fukunaga AAAI-18. 
[2] Learning Neural-Symbolic Descriptive Planning Models via Cube-Space Priors: The Voyage Home (to STRIPS), Asai and Muise, IJCAI-20. [3] Nonlinear Hybrid Planning with Deep Net Learned Transition Models and Mixed-Integer Linear Programming, Say et al., IJCAI-17. [4] Scalable Planning with Deep Neural Network Learned Transition Models, Wu et al., JAIR. [5] Optimal Control Via Neural Networks: A Convex Approach, Chen et al., ICLR 2019. ** Post Rebuttal ** To the best of my understanding, the authors have addressed all my questions and suggestions with the appropriate revision of their paper. Specifically, the necessary discussion of hyperparameter selection has been added, the presentation of the runtime and solution-quality results (i.e., raised in point iv)) has been improved with the inclusion of important details, additional discussion of related work has been added (i.e., raised in point i)), and the questions are addressed (i.e., raised in points ii) and iii)). As such, I have updated my rating accordingly.
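Since several of the reviews above turn on the shooting-versus-collocation distinction, here is a minimal schematic of the two objectives in a learned latent space (the `dynamics` and `reward` callables, the tensor shapes, and the bare penalty weight `lam` are placeholder assumptions of mine; the paper itself uses a Levenberg-Marquardt solver with scheduled Lagrange multipliers rather than this plain form):

```python
import torch

def shooting_objective(actions, z0, dynamics, reward):
    """Shooting: only the action sequence is optimized; latent states are
    obtained by rolling the learned dynamics forward, so the trajectory
    always satisfies the (possibly inaccurate) model exactly."""
    z, total_reward = z0, 0.0
    for a in actions:                         # actions: (T, action_dim)
        z = dynamics(z, a)
        total_reward = total_reward + reward(z)
    return -total_reward                      # minimize negative return

def collocation_objective(states, actions, z0, dynamics, reward, lam):
    """Collocation: states and actions are both decision variables, and the
    learned dynamics only enters as a soft constraint, so the optimizer may
    temporarily plan trajectories that slightly violate the model."""
    prev = torch.cat([z0.unsqueeze(0), states[:-1]], dim=0)  # (T, latent_dim)
    dyn_residual = ((states - dynamics(prev, actions)) ** 2).sum()
    return -reward(states).sum() + lam * dyn_residual
```

The open question raised above is then how `lam` (or its scheduled Lagrange-multiplier counterpart) should behave early in model-based RL, when the dynamics model is still poor.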
This work applies collocation, a well-known trajectory optimization technique, to the problem of planning in learned visual latent spaces. Evaluations show that collocation-based optimization outperforms shooting via CEM (PlaNet) and shooting via gradient descent. Pros: - I agree with the reviewers that this idea makes sense, and will very likely be built on in future work. - The authors have very actively addressed most comments of all reviewers that engaged in the discussion. Cons: - I agree with the reviewers that this is a very simple and straightforward application of collocation methods to the visual latent space domain. Furthermore, the chosen tasks are fairly simplistic; Meta-World has a variety of tasks, most of which are more complex than the reaching and pushing tasks that were chosen for this manuscript. - Even with all the updates, the evaluation is still very shallow. I agree with the reviewers that obtaining results for both settings is important: a) visual MPC with a pre-trained (or even ground-truth) dynamics model, and b) the model-based RL setting, in which the model is being learned. While the authors have added some of these experiments, a detailed discussion of how the results change from a) to b) is missing. Furthermore, when using collocation in this MBRL setting, how should dynamics constraints be enforced (should they even be enforced when the model is still really bad?). How does the comparison between collocation and shooting fare when you use dense/shaped rewards for the Sawyer tasks? Many questions come to mind, some of which have been raised by the reviewers, and my main point is that a simple idea + an in-depth analysis of some of these questions would have created a stronger contribution. - Alternatively, real-system experiments would have increased the significance of this work. - I don't see any direct references to gradient-based visual latent-space planning (shooting), but related work on this does exist. In my opinion, a simple, straightforward idea is no reason to reject a paper. However, currently, the reader does not learn when collocation should be considered over other trajectory optimization methods when attempting to plan in a learned visual latent space, or what some of the main remaining challenges are. Because of this, I lean towards recommending reject, and would encourage the authors to deepen their analysis of collocation in visual latent space.
The paper performs an exhaustive empirical study to propose model patching, where the goal is to improve accuracy for open-vocabulary models on specific tasks (i.e., patching tasks) without degrading accuracy on tasks where performance is already adequate. Model patching refers to interpolating model weights between a finetuned model and the original model. Among other experiments, the paper shows results of model patching on nine tasks where zero-shot CLIP performs poorly, and obtains improvements of 15-60 percentage points while not losing performance on ImageNet itself. The paper also talks about broad transfer across multiple tasks, and notes that the proposed approach becomes more effective with increasing scale of datasets. Strengths: + The methodology tested and proposed is very simple and easy to implement. + The results are interesting, and of broad relevance to the community, especially to those in large-scale ML practice. + The results are comprehensive, and cover experiments across various settings, including ones such as typographic attacks, counting, and visual question-answering. + The Appendix is loaded with even more results, which makes this an elaborate empirical effort. Weaknesses: - The primary weakness of the paper is the limitation mentioned in the paper itself (L315-316): "...our method provides no guarantees on which data the model performance might change...". This limits the robustness of the takeaways from the paper, and where one may be able to use them, especially considering the contributions are largely empirical. - Considering \\alpha is the key hyperparameter for the interpolation, it would have been nice to see how the performance changes with different \\alphas. This, to me, is another weakness of the paper: amid the large number of results, it lacks a clearer perspective on how a reader can take away lessons that can be used in practice (especially considering practice is the focus of this work). - In continuation of the above comment, there is no evident trend in the results (or a summary of one in the paper) to see when this method works best. - Considering the large number of results presented in the work (including the Appendix), the paper definitely needs a summary or discussion summarizing the lessons and takeaways for the work to be useful to the readers. It otherwise becomes the reader's burden to sift through and find the takeaways. - The paper doesn't compare with other papers that linearly interpolate neural network weights, as mentioned in the baselines section. Please see the weaknesses listed above. <doc-sep>In this submission, the authors attempt to improve the CLIP model on the tasks on which it performs poorly. They propose to fine-tune the pre-trained CLIP model on the target task with a frozen classification layer and derive the final model via a linear interpolation between the pre-trained and fine-tuned models. The mixing coefficient is decided by validation. This task-patching procedure can be adopted for patching multiple tasks. In the experiments, the authors show that the proposed method retains good performance on the tasks the pre-trained model is already good at while improving the performance on the target tasks. In addition to classification tasks, the authors also show that their patching approach can improve the CLIP model on 1) typographic attacks, 2) object counting, and 3) visual question answering. Strengths - Improving the CLIP model is an active research problem.
The authors propose a method to improve the model performance on a wider range of tasks for open-vocabulary classification. - The proposed approach is simple yet effective. To improve the CLIP model on poorly performing tasks, the authors propose to freeze the classifier weights derived from the text encoder and fine-tune the model on the given tasks. This preserves the ability of open-vocabulary classification while improving model performance on new tasks. - The experiments show decent improvements (1% ~ 20%) on the tasks for which the model is fine-tuned while preserving the performance on the tasks that the CLIP model is already good at. - In addition to classification tasks, the authors show that their patching approach can improve object counting and VQA compared to the pre-trained CLIP model. Weaknesses - The writing could be improved. The design of the proposed method to retain open-vocabulary ability is based on freezing the classification layer derived from the text encoder. This is only mentioned in one sentence in the experimental setup. It was pretty confusing to me how the proposed method retains open-vocabulary ability until I located that single statement in the experimental setup. I would recommend the authors make it clearer throughout the paper. - The proposed method is simple. However, in practice, it requires non-trivial effort. To patch the model, the proposed method requires supervised annotations, fine-tuning the model with a hyper-parameter search, and validating the mixing coefficients. The procedure may not be simpler than adding downstream data to the pre-training dataset and then re-training the image-text model. - The authors only compare the performance against the pre-trained CLIP model. I am curious how the proposed method compares with a simple baseline: adding downstream data to the CLIP training dataset and re-training the image-text model. The authors discussed the limitations in the submission. <doc-sep>This paper proposes a simple yet effective way to do model patching, by interpolating the weights before and after fine-tuning. Experiments are performed with the image encoder from a recent dual-stream vision-language model, CLIP. Empirical results show that the proposed method improves performance on other tasks while preserving the performance on ImageNet. Moreover, the proposed method can make CLIP more robust and more powerful for tasks like VQA. The paper is overall well structured and easy to follow. The method is simple and effective. One of my main concerns is about the motivation of "patching models on a single new task". More discussion on the real applications of this scenario would be necessary to better show the motivation of this research. Yet another concern is that this work keeps referring to the "open-vocabulary" model. However, it is not clear why the proposed method only works for open-vocabulary models, or whether it can be extended to other open-vocabulary models beyond CLIP. The experiments also need some more clarification. Though the authors describe some connections and differences with some continual learning methods in the related work section, it would better show the efficacy of the proposed method to empirically compare with them. In particular, in lines 264-266, the authors said that "in contrast to regularization or replay-based methods, patching requires no extra computational cost during training". Since the proposed method also requires training (i.e., fine-tuning on the data of the target task), it is not clear to me what the "training" means here.
Can the authors provide an exact comparison of performance vs. training time/data between the proposed method and regularization methods like EWC and EWC++? Yes, there is a brief discussion of the limitations at the end of the paper.
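For concreteness, the patching step that all three reviews discuss is a per-parameter linear interpolation between the zero-shot and fine-tuned weights, with the mixing coefficient chosen on held-out data; a minimal sketch follows (the helper names, the alpha grid, and the way the two held-out accuracies are combined are my own assumptions, not the authors' code):

```python
def interpolate_weights(zero_shot_state, finetuned_state, alpha):
    """alpha = 0 recovers the zero-shot model, alpha = 1 the fine-tuned one.
    Both arguments are state dicts with identical keys and shapes."""
    return {k: (1.0 - alpha) * zero_shot_state[k] + alpha * finetuned_state[k]
            for k in zero_shot_state}

def select_alpha(model, zero_shot_state, finetuned_state,
                 eval_supported, eval_patch,
                 alphas=tuple(i / 10 for i in range(11))):
    """Pick the mixing coefficient on held-out data by trading off accuracy
    on the supported task (e.g. ImageNet) against accuracy on the patching
    task; eval_supported / eval_patch are user-supplied callables, and
    `model` is assumed to expose load_state_dict (torch-style)."""
    best_alpha, best_score = None, float("-inf")
    for alpha in alphas:
        model.load_state_dict(interpolate_weights(zero_shot_state,
                                                  finetuned_state, alpha))
        score = eval_supported(model) + eval_patch(model)
        if score > best_score:
            best_alpha, best_score = alpha, score
    return best_alpha
```

A sweep of this kind is also what the first review's request for an alpha ablation amounts to: reporting both held-out accuracies across the grid rather than only the selected point.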
The reviewers had some concerns about clarity of motivation and baselines. My own opinion is that this work is valuable for the community because of the simplicity of the method and depth of experiments.
This paper proposes a transformer-based method for vehicle trajectory forecasting. It proposes to combine the tasks of global localization and local refinement for more accurate trajectory forecasting. On the structure side, the authors design a motion query pair mechanism to model motion prediction as the joint optimization of the two tasks. Moreover, the interaction among agents is considered in the proposed method and is used to make a dense future prediction. The proposed method is demonstrated to be efficient on the large-scale Waymo Open Dataset, and an end-to-end variant of the proposed method is also provided for a broader study. Strengths: 1. The proposed method is well elaborated and the necessary implementation details are provided to help understand the model design. 2. The performance of the proposed MTR on the Waymo Open Motion Dataset is good, further advancing the SoTA under this setting. 3. The paper is mostly well written, so I can understand the motivation, high-level intuition, and model design quickly. Weaknesses: 1. Some details are not clear; for example, the implementation of MTR-e2e is hard for me to follow given the limited illustration at L273. 2. Though it may not be necessary, it would be helpful to also ablate the choice of the number of query pairs for the end-to-end version, and the claim that "since 6 intention points are too sparse to well cover all potential future motions" may need some experimental backup. 3. Given that the proposed method stresses the design of the query pair, it would be helpful for us to better understand its effectiveness through an ablation study using only one type of query versus both. Some technical design choices of the proposed method are explained at the end of the draft. I don't recognize any further potential negative societal impact or limitations of this paper. <doc-sep>This paper proposes a decoder method for the motion prediction task, which refines different modes with a static prior and dynamic attention. Its performance on the Waymo Open Motion Dataset is impressive, which demonstrates the effectiveness of iteratively refining the prediction (similar to DETR/DAB-DETR). Strengths * The idea of iteratively refining the prediction with a Transformer is novel in the motion prediction area. * The performance on the Waymo Motion Dataset is great. * The ablation experiments are well-organized and convincing. Weakness * The proposed method is only evaluated on one dataset. It would be more convincing if the experiments could be done on other large-scale datasets. However, considering the Waymo Open Motion dataset is fairly large, I think it is okay if there is not enough time/resources to try other datasets. * Some parts of the proposed method are not clearly described in the manuscript, which is understandable considering the space limit. * The proposed method has similarities with DAB-DETR in the object detection area. I think it would better position this work if the authors could discuss the proposed method's relation with recent DETR-related works. The limitation part is ok. <doc-sep>The work addresses the motion forecasting problem for autonomous driving. The authors introduce a transformer-based framework (MTR) that works in the following ways, as highlighted in the contribution section of the paper: - Separates the modeling of the global intention from local movement refinement of the trajectories in the transformer framework. The predictor is inspired by DETR.
- Interaction modeling between agents via an auxiliary dense prediction task, essentially letting the model predict the future directly (I didn't fully understand that part... see below) - SOTA results on the Waymo dataset. Ranked #1 among results without using ensembles. Since I don't understand the 2nd contribution, I cannot recommend accept at this point. Looking forward to understanding it post-rebuttal! POST REBUTTAL UPDATE: I understand the contribution better now. In line with the other reviews' feedback, I'll change the rating from 4 -> 7, and recommend an accept. Strengths: Strong results: Significant improvement over the SOTA. I checked the leaderboard on the Waymo Open Dataset, and verified the claims. Somewhat novel: Using DETR for motion forecasting is not that new anymore. I reviewed several papers for CVPR'22 and ECCV'22 that contained similar ideas. But those should be considered concurrent work. Additionally, the hierarchical separation of intent vs. fine control adds an additional hint of novelty, though the hierarchical separation by itself is not novel either [9, 51]. Good ablation: The paper contains ablations for all the contributions and novelties in their methods. This is great! Weaknesses: - Unclear writing: Abused notation, missing descriptions of variables, figures that are not self-contained. See below under "questions". - Insufficient experiments (minor): The only results are on the Waymo dataset. There are other popular datasets such as Argoverse and nuScenes. Having strong results on a secondary dataset would make the presentation a lot stronger. Yes, but it didn't hit the mark. The limitation section is about high-level limitations, like "the method is not great for long tail behaviors", not a wish list of future works. <doc-sep>In this paper, a Motion TRansformer (MTR) framework is proposed for the motion prediction task, including marginal and joint motion prediction. Specifically, motion query pairs are designed for global intention localization and local movement refinement, which takes advantage of both goal-based methods and regression methods. Experiments on the Waymo Open Dataset indicate the effectiveness of the proposed method. Strengths: - Strong motivation and well-organized. - Promising results. Weaknesses: (See Questions for details) - Some insights/details are not clear. - Lack of some experiments. There is a limitation section in the main body of the paper.
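As a reader's schematic of the decoding scheme these reviews describe (a static query per intention point for global intention localization, plus a dynamic query re-derived from the current prediction for local refinement), the rough structure might look like the following; every module name, tensor shape, and the choice to re-embed only trajectory endpoints are my own simplifications, not the paper's architecture:

```python
import torch

def decode_multimodal_trajectories(decoder_layers, scene_features,
                                   intention_points, embed, traj_head):
    """intention_points: (K, 2) fixed anchor endpoints; embed maps 2-D
    points to d-dim queries; traj_head maps queries to (K, T, 2) futures.
    All of these are placeholder callables."""
    static_q = embed(intention_points)            # fixed over all layers
    dynamic_q = torch.zeros_like(static_q)        # refined layer by layer
    trajectories = None
    for layer in decoder_layers:
        queries = static_q + dynamic_q
        queries = layer(queries, scene_features)  # cross-attend to the scene
        trajectories = traj_head(queries)         # (K, T, 2) at every layer
        dynamic_q = embed(trajectories[:, -1, :])  # re-embed current endpoints
    return trajectories
```

An ablation of the kind requested above would compare this loop with only the static query, only the dynamic query, or both.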
This paper proposes to model traffic vehicles using a transformer-based architecture for iteratively refining multimodal trajectory predictions. While the method is related to and builds upon several similar works in the area, it does also introduce some interesting new components such as the iterative refinement and the dynamic attention. Further, the strength of the experimental results from the combined system alone makes this paper important for researchers working in these areas: the method achieves the state of the art for trajectory prediction on two very widely used datasets (Waymo and Argoverse), compared to published leaderboards. All four reviewers unanimously agree that this paper is above the bar for acceptance, and I concur.
This paper investigates particle-based state estimation in the presence of unknown observation and transition models. This is challenging for a number of reasons, in particular due to the non-differentiable resampling step in particle-based methods. Prior work has proposed leveraging Fisher's identity to derive a maximum likelihood objective for the model parameters that bypasses the resampling step; however, such existing approaches are computationally expensive for large models. This paper builds on this prior work and introduces a particle approximation that trades off bias for computational efficiency. The authors compare this approach with existing baselines and demonstrate superior performance on a real-world and a synthetic AV dataset. Strengths: * This paper studies a well-motivated problem: in real-world scenarios the observation / transition models may be unknown, necessitating methods that can estimate the parameters of these models. * The introduction and methods section are well-written. The paper does a good job detailing the shortcomings of prior work, providing background information for the method, and laying out the proposed approach on a step-by-step basis. * The experimental results are convincing, demonstrating superior performance compared to a number of baselines across a number of quantitative metrics. Weaknesses: * The paper highlights that this approach is useful in multi-agent settings, yet there is nothing about the method specifically tailored to multi-agent problems. While the experiments focus on a multi-agent setting, the method seems designed for any general state estimation problem. This makes the message of the paper somewhat confusing. * Even after reviewing the appendix, the paper provides sparse details about the datasets. Ideally the authors can help to answer the following questions: who collected the real-world dataset? What is the nature of the agents in the dataset? In which specific ways are the real and synthetic datasets different? * The paper has a limited number of figures and visualizations. It would help to provide visualizations of the datasets and a qualitative analysis of the results. Some of these are available in the appendix; perhaps they can be brought up to the main section of the paper. * The authors describe limitations in Section 6, namely scaling this method to more complex empirical domains with occlusions and high-dimensional observations. While addressing these is not strictly necessary, doing so would help strengthen the empirical findings. <doc-sep>This paper proposes a method for learning observation and transition models for bird's-eye-view multi-car autonomous driving scenarios. Using a fixed-lag approximation of the score function along with a deterministic motion model, inference of high-dimensional models can be achieved. Strengths: The paper is well-written, and the method is generally well-justified. The method leverages specific assumptions from the bird's-eye-view autonomous driving scenario to learn observation and transition models. Weaknesses/Questions: More evaluation of the fixed-lag size would be beneficial. At what point is path degeneracy an issue? At what point is a fixed lag uninformative? Since this most likely depends on the number of particles as well, it would be nice to see a comparison with the fixed-lag window size and the number of particles being modified simultaneously. Why does the real data show less performance difference when modifying the fixed lag?
Why did real data cause large gradients across all methods? If performance is independent of the fixed-lag size, what is the method gaining? The plot shows a minimum at L=5 for real data, but what happens for L=1,…,5? If this small a window size gives similar results for real data, then learning the observation and transition models may not require longer-horizon inference. If DPF-SGR performs better in terms of accuracy for 25 steps, what is preventing the other methods from also limiting the number of steps used for training? Since other methods may or may not differentiate through the marginal log-likelihood, does it make sense to even compare the baselines using this? Relative bearing is usually not available as a measurement, and estimates would be somewhat noisy. Is this a realistic setup? Other comments: Figure 1 is somewhat wasteful in terms of white space, and also does not illustrate much. Consider improving the figure, as this is the only one in the paper. One of the figures in the supplementary material may better illustrate the method and application than the current Figure 1. How sensitive is the method to the agent's state noise? In reality, this will be imperfect. Is assuming the motion model to be deterministic adequate? In real-world scenarios, this may not be the case. <doc-sep>The paper proposes a new particle filtering approach for estimating the score function of state-space models (SSMs). The authors do so using Fisher's identity to circumvent the non-differentiable sampling step in particle filtering when estimating the score function. Moreover, they circumvent the potential issue of path degeneracy, where the particles converge to a single one, by using a fixed lag $L$ up to which the estimates are calculated, based on the assumption that observations after time $t+L$ are not very useful for estimates at the current time step. They also derive the use of a motion model for policy-based approximations by showing that the gradient of the policy corresponds with that of the SSM, allowing it to be plugged into the gradient of the score function. Their results for approximating the states of external objects from bird's-eye views of a vehicle show good log-likelihoods and state estimates on both real and simulated datasets. ### Strengths - The proposal of using Fisher's identity for approximating SSMs enables low-variance state estimates and also circumvents the non-differentiable nature of the sampling process. - The computational cost is drastically reduced by considering a fixed time window up to which the estimates are calculated rather than going through the whole trajectory. - The paper is very well written. It is very easy to follow, the relevant works, preliminaries and approaches are explained in simple, understandable terms, and the paper has a good flow from start to end.
In order to efficiently compute the required smoothing distributions, it proposes using fixed-lag smoothing. The resulting method is evaluated on a state estimation task in an autonomous vehicle setting where 2D poses of surrounding vehicles need to be estimated from observations. The paper is well written and easy to follow, and the main idea, intuitions, and mathematical details are clear. Using fixed-lag smoothing to approximate the required smoothing estimates efficiently is a simple idea. Yet, as this seems to be the paper's main algorithmic contribution the effects and limitations of this approximation would need further investigation. While I agree, that the assumption seems reasonable for many systems, ablations for different smoothing lengths $L$ would be good to see. In general, I do not believe the conducted experiment is sufficient to allow assessment of the method's full potential, as it relies on unrealistic assumptions (lines 218-220), largely pre-engineered transition and observation models, and only considers a narrow scope of applications.
The authors propose a multi-object state space estimation approach based on particle filters where the gradients are computed through Fisher's identity. Strengths: - Clear story and well motivated (to avoid biased or high-variance gradient approximations) - Well-structured paper - Comparison to two baseline methods - Evaluation on a real-world dataset and two synthetic datasets - Detailed discussion of all model assumptions and limitations Weaknesses: - The real-world AV tracking task is not described in sufficient detail - Therefore it is difficult to assess how relevant the approach is and how well it could work on more complex real-world tasks - In general, a discussion of the applicability of the model to other problems, not only multi-object tracking, would be important Update: The modified Figure 1 and the additional explanations in multiple places improve the paper.
HyperBO assumes the tasks are independent given the hyperparameters, unlike typical metalearning approaches, which assume tasks are related. This allows for an efficient Kronecker decomposition of the kernel and thus linear, rather than cubic, scaling across tasks. Using this model, HyperBO performs BO as usual: maximize the acquisition function to obtain the next point to evaluate. HyperBO also makes the critical assumption of an offline pre-training of hyperparameters on a representative set of completed tasks; during optimization itself the hyperparameters are fixed. I have a few key concerns about this paper. - Why fixed hyperparameters? This is clearly the bottleneck of metalearned BO, and if these hyperparameters are learned offline, this seems to (A) somewhat eliminate the strength of HyperBO, which is the linear scaling per task (obviously this still helps significantly during the offline training, but it is still a point of concern of mine), and (B) not be robust, especially if the set of representative completed tasks is heavily biased. - HyperBO, in the experiments, uses the PI acquisition function. Is there a particular reason why this is? PI is quite greedy (even more than EI), so is there any intuition as to why PI is appropriate in this situation? - In Figure 2b, I am somewhat concerned about the empirical performance of HyperBO. Though it beats the baselines, it does so in a 4D search space, using thousands of tasks; this seems like overkill. The error bars are also all over the place. This is somewhat unfair of me to ask for, I admit, but I am curious if a much simpler approach involving restricting the search space (given that it is fixed) would help (see the paper "Learning search spaces for Bayesian optimization", Perrone et al., 2019). I feel like there is definitely enough data for this to make a difference. - Also, the experiments only really concern one optimization problem involving optimizer hyperparameters. Though this one experiment is quite impressive in terms of the data involved, it would be nice to see another experiment (say for tasks that might be easier, like tuning a random forest). I have some concerns about the assumptions used in the methodology, as well as the experiments, which leave a number of open questions. In particular, the fixing of GP hypers seems to largely remove the need for scaling, which is the primary strength of HyperBO. Furthermore, the experimental setup uses a large amount of data to achieve results that are somewhat unconvincing in my mind, and only one optimization problem is presented (though, worth noting, it is thoroughly analyzed). Thus, I can't recommend acceptance at this time. <doc-sep>This paper presents a Bayesian optimization method based on meta-BO. The motivation is that tasks can share the same parameter structure, and this shared information, e.g. correlation between tasks, can be transferred to new and similar tasks. An example is to optimize the hyper-parameters of the same optimizer across different architectures and different datasets. This problem is a very important one in the community of Bayesian optimization, and a reasonable method can lead to a potentially dramatic decrease in the required computation, especially when the objective function is very expensive. This work tries to overcome limitations of existing methods. For example, the method proposed in this work does not need to evaluate all objective functions associated with all tasks on the same parameters.
The reviewer appreciates the authors putting effort into the empirical evaluation of the proposed method. However, the proposed approach is not interesting to the Bayesian optimization community and is trivial to some degree. The reviewer believes that the targeted problem presented in this work is a very important one and an effective method could be of great practical value. In the abstract, the authors claim that "data from similar functions" could lead to a better prior for GP. Obviously a better prior for GP is desirable, and that is why the marginal likelihood is used to optimize the parameters of a GP. From such a claim, it is expected that an efficient method for BO will be presented by exploring novel similarities between tasks. However, throughout this paper, there is no definition of a similarity between tasks, and tasks are treated as independent. This raises my concern about this work's novelty, which is my biggest concern with this paper. The authors claim that the critical difference between this work and standard BO algorithms is the initial learning process in line 2 of Algorithm 1. The corresponding likelihood of this approach is given in eq(2). I do not see how this approach is different from existing GP modeling; eq(2) is simply the unnormalized marginal likelihood for all data points, since all tasks are assumed to be independent. Such a formulation is not only trivial to the GP community, but also to the empirical Bayes community. Additional (minor) issues: 1. the graphical model for GP in Figure 1 is wrong 2. there are a lot of inconsistencies in this paper. In the assumptions section, it is assumed that the variance is known; however, the variance is a hyper-parameter in the marginal likelihood. 3. lots of claims and statements are superfluous. For example, the authors claim that one limitation of existing approaches is that the total number of BO iterations must be set in a manual way. However, throughout this paper, the number of iterations is still pre-defined. What is the point of saying this is a limitation while not touching it at all? As another example, the authors claim "interpretability of intermediate steps" is lost in existing methods; however, this problem is not touched either. 4. Another contribution of this paper is a tuning dataset. I can see the value of such a dataset; however, failing to explicitly describe the required computation resources makes claiming this as a contribution less convincing. The proposed method is trivial. The theoretical part presented in this paper is very minimal and incremental. <doc-sep>This paper suggests a meta Bayesian optimization strategy that optimizes the free parameters of a GP, including a prior function and noise variance, where multiple sets of historical observations are given. In particular, the proposed method chooses the free parameters using one of three approaches: (i) optimizing a marginal likelihood, (ii) measuring KL divergence, (iii) considering both marginal likelihood and KL divergence. The authors finally show theoretical analyses on regret bounds and numerical results on hyperparameter optimization. ### Reasons to Accept + It is well-written and well-organized. + It solves a very interesting problem, which is transferring a history to the current task in a Bayesian optimization setup. + Compared to the work by Wang et al. (2018b), it handles more realistic setups. + It provides promising numerical results and sound theoretical results.
### Reasons to Reject - I do not think that it degrades the contributions much, but four-dimensional search space is relatively small, compared to other Bayesian optimization or hyperparameter optimization papers. - Following the above point, is there any specific reason why the authors use four-dimensional search space? I do not think this algorithm is not scalable. Moreover, for example, batch size can be one of the meta-parameters to be optimized. ### Questions to Authors 1. Can you elaborate why the proposed method does not train a GP model every iteration, e.g., every $t = 1, \\ldots, T$? I think that it can be possible without (relatively) expensive computational costs. 1. H* NLL does not use a matching dataset, right? If you did not use multi-task GP regression, which has an additional input to indicate task information, does H* NLL (i.e., optimizing Equation (2) with $D_N$) work appropriately? I think that this paper addresses an interesting problem and suggests a novel method as described above. Thus, I would like to recommend acceptance. <doc-sep>This paper is concerned with speeding up Bayesian optimization by using evaluation data from previous, related tasks defined over the same configuration space. The authors propose to model the data from each experiment (or "task") by independent Gaussian processes, which all share the same mean and covariance function. This surrogate model can be learned from past data. The paper also presents experiments on a fairly simple search space of 4 optimizer parameters. This is done for a bunch of datasets and NN models. And there is a pretty simple extension of theoretical results from (Wang, 2018b). The problem of "warmstarting" HPO by making use of data from previous experiments is an obvious idea, and it has seen a large amount of past work, much of which the authors of this submission do not seem to be aware of, neither apparently was (Wang, 2018b) which seems more of a theoretical paper. In particular, there is quite a lot of work which uses GP models and scales linearly in terms of the number of past experiments, contrary to what is stated in the introduction. Two of the most interesting ones are maybe [1], [2]. The authors here cite (Perrone, 2018), which has these citations and more, so it is pretty odd the authors do not mention (or compare against) any of them. Given the straightforward nature of what is proposed here (a setup closely related to what is done in [3]), I'd be quite surprised if for example [1] would not outperform it. After all, the assumption that data from experiments on quite different models can be modeled by the same mean and covariance function, is pretty strong. There are all sorts of issues with this idea, for example what if data from some tasks is much larger than data from others? Moreover, in what is proposed here, the surrogate model parameters do not even seem to be adapted to the current task, even as data from it becomes available. Here, methods like [1], [2] seem much more compelling to me, as they try to for example rank previous experiments by closeness to the current one. [1] is doing this without having to define any meta-features of the dataset, and also of course without relying on observations at the same configurations (given you model your data with a GP, you should certainly not need that anyway). The experiments are not meaningful, because essentially all relevant prior work is missing for comparison. 
The authors more or less compare their proposal (in two variants) against a bunch of baselines, as if there was no relevant prior work. In fact, they even seem to invent their own methods to compare against, such as "MIMO", in a way which has never been used for transfer HPO. Why? Please read about and compare against relevant prior work. Given that they cite related work (e.g., Perrone 2018), they should have been aware of it. Apart from that, I also do not get much out of the experimental setup. Why was it chosen that way? Does it have any practical relevance? Does anybody else use this learning rate schedule, or was it just made up for this paper? I also did not find a discussion of a pretty critical point: how are the data points chosen for the tasks you train on offline? In order to be realistic, these would have to be active choices themselves, because that is the data we would have obtained by running BO on them. Instead, my suspicion is that past data was sampled randomly, which would correspond to pure exploration (random search). Such data is obviously more valuable to obtain a good surrogate model fit, but also more expensive to obtain in the real world (one would have to run random search). [1] Feurer et al.: Practical Transfer Learning for BO, https://arxiv.org/abs/1802.02219 [2] Wistuba et al.: Two-stage transfer..., ECML 2016 [3] Golovin et al.: Google Vizier, KDD 2017 This paper proposes a simple idea for warm-starting BO by fitting the parameters of a GP surrogate model on past data. Unfortunately, a lot of relevant prior work is ignored here and not compared against. Instead, the proposed approach is compared against simple baselines, as well as methods that mostly seem to have been made up (such as "MIMO").
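To make the modeling setup under discussion concrete (one shared GP mean/kernel fit on data from all past tasks, with cost linear in the number of tasks because the tasks are treated as independent), here is a minimal sketch. The RBF kernel, the data layout, and the optimizer are illustrative assumptions, not HyperBO's actual implementation.

```python
# Minimal sketch: fit one shared GP prior (kernel hyperparameters) by summing
# log marginal likelihoods over independent tasks. Cost is linear in the number
# of tasks because each task only needs its own small Cholesky factorization.
import numpy as np
from scipy.optimize import minimize

def rbf_kernel(X1, X2, lengthscale, signal_var):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return signal_var * np.exp(-0.5 * d2 / lengthscale ** 2)

def task_log_marginal(X, y, lengthscale, signal_var, noise_var):
    K = rbf_kernel(X, X, lengthscale, signal_var) + noise_var * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha - np.log(np.diag(L)).sum()
            - 0.5 * len(y) * np.log(2 * np.pi))

def negative_joint_nll(log_params, tasks):
    lengthscale, signal_var, noise_var = np.exp(log_params)
    # Independence across tasks => the joint likelihood factorizes into a sum.
    return -sum(task_log_marginal(X, y, lengthscale, signal_var, noise_var)
                for X, y in tasks)

# Toy "completed tasks": each is (X, y) on the same 4-D search space.
rng = np.random.default_rng(0)
tasks = [(rng.uniform(size=(30, 4)), rng.normal(size=30)) for _ in range(10)]
result = minimize(negative_joint_nll, x0=np.zeros(3), args=(tasks,), method="L-BFGS-B")
print("shared (lengthscale, signal_var, noise_var):", np.exp(result.x))
```

Nothing in this sketch adapts the shared hyperparameters to the current task once online optimization starts, which is exactly the fixed-hyperparameter concern raised by the reviewers.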
This paper claims a practical improvement over one of the earlier meta-BO methods. Warm-starting BO or HPO by making use of data from past experiments or tasks seems to be interesting and useful for some applications. In fact, there is a large amount of work on this topic, but a lot of relevant prior work is unfortunately ignored in this paper. I appreciate the authors' efforts in responding to the reviewers' comments. However, after the discussion period, most of the reviewers had serious concerns about this work, pointing out that the proposed method is rather trivial and the comparison is made only against a simple baseline. It was also suggested to improve the experiments. While the idea is interesting, the paper is not ready for publication at the current stage.
Overall: This paper proposes soft decoupled encoding (SDE), a special multilingual lexicon encoding framework which can share lexical-level information without requiring heuristic preprocessing. Experiments for low-resource languages show consistent improvements over strong multilingual NMT baselines. General Comments: To me this paper is very interesting: it nicely summarizes and combines previous efforts in two separate directions for sharing multilingual lexicons: based on surface similarity (how the word is spelled, e.g. subword/char-level models), and based on latent semantic similarity (e.g. Gu et al. 2018). However, in terms of the proposed architecture, it seems to lack some novelty. Also, more experiments are essential for justification. I have some questions: (1) One of the motivations of Gu et al. 2018 is that spelling-based sharing is sometimes difficult/impossible to get (e.g. distinct languages such as French and Korean), but monolingual data is relatively easy to obtain. Some languages such as Chinese are not even “spelling” based. Will distinct languages still fit in the proposed SDE? From my point of view, it would break the “query” vector used to attend to the semantic embeddings. (2) How do you decide the number of core semantic concepts (S) in the latent semantic embeddings? Is this matrix jointly trained in the multilingual setting? (3) Are the latent semantic embeddings really storing concepts for all the languages? Say you pick words in different languages with similar meanings: will they naturally get similar attention weights? In other words, do multiple languages, including very low-resource languages, learn to naturally align to the semantic embeddings during multilingual training? I am a bit doubtful, especially for the low-resource languages. (4) It seems that the language-specific transformation does not always help. Is it because there is not enough data to learn this matrix well? (5) During multilingual training, how do you balance the number of examples for low- and high-resource languages? <doc-sep>This paper focuses on the problem of word representations in multilingual NMT systems. The idea of multilingual NMT is to share data among multiple language pairs. Crucially this requires some way to tie the parameters of words from different languages, and one popular method is to share subword units among languages. The problem is that subword units in different languages may not be semantically equivalent, and many semantically-equivalent concepts are not represented by the same subwords. This paper proposes an alternative way to share word representations, in particular by proposing a common set of "semantic" concept vectors across languages which are then folded into the word representations via attention. The problem is well-motivated and the proposed solution is reasonable. Previous works such as Gu et al. 2018 have been motivated in a similar fashion, and the proposed solution seems to outperform it on the TED dataset of Qi et al. 2018. The experiments are informative. The main open questions I have are: (a) Varying the latent embedding size. It seems like only 10,000 is tried. Since this is the main contribution of the work, it would be desirable to see results for different sizes. Is the method sensitive to this hyperparameter? Also, suggestions on how to pick the right number based on vocabulary size, sentence size, or other language/corpus characteristics would be helpful. (b) What do the latent embeddings look like?
Intuitively will they be very different from those from Gu et. al. 2018 because you are using words rather than subwords as the lexical unit? (c) The explanation for why your model outperforms Gu et. al. 2018 seems insufficient -- it would be helpful to provide more empirical evidence in the ablation studies in order really understand why your method, which is similar to some extent, is so much better. The paper is generally clear. Here are few suggestions for improvement: - Table 1: Please explain lex unit, embedding, encoding in detail. For example, it is not clear what is joint-Lookup vs. pretrain-Lookup. It can be inferred if one knows the previous works, but to be self-contained, I would recommend moving this table and section to Related Works and explaining the differences more exactly. - Sec 4.2: Explain the motivation for examining the three different lexical units. - Table 3: "Model = Lookup (ours)" was confusing. Do you mean "our implementation of Neubig & Hu 2018? Or ours=SDE? I think the former? - Are the word representions in Eq 4 defined for each word type or word token? In other words, for the same word "puppy" in two different sentences in the training data, do they have the same attention and thus the same e_SDE(w)? You do not have different attentions depending on the sentence, correct? I think so, but please clarify. (Actually, Figure 2 has a LSTM which implies a sentential context, so this was what caused the potential confusion). - There are some inconsistencies in the terms: e.g. latent semantic embedding vs latent word embedding. Lexical embedding vs Character embedding. This makes it a bit harder to line up Sec 4.4 results with Sec 3.2 methods. - Minor spelling mistakes. e.g. dependant -> dependent. Please double-check for others. <doc-sep>This paper presents an approach to creating word representations that operate at both the sub-word level and generalise across languages. The paper presents soft decoupled encoding as a method to learn word representations from weighted bags of character-n grams, a language specific transformation layer, and a "latent semantic embedding" layer. The experiments are conducted over low-resource languages from the multilingual TED corpus. The experiments show consistent improvements compared to existing approaches to training translation models with sub-word representations. The ablation studies in Section 4.4 are informative about the relative importance of different parts of the proposed model. Can you comment on how your model is related to the character-level CNN of Lee et al. (TACL 2017)? In the experiments, do you co-train the LRLs with the HRLs? This wasn't completely clear to me from the paper. In Section 4.2 you use phrases like "concatenated bilingual data" but I couldn't find an explicit statement that you were co-training on both language pairs. What does it mean for the latent embedding to have a size of 10,000? Does that mean that W_s is a 10,000 x D matrix? Is Eq (4) actually a residual connection, as per He et al. (CVPR 2016)? It looks more like a skip connection to me. Why do you not present results for all languages in Section 4.6? What is the total number of parameters in the SDE section of the encoder? The paper states that you encode 1--5 character n-grams, and presumably the larger the value of N, the sparser the data, and the larger the number of parameters that you need to estimate. For which other tasks do you think this model would be useful?
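For readers trying to follow the questions above about the query vector, the shared latent matrix W_s, and Eq. (4), here is a rough sketch of the word encoder as the reviewers describe it. The character n-gram hashing, the sizes, and the language-specific transform are illustrative guesses, not the paper's exact implementation.

```python
# Rough sketch of an SDE-style word encoder: a bag of character n-grams gives a
# lexical embedding, a (language-transformed) query attends over a shared matrix
# of latent "semantic concepts", and the two parts are added via a skip connection.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SDEWordEncoder(nn.Module):
    def __init__(self, n_ngram_buckets=50000, n_concepts=10000, dim=512, n_langs=2):
        super().__init__()
        self.ngram_emb = nn.EmbeddingBag(n_ngram_buckets, dim, mode="sum")
        self.lang_transform = nn.ModuleList([nn.Linear(dim, dim) for _ in range(n_langs)])
        self.concepts = nn.Parameter(torch.randn(n_concepts, dim) * 0.01)  # shared W_s

    def forward(self, word, lang_id):
        # 1-5 character n-grams of the word, hashed into a fixed bucket space.
        ngrams = [word[i:i + n] for n in range(1, 6) for i in range(len(word) - n + 1)]
        ids = torch.tensor([[hash(g) % self.ngram_emb.num_embeddings for g in ngrams]])
        lexical = self.ngram_emb(ids)                          # (1, dim) lexical embedding
        query = torch.tanh(self.lang_transform[lang_id](lexical))
        attn = F.softmax(query @ self.concepts.t(), dim=-1)    # attention over concepts
        latent = attn @ self.concepts                          # (1, dim) latent semantic part
        return lexical + latent                                # skip connection, as in Eq. (4)

enc = SDEWordEncoder()
print(enc("puppy", lang_id=0).shape)  # torch.Size([1, 512])
```

Under this reading, the attention over the shared concept matrix depends only on the word's spelling and language, not on the sentence context, which is one plausible answer to the per-type vs. per-token question above; only the authors can confirm whether it matches their model.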
although some may find the proposed approach as incremental over e.g. gu et al. (2018) and kiela et al. (2018), i believe the authors' clear motivation, formulation, experimentation and analysis are solid enough to warrant the presentation at the conference. the relative simplicity and successful empirical result show that the proposed approach could be one of the standard toolkits in deep learning for multilingual processing. J Gu, H Hassan, J Devlin, VOK Li. Universal Neural Machine Translation for Extremely Low Resource Languages. NAACL 2018. D Kiela, C Wang, K Cho. Context-Attentive Embeddings for Improved Sentence Representations. EMNLP 2018.
The paper presents a new loss function for survival analysis based on proper scoring rules, intended to penalize confidently wrong predictions less than the log-loss does. The paper is interesting; however, the benefit over the traditional maximum likelihood estimator is small, and the writing needs a fair amount of work. I would also like to see an eval on data with far less censoring. A couple of comments: 1) EHRs have only been generally adopted in the last couple of years. Only a couple of places have more. 2) The binary classifier citation on page 1 (Avati, Rajkomar) should also cite the plethora of recent machine learning for healthcare results in this field 3) Likelihoods are calibrated (as is any error measured by a proper scoring loss) 4) There are other methods to fit survival functions such as "Adversarial Time-to-Event Modeling" by Chapfuwa in ICML 2018. There are probably also moment methods 5) I think the evaluation might also want utility because sharpness is a utility claim 6) Some of the statements in the writing are odd, such as the claim that probability distributions are uniquely identified by parameters. I'm not sure this is true with neural nets with symmetries. The paper doesn't need such claims 7) Instead of log-normals, I would like to see something nonparametric like the categoricals used for maximum likelihood estimation without latents in the limiting model in "Deep Survival Analysis: Missingness and Nonparametrics" by Miscouridou at MLHC 2018 <doc-sep> My main concern is that the authors fail to compare their approach to any of the modelling approaches discussed in the related works section. In particular, as mentioned by the authors, the WTTE-RNN has a similar architecture and thus would have been a crucial baseline for comparisons. Furthermore, I would have liked to see an evaluation on more datasets, especially since the data in Appendix H indicate that the proposed approach is only marginally better than MLE-based model fitting. Finally, in addition to the metrics presented, conventional metrics such as the C-statistic would have been interesting. I further miss a discussion of alternative approaches to achieve well-calibrated scores, especially post-hoc calibration using the validation set as discussed in Guo et al, ICML 2017. Related work is incomplete, for example the use of tensor-trains in RNNs to model EHR data (Yang et al) - would the proposed approach not benefit from the use of such tensorization to better model the high-dimensional, sparse EHR data? references: Guo et al, On Calibration of Modern Neural Networks, ICML 2017 Yang et al, Modeling progression free survival in breast cancer with tensorized recurrent neural networks and accelerated failure time models, Machine Learning for Healthcare Conference 2017<doc-sep>The authors introduce an extension of Continuous Ranked Probability Scores (CRPS) to the time-to-event setting, termed Survival-CRPS, for both right-censored and interval-censored event data. Further, the authors introduce a scale-agnostic Survival-AUPRC evaluation metric that is analogous to the precision-recall curve used in classification and information retrieval systems/models. The claim that the proposed approach constitutes the first use of a scoring rule other than maximum likelihood seems too strong, unnecessary, and irrelevant to the value of the presented work. It is not clear how the authors handled the irregularity (in time) of EHR encounters in the context of an RNN specification.
Also, if the RNN specification considered is similar to Martinsson, 2016, why wasn't it considered as a competing model in the experiments? In Table 1, it is not clear what the error bars represent; they also seem too small. The proposed approach addresses important questions in time-to-event modeling, namely, calibration and interval censoring. Although the connection with CRPS is interesting (the first of the two equations on page 3), it is quite similar to an accelerated failure time formulation, which for a log-normal specification is standard and popular for reasons similar to those highlighted by the authors, but this is not mentioned in the related work. The interval censoring is also interesting, though straightforward and perhaps not as relevant in more general time-to-event settings where events other than age are considered. The Survival-AUPRC is not sufficiently motivated. Without motivation or an intuition of why it should be used/preferred, it seems disconnected from the rest of the paper and its contributions. Without a more comprehensive evaluation that includes additional datasets and competing models (described in the Related Work Section) it is difficult to assess the value of the proposed approach.<doc-sep>The paper proposes the use of the Survival Continuous Ranked Probability Score instead of maximum likelihood estimation for personalised probabilistic forecasts of time-to-event data, thus estimating a distribution over future time. The authors describe the evaluation of their method using (1) proper scoring rule objectives; (2) evaluation of calibration using sharpness as a metric; (3) the survival precision-recall curve. The authors then apply these techniques to predicting time-to-mortality using an RNN that takes EHR patient records to predict the probability of death at a given time point. It's not clear how this is related to the Survival CRPS model or how this model is incorporated into the RNN. Overall, this is an important framework for estimating personalised predictions of survival events for patients with interval-censored data. The authors present a well thought-out paper with clearly and realistically articulated modelling assumptions. The authors also give an excellent critique of the underlying assumptions of current state-of-the-art survival methods. The authors are also to be commended for the mathematical elegance of the formulation. Although the paper is very well written and extremely well structured, I struggled with the lack of experiments available in the paper. The text embedded in Figure 3 is too small. The results section is somewhat sparse. Although the mathematical formulation is well-motivated and structured, it's not clear what the contribution of this work is. The difference between CRPS-INTVL and MLE-INTVL is incremental, and it's unclear what the significant benefits of CRPS over MLE are. What would be the interpretation of these differences in a real-world setting?
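To make the discussion of CRPS, sharpness, and censoring more concrete, here is a small numerical sketch of a CRPS-style score for a log-normal time-to-event prediction with a crude treatment of right censoring. It illustrates the idea only and is not the paper's exact Survival-CRPS formula; the grid and truncation point are arbitrary.

```python
# CRPS as the integrated squared difference between the predicted CDF and the
# (step-function) outcome. For right censoring, only the region before the
# censoring time is scored, since the event status after it is unknown.
import numpy as np
from scipy.stats import lognorm

def survival_crps(mu, sigma, time, observed, t_max=200.0, n_grid=20000):
    grid = np.linspace(1e-6, t_max, n_grid)
    cdf = lognorm(s=sigma, scale=np.exp(mu)).cdf(grid)
    if observed:
        target = (grid >= time).astype(float)              # event known to occur at `time`
        integrand = (cdf - target) ** 2
    else:
        integrand = np.where(grid < time, cdf ** 2, 0.0)   # censored: only pre-censoring mass scored
    return float(np.sum(integrand) * (grid[1] - grid[0]))  # simple Riemann sum

print(survival_crps(mu=np.log(30.0), sigma=0.2, time=30.0, observed=True))  # sharp, well placed
print(survival_crps(mu=np.log(30.0), sigma=2.0, time=30.0, observed=True))  # diffuse -> larger score
```

The two calls show the calibration/sharpness trade-off the reviewers mention: a sharp, well-placed log-normal receives a much lower score than a diffuse one at the same event time.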
All reviewers agree to reject. While there were many positive points to this work, reviewers believed that it was not yet ready for acceptance.
In this work, the authors extend the DABS benchmark [1] to include five more datasets. The new datasets are carefully chosen to cover a variety of underserved domains -- i) bacterial genomics, ii) semiconductor wafer manufacturing, iii) particle physics, iv) protein biology, v) satellite imagery. The authors provide evaluation of two new variations of masked autoencoders and contrastive learning under varying levels of corruption (masking) in the data for the five new datasets and the seven datasets from the original DABS benchmark. Generalized MAE does well on the satellite image dataset but it is not consistently the case across domains. Furthermore, the optimal amount of corruption is not just domain dependent and can depend on other factors making it hard to choose. [1] Tamkin, Alex, et al. "Dabs: A domain-agnostic benchmark for self-supervised learning." Neurips 2021 datasets and benchmarks track. The authors consider an interesting set of datasets from a wide range of disciplines. Given the wide range of datasets covered the updated DABS benchmark should be of interest to the broad community. To the best of my understanding, the authors have properly answered questions regarding the datasets containing offensive content. I have several concerns regarding the paper. Significance of the contribution: 1. Out of the five datasets that the authors use, three of them (higgs, genomics, eurostat) are directly available in tensorflow already. This implies the datasets are already in a usable format and I do not see what is the authors contribution in terms of these datasets. The introduction and description in the main text feels misleading. 2. The two algorithms that the authors use are a minor variation of MAEs and contrastive learning. Therefore, I do not think there is any insightful algorithmic variation that the authors introduce either. I understand that this is the benchmarks track and we do not expect completely new methods. Despite that I think what is introduced should not be stated as a new universal SSL method but should be honestly acknowledged as natural extensions/variations of well known approaches. 3. Thirdly, the authors show a table of results. I did not gather any new insights from the results and that was not very pleasing. 4. Finally, there is a bunch of experiments that the authors state ran into an issue and have been marked as pending (for some other results are marked OM). I understand that issues can happen and do not want to penalize authors for this. At the same time, it seems unfair to others. By accepting this we are allowing authors extra time to gather results that others did not have. <doc-sep>Five new real-world datasets in science and engineering are added to the original 7 datasets in DABS 1.0. Also, two additional unsupervised learning methods are introduced - Capri, and a generalized masked encoding. Both these algorithms and the ShEd algorithm from DABS 1.0 are then applied to all the 12 datasets and the variation of influence of corruption fraction on performance, across the 12 domains (and the three algorithms) is observed. The process and results convey the potential of using DABS 2.0 for assessing robustness of domain-agnostic SSL methods and also for studying the effect of design decisions across domains. - Supports generalization of SSL algorithms to less-studied and diverse domains and modalities - Enables assessment of how certain design choices for one domain affects the performance on another domain. 
In this work, the cross-domain difference in the influence of masking / permuting of embeddings becomes evident. - Tasks associated with the datasets introduced are real-world tasks. - One of the datasets - Bacterial Genomics includes in-distribution and out-of-distribution data enabling an assessment of robustness of models to distributional shifts. - Each newly introduced dataset is of a different size which can help get an idea of how a domain-agnostic algorithm is able to scale. - The baseline results indicate that the same SSL algorithm and settings do not perform well in all domains. This is the challenge for SSL algorithms - to generalize to multiple domains without needing manual domain-specific tuning. The fact that the results indicate this challenge, supports the purpose of this dataset. - Easy execution of pre-training and transfer learning with the help of newly added code is mentioned but the code / dataset are not made available for review. - Claims surrounding 'Universal SSL' may be a bit too broad, since graphs, point clouds, etc. are not included. <doc-sep>The paper extends the DABS benchmark with datasets from five new domains, proposes a new universal SSL algorithm extending masked auto-encoding. The paper also investigates an interesting metric, the corruption rate across the different domains. * Extension of a well-established benchmark * Significant contribution with 5 new datasets domains * New domains are novel in terms of being less studied than extensively studied, such as images, text, speech, etc... * Interesting investigation of corruption rate and how it differs based on the domain at hand * Discussion of external and internal validity The scope of the paper is relatively wide with the multiple domains and the two new algorithms. This also makes it difficult to supply a datasheet for all newly introduced datasets, elaborate on the datasets in terms of split, distribution, etc... Additionally, it makes it challenging to cover the details of the benchmarking with a good level of detail. <doc-sep>The authors contribute DABS 2.0, an extension to DABS dataset for universal self-supervision now with additional five new domains. The authors also propose two new algorithms and evaluate them over all the 12 domains with different corruption fractions. The paper takes universal SSL one step further in both dimensions: datasets and algorithms. This is a good contribution for the community. The 5 new domain datasets are legitimate contributions; however, in my view, MAE does not account as a novel contribution to this research. A lot of implementation details (in section 3) could have been provided to make it a thorough read. <doc-sep>The paper proposed a benchmark DABS 2.0, which contains extended real-world science and engineering dataset domains and a universal SSL algorithm. 1. A universal SSL algorithm is proposed along with the benchmark and works as a strong baseline. 2. The insight of evaluating and employing the universal self-supervised on real-world science and engineering datasets would be helpful for both AI for Good and AI for Science. 3. Comprehensive discussion including both internal validity and external validity 1. The results part mainly explored the masked ratio experiments on the proposed benchmark, compared with two baseline methods. More details on the experiments, models, hyper-parameters are expected to be provided. 2. The documentation of the codebase is constrained into README file without the detailed APIs .
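Since the corruption (masking) fraction is the main knob swept in the baselines above, here is a tiny, domain-agnostic illustration of what that knob does; the mask-value handling is generic and not the benchmark's exact implementation.

```python
# Mask a random fraction of input tokens/patches before the encoder sees them.
# The returned mask tells the reconstruction or contrastive loss which positions
# were corrupted.
import torch

def corrupt(x, corruption_fraction, mask_value=0.0):
    # x: (batch, seq_len, dim) embedded tokens or patches from any domain
    mask = torch.rand(x.shape[:2]) < corruption_fraction
    x_corrupted = x.clone()
    x_corrupted[mask] = mask_value
    return x_corrupted, mask

x = torch.randn(4, 16, 32)
x_corrupted, mask = corrupt(x, corruption_fraction=0.5)
print(mask.float().mean())   # roughly 0.5
```

The same function applies whether the (batch, seq_len, dim) input comes from text tokens, image patches, wafer maps, or spectra, which is the sense in which the corruption fraction is a domain-agnostic hyperparameter; the benchmark's finding is that its best value is not the same across those domains.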
DABS 2.0 is an extension of DABS to include five more datasets, and serves as a benchmark for self-supervised representation learning. The new datasets cover new domains such as genomics, industrial images, biology, satellite imagery, etc. In addition, two new self-supervised learning methods are evaluated, and their robustness to domains and hyperparameters is assessed. Furthermore, a new technique called Capri is introduced, which combines the benefits of masked auto-encoders and contrastive learning to learn representations. The reviewers are generally positive except R1, who recommends rejection. The main concern of R1 is that the contributions over DABS are not significant (e.g., several datasets are already available in TensorFlow). Contrast this with R2, R3, R4, and R5, who find the contributions valuable. Given the importance of unsupervised representation learning and the significant effort being put in by the research community, this work is valuable. Thus they recommend acceptance.
Interesting work. The imbalance problem is an essential problem for machine learning and its applications. Even though many works focus on the foreground-background class imbalance problem, such as polyp detection (two classes: polyp and background), this work focuses on foreground-foreground class imbalance, where multiple classes exist. The proposed method appears technically sound. The proposed method randomly samples images from the imbalanced dataset following a probability distribution, which is computed before training by solving a quadratic optimisation problem. This optimisation finds a probability distribution that gives an equal expectation of class frequency for all classes. The authors demonstrated the validity of the proposed method by comparing the detection performance among the proposed method and the two classical subsampling- and oversampling-based methods in an application to fetal anatomy detection. In this comparison, the proposed method improved the mean average precision for classes with few images more than the classical methods did. As a result, the proposed method achieved the best performance over all anatomies among the three methods. The setting of data splitting in the experiments is unclear. For fair evaluation, the dataset should be split into training, validation and test data without duplication of patients. Without this no-duplication splitting, we cannot evaluate the generalisation ability of the trained model. The presented experiments appear to evaluate only on training data. I'm interested in the generalisation ability of the model trained with the proposed method. Only one dataset is adopted in the experiments. Evaluations with a few datasets would be welcome for the demonstration of the validity of the proposed method. The survey also looks limited. <doc-sep>The paper tackles an important problem of training machine learning models on imbalanced datasets, a problem that is prevalent in medical imaging settings. The paper is mostly well-presented. The results demonstrate improvements over the two other methods compared against, showing potential for real-world use. The major weakness of this paper is the lack of context for related work in this area. This manifests in two ways: both the introduction/discussion and experiments. The imbalanced dataset problem is a well-studied problem with many existing rebalancing methods available. The paper mentions only two foreground-foreground papers, and does not compare the results with either of those existing methods. A fairly common method is to oversample classes inversely proportionally to their frequency, yet this is not used as a baseline. The paper would be significantly improved with more acknowledgement of the context of balancing methods and improved comparisons in the experiments. At present, only heuristic oversampling of two minority classes and uniform sampling are compared against the proposed method. These are decent positive controls, but they do not place the work in the context of existing work. Note I am not setting a criterion that they must outperform all previous methods, but the relative performance to alternative ideas is important. <doc-sep>The paper is generally well written and contains thorough proof of their method. Their method performs significantly better than the baselines and requires only minimal overhead. They train their method multiple times in order to calculate standard deviations and p-values stating the significance of their results. The baselines are very weak!
A follow up paper of one of their main references proposes a method regarding foreground-foreground class imbalance in object detection. This is not mentioned in the paper and would make for a great comparison.[1] The relevance for the medical context could be discussed a bit more. Just using a medical dataset is not enough. They evaluate only on one dataset. Figure 1 & 2 are not mentioned in the papers text. 1. Oksuz, K., Cam, B. C., Akbas, E., & Kalkan, S. (2020). Generating positive bounding boxes for balanced training of object detectors. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (pp. 894-903). <doc-sep>The paper is very well structured and written. I enjoyed reading it. The research problem is stated clearly, which is really appreciated. The introduction is clear. The related work section could be more detailed but still succeeds in positioning the work properly. The presentation of the proposed solution is well written. I appreciate the balance between textual explanations and mathematical formalism, leaving advanced details for the appendixes. The conversion of the sums into matrices in equation (1) builds bridges between the theory and the implementation. In 3.2.1, the choice of the hyperparameters is well detailed. In 3.3.2, the statistical analysis of the results, presenting not only the best results but also the mean and standard is very good and something I would like to see more often in the papers. The conclusion in 4 is well written, presenting not only positive points of the method (increased performance VS baseline) but also negatives (increased complexity due to the introduction of new hyperparameters) What could be improved is mainly the experimental setup. My main concerns are: 1) Why did you choose Yolo (and not Faster-RCNN for example which yields better performance)? And why only Yolo? Testing the method on different models would strengthen the paper a lot. 2) Also, validating the approach on other datasets would make the paper much stronger. 3) The Yolo model used is pre-trained on VOC and fine-tuned on the fetal anomalies detection dataset. Could you detail more how the fine-tuning is done? For example, if any part of the model is frozen or how you handle the learning rate. Minor points: 4) In table 2, how were the thresholds chosen? And why only 2? 5) In the discussion section, I am missing a word about complexity VS performance. The method increases the performance on the final task but increases the complexity of the training. I am wondering if there is a way to quantify this increase of complexity, for example by measuring the extra time or extra computation needed.
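As a concrete reading of the sampling scheme the first reviewer summarizes (precomputing image sampling probabilities so that the expected class frequencies are equalized), here is a small sketch; the exact objective and constraints of the paper's quadratic program may differ.

```python
# Choose image sampling probabilities p (non-negative, summing to 1) so that the
# expected number of object instances drawn per class is as uniform as possible.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_images, n_classes = 200, 6
# counts[i, c]: number of instances of class c in image i (toy, imbalanced data)
counts = rng.poisson(lam=rng.uniform(0.2, 3.0, size=n_classes), size=(n_images, n_classes))

def imbalance(p):
    expected = counts.T @ p                              # expected class frequency under p
    return np.sum((expected - expected.mean()) ** 2)     # quadratic objective

constraints = [{"type": "eq", "fun": lambda p: p.sum() - 1.0}]
bounds = [(0.0, 1.0)] * n_images
p0 = np.full(n_images, 1.0 / n_images)
res = minimize(imbalance, p0, bounds=bounds, constraints=constraints, method="SLSQP")

print("class frequencies under uniform sampling:  ", counts.T @ p0)
print("class frequencies under optimized sampling:", counts.T @ res.x)
```

With probabilities like these, a standard weighted sampler can then draw training images; the extra hyperparameters and training complexity discussed by the last reviewer presumably enter at that stage.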
The paper receives overall positive comments from four knowledgeable and independent reviewers. They all like the novelty of the proposed work in addressing imbalanced sampling --- one of the most serious issues in medical image analysis. However, they also share the common concern, that is, lack of sufficient validation. Currently, only one dataset is used. The authors also provide a rebuttal about this. While I agree with the argument they present (time and resource constraints, medical imaging focus, etc.), it is not difficult to find another medical imaging dataset to test their idea. Therefore, I strongly encourage the authors to conduct such an additional experiment to make their final version much stronger.
This paper tackles the problem of generative modeling by using Langevin dynamics to sample from the denoising score function. Recently, this family of approaches (Song and Ermon 2019, Song and Ermon 2020) has shown promising and competitive results, positioning it as a potential alternative to GANs. The paper introduces several improvements over Song and Ermon (2020). First, a different sampling dynamic (Consistent Annealed Sampling) that is more stable than the traditional annealing scheme, achieved by carefully scaling the injected noise. Second, it is empirically shown that running a denoising step on the generated sample leads to an improvement of the FID score. Based on this observation, the paper proposes to use a denoiser trained in an adversarial fashion to synthesize more realistic images. The work addresses the very relevant problem of how to synthesize images in a realistic way, introducing some modifications to existing works that lead to an improvement in the quality of the generated images. The paper is well written and presents a nice introduction to the method, which allows the different modifications to be motivated in a natural way. The proposed modifications are analyzed in low-dimensional toy experiments and on small-scale images (CIFAR, LSUN-churches, Stacked-MNIST). In what follows I list a few questions: 1. Would it be possible to analyze the sampling strategy presented in Kadkhodaie and Simoncelli 2020 (concurrent work), and compare it to the one proposed in the paper? Both strategies seem to improve the stabilization of the procedure by scaling the noise. 2. Regarding the step of applying the denoiser to the generated sample: I wonder what happens if the denoiser is re-applied? Also, is this connected to the fact that the denoiser may have a fixed point and this fixed point might lead to a better sample? 3. Regarding using an adversarial denoiser: in the denoising literature, there are a few works connecting score matching and state-of-the-art image denoisers. I would like to see a better discussion of this. For example, see: Romano, Y., Elad, M. and Milanfar, P., 2017. The little engine that could: Regularization by denoising (RED). SIAM Journal on Imaging Sciences, 10(4), pp.1804-1844. Reehorst, E.T. and Schniter, P., 2018. Regularization by denoising: Clarifications and new interpretations. IEEE Transactions on Computational Imaging, 5(1), pp.52-67. --- After Discussion: I think this is a good paper and I would like to see it presented at ICLR 2021. <doc-sep>The submission presents three contributions. First, the authors show the inconsistencies in the existing annealed Langevin sampling used in score-matching generative models and propose to correct it with the newly proposed Consistent Annealed Sampling (CAS) algorithm. The second contribution claimed is in providing evidence of the benefits of the Expected Denoised Sample (EDS). Furthermore, the submission introduces a hybrid adversarial score-matching model that demonstrates improvements in terms of FID on simpler architectures. The proposed CAS algorithm is theoretically well-motivated, based on the observation that ALS is inconsistent with the scaling of the noise during the sampling process (although whether the noise should follow precisely a geometric progression is still an open question). The paper is well-written, and the ablation study is carried out well.
However, it is a bit confusing as to whether the EDS (although under a different name, the denoising jump) is a contribution of this paper or is something proposed prior to this work. I understand that this denoising procedure has already been presented as a necessary technique in score matching models. Nevertheless, I believe the authors contributed by showing that both ALS and CAS move samples towards the EDS (Proposition 3) and by showing additional empirical evidence of its benefits on synthetic and real datasets. Taking the EDS at the last Langevin step diminishes the impact of CAS (it doesn't bring an unambiguous improvement in FID scores in the experiments); otherwise, CAS is a very interesting finding, both theoretically and algorithmically, and a substitute for ALS. The effect of the hybrid model is also not persistent and depends on the architecture used. For an incremental improvement (a combination of two models), the improvement is not consistent across architectures. The paper does not explain whether there is a good rationale for such a combination; therefore I remain sceptical about the results. Given all the above, I am still leaning a bit towards accepting the paper, as it covers an interesting finding relating to ALS. Although the CAS effect on performance is limited by the EDS, score-matching models are of broad interest for the ICLR community.<doc-sep>The paper presents a novel approach for denoising score matching, where the Annealed Langevin Sampling has been substituted by Consistent Annealed Sampling, which adds more stability to the process. The paper is in general clear and well-written. The contributions are clearly highlighted and the proposed approach is conveniently compared with other state-of-the-art methods, demonstrating its superiority. Positive aspects: - The Consistent Annealed Sampling proposed in this paper is more stable than the Annealed Langevin Sampling - The combination of GAN and score matching improves the quality/diversity of the generated samples Negative aspects: - The limitation of the method to Gaussian noise - The presentation of a real scenario for your approach would have been a plus However, I have some questions: 1. What is the n_sigma parameter in Algorithm 1? 2. Algorithm 1, line 4: there is no iteration over 't' in the loop? 3. How does your denoising scheme work? Do you create noisy samples from your real data and try to denoise them using the proposed approach? Because if you take a sample affected by random noise (in the test phase), I guess it won't work. 4. The denoising scheme is used in a GAN framework, the denoised samples being perceived as real by the discriminator. Is the system trained end-to-end, or do you first denoise the image and afterwards train the GAN? 5. Could you please indicate an application scenario which could benefit from this approach, e.g. image-to-image translation, domain adaptation, etc.? 6. Your method assumes Gaussian noise. Can it be extended to the case of general noise (a noise model which could also be learnt)? <doc-sep>The article deals with generative models based on “Annealed Langevin Sampling” rather than a GAN. These models suffer from worse FID than GANs. The authors propose to denoise the last Langevin samples to reduce the gap in performance with adversarial networks. The paper is really easy to read, with good illustrations and supporting experiments.
In order to aid comprehension, especially for people new to ALS, it would have been great if the authors had provided an illustration (and comparison) of the evolution of samples along Algorithm 1 and Algorithm 2. The authors are honest in their comments on the revised results, but I don't know if they will be able to include the erratum in a final version. As I was not aware of “Annealed Langevin Sampling” before this review, my rating may not be confident.
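For readers new to ALS, here is a minimal sketch of annealed Langevin sampling with a final expected-denoised-sample (EDS) step on a toy score function. The step-size rule and the Tweedie-style denoising jump follow the usual recipe from the score-matching literature; they are not the paper's exact CAS update.

```python
# Annealed Langevin sampling over a geometric noise schedule, followed by one
# "denoising jump" x + sigma^2 * score(x, sigma) at the smallest noise level.
import math
import torch

def annealed_langevin(score_fn, shape, sigmas, n_steps=100, eps=2e-5, denoise_last=True):
    x = torch.rand(shape)
    for sigma in sigmas:                                  # schedule goes from large to small
        step = eps * (sigma / sigmas[-1]) ** 2
        for _ in range(n_steps):
            noise = torch.randn_like(x)
            x = x + 0.5 * step * score_fn(x, sigma) + step.sqrt() * noise
    if denoise_last:
        x = x + sigmas[-1] ** 2 * score_fn(x, sigmas[-1]) # expected denoised sample (EDS)
    return x

def toy_score(x, sigma):
    # Score of N(0, (1 + sigma^2) I): toy data is standard normal smoothed by noise sigma.
    return -x / (1.0 + sigma ** 2)

sigmas = torch.exp(torch.linspace(math.log(10.0), math.log(0.01), 10))
samples = annealed_langevin(toy_score, shape=(1000, 2), sigmas=sigmas)
print(samples.mean(0), samples.std(0))   # should be close to N(0, I)
```

The last line inside the function is the "denoising jump" discussed above: one extra application of the score at the smallest noise level, moving each sample to its expected denoised value.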
This paper introduces an alternative to Langevin sampling and also the idea of adversarial score sampling. The reviewers are generally supportive of the paper. Pros: - The idea behind improving Langevin sampling is theoretically justified and leads to a simple algorithm. - The idea behind adversarial score matching is also shown to be effective - Improvement over baseline Cons: - Two ideas packed into one paper, which is reflected by the title as well. - From the narrative it could be thought that using EDS on the last step of CAS is the contribution of the paper.
This paper designs an equation, i.e., equation (5) in the paper, to measure the impact or contribution of each participant/agent in federated learning. The designed measurement method is applied to the attention aggregation algorithm of federated learning. A few experiments using Penn Treebank are conducted to support its claims. This paper should be rejected because (1) the paper is unpolished and thus is hard to read, (2) the novelty appears quite weak, and (3) the experiments are difficult to understand and generally do not support its contributions. Concerns: The paper is difficult to read due to the poor use of English. Many sentences are incomprehensible. Thus, it was often impossible for me to determine exactly what the authors would like to say or describe. Please have your submission proof-read for English writing style and grammar issues. Moreover, please treat the equations as parts of sentences and make sure that the caption format of figures obeys the ICLR format. I also have a serious concern about the novelty of this paper. If my understanding is correct (due to the aforementioned reason), Subsection 3.3 is the only new material proposed by the authors. However, the proposed equation, i.e., equation (5), seems like a design choice without any theoretical justification or intuitive reason, which significantly degrades the novelty of this paper. Finally, the experiments should be refined to support the main claims. As claimed in Section 1, the proposed measurement method is real-time and has low computational complexity. However, there is no experiment or quantitative comparison addressing the running time and complexity of the proposed method versus the Shapley value. Actually, the authors compared their method with a method that approximates the Shapley value instead of the exact Shapley value. Furthermore, please cite the Shapley value papers. <doc-sep>Summary: The paper proposes a new contribution measurement approach for federated learning. The basic idea is that the agent with a larger model update has a larger contribution. Specifically, based on FedAtt [1], the impact of a client is computed as the local update plus the impact of the previous round times a decay rate. The experiments on a dataset show that the proposed approach can produce a contribution measurement similar to the Shapley value. [1] Learning private neural language modeling with attentive aggregation. IJCNN 2019 Strengths: (1) The motivation of the paper is clear. (2) The studied area is important. Effective incentive mechanisms in federated learning are still an open challenge. Weaknesses: (1) The proposed idea lacks novelty and may not be applicable to general federated learning algorithms. The contribution of each client is simply evaluated by its local update in FedAtt. FedAtt is not a widely used federated learning algorithm currently. It is not clear whether the proposed approach is applicable to other standard federated learning algorithms such as FedAvg. Also, I do not understand why the paper focuses on FedAtt instead of FedAvg. (2) The paper lacks reasonable explanations for the proposed approach. A client may have arbitrarily bad data, and the locally updated model may be far from the globally optimal model. In such a case, since the distance between the local model and the global model is large, the contribution is also large according to the proposed approach, which is not reasonable. It is not clear how the proposed approach can handle such cases. (3) The experiments are weak and not clear.
a) It is not explained how the agent contribution rate is computed. b) The experiments are conducted on a single dataset. More datasets are needed. c) From Figure 2, it is hard to say that the proposed approach produces a measurement similar to SV. d) Since the motivation is to reduce the computation overhead, the authors should compare the computational complexity or the computation time of the proposed approach and SV. Minor issues: (1) The writing can be improved, e.g., “Such” -> “For example,” (2) Figure 1 is not referred to in the text. (3) Figures 2-5: the orange and blue colors are not explained. <doc-sep>The paper aims to measure each client's contribution to training the federated learning model. In particular, the contribution is measured by the distance between the local model and the global model in each iteration. The targeted problem is interesting, and the use of attention-based model divergence is also an interesting idea to measure the contribution. However, the paper lacks a strict theoretical discussion to prove that the proposed solution is a reasonable one rather than a heuristic method. Moreover, the experiments are too weak to support the claims. The paper's technical contribution and originality are also limited. Below are some detailed concerns. 1) The authors need to give a clear definition of the assumed application scenario so that the problems below can be avoided or solved. If the client's contribution is linked to rewards, it is unavoidable that some clients will produce fake data to gain more contribution in the commercial federation system. Therefore, the paper should discuss the prevention of "attacking by fake data". For example, if a client randomly shuffles the indices of neurons in the trained local model w_k, then the client's local model will get a bigger s_k^l as calculated by equation 2. Thus, this client is likely to gain a big reward at every iteration. According to equation 5, the contribution at the early stage will be discounted. This is unfair to the clients selected at an early stage. Therefore, from a systemic perspective, some clients may refuse to contribute to the training process at an early stage. 2) The contribution is not enough. The core method comes from the FedAtt algorithm - an attention-based federated aggregation method. The paper's primary contribution lies in Section 3.3, which measures the contribution according to the gradients. 3) The experiments are too weak to support the claims. More datasets and baseline methods are required, for example, FEMNIST and FeCeleba. It is unclear how to define an objective metric to measure the quality of the proposed method. The contribution is a subjective feeling that varies across tasks and assessors.<doc-sep>The paper proposes a low computational complexity method for weighting the contributions of clients in a federated learning setting. The main contributions are to compare the weighting method with Shapley values and to study their sensitivity to low data volume and quality. The paper is based on the FedAtt paper, which calculates weights based on the Euclidean distance between the server model and each client model, for each layer. The experimental setup is well described, including details about the hardware, software, datasets, model, and evaluation criteria. However, the model is only specified as a "smaller GRU-based model" without giving any details of what that model is.
They do not clearly describe some parameters of the approximation of the Shapley value calculation, reducing the value of the comparison between FedAtt and Shapley values. They could also have taken additional steps to improve confidence in the claims; e.g., only one dataset was used, which is relatively weak compared to the original FedAtt paper. The graphs in the results section could be described in more detail to explain what, e.g., the colors of the "special agents" mean. Also, there are no confidence measures specified, making it hard to evaluate the claims' validity. The references include essential papers but are missing some core references, such as federated learning and Shapley values themselves. Also, related papers such as "Active Federated Learning" by Goetz et al. talk about very similar ideas but lack any mention in the paper. The language and grammar could be improved, and some of the formulations make it hard to read. The comparison to Shapley values is also not motivated in any detail, further reducing the value of the paper's contributions.
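To make the reviewers' reading of the contribution measure concrete, here is a small sketch in which a client's per-round impact is its layer-wise distance to the global model, accumulated with a decay over rounds; equation (5) in the paper may differ in its exact form.

```python
# Per-round "impact" = normalized layer-wise distance between each client's local
# model and the global model, accumulated across rounds with a decay factor.
import numpy as np

def round_impact(global_model, client_models):
    # global_model / each client model: list of per-layer weight arrays
    dists = np.array([
        sum(np.linalg.norm(gw - cw) for gw, cw in zip(global_model, client))
        for client in client_models
    ])
    return dists / dists.sum()          # attention-like normalized weights

def accumulate_contribution(prev, current_round, decay=0.9):
    return current_round + decay * prev

rng = np.random.default_rng(0)
global_model = [rng.normal(size=(8, 8)), rng.normal(size=8)]
# Three toy clients whose updates deviate from the global model by increasing amounts.
clients = [[w + rng.normal(scale=s, size=w.shape) for w in global_model] for s in (0.1, 0.5, 1.0)]
contrib = np.zeros(len(clients))
for _ in range(3):                      # a few federated rounds with the same toy updates
    contrib = accumulate_contribution(contrib, round_impact(global_model, clients))
print("relative contributions:", contrib / contrib.sum())
```

Note how the client with the largest (noisiest) deviation receives the largest "contribution", which makes the concern raised above concrete: a client with arbitrarily bad or shuffled weights would be rewarded the most under this reading.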
Although this paper tackles an important problem, all reviewers agree that it requires further work before it can be published. First, the paper would need to be polished in order to be easier to read. Stronger experiments would also be needed in order to support the claims of the paper, e.g. by considering additional datasets and proper baselines. Finally, an important concern about this paper is novelty and originality. It is not clear at this point that the contribution is substantial enough for a conference like ICLR. Addressing these points would significantly improve the paper.
This paper proposes another variant of phrase-based MT for African languages, involving native speakers for manual annotations. Instead of just using subwords or statistical phrase identification, the authors propose to use the intuition of native speakers for translating African Fon languages into French (and vice versa). According to their experiments, BLEU and other metrics significantly improved over standard IBM-1 phrase-based machine translation. However, from the description and examples in this paper, I have some doubts about this improvement: - For creating the aligned corpus, the authors say that they chose only short expressions, namely 1-6 words. According to the results shown in Table 1, this essentially amounts to simply memorizing frequent idiomatic phrases. Therefore, improvements with this kind of human intervention over such easy sentences are basically trivial. Of course, the paper says that the test data comprises long and complex sentences, but the examples shown are not, so I cannot tell whether the problem is really difficult or not. - Even if the proposed human annotation is effective, it does not seem to leverage the characteristic properties of African languages. In Section 3, "un d'o ganji" has an ambiguity about "un", but this kind of word-level ambiguity is shared by almost all other languages (imagine translating "given" in a conditional proposition). The properties of African Fon languages, such as diacritics and affixation, are not used here. Finally, the proposed annotation algorithm on page 3 seems quite vague to me. Where does v come from? If w is a word, what is the meaning of "w \subseteq v"? Also, this algorithm seems to use a simple longest match; however, in many cases the usage of a word only becomes clear from the succeeding words, i.e., some forward-backward algorithm is necessary for correct identification of a phrase. That being said, I strongly agree with the authors that neural machine translation of African low-resourced languages is important. I hope that the authors will add more persuasive results and analysis to realize a practical translation of Fon languages. <doc-sep>The authors investigate different tokenization methods for the translation between French and Fon (an African low-resource language). This means that they compare different ways to construct the input and output vocabularies of a neural machine translation (NMT) system. They further propose their own way to create those units, based on phrases, which is called WEB. The NMT system the authors use follows Bahdanau et al. (2015): it is a GRU sequence-to-sequence model with attention. The dataset they use has been created and cleaned by bilingual speakers and consists of roughly 25k examples (this is a really small dataset for NMT, so the authors are taking on a really hard task!). WEB works in the following way: after phrases have been found automatically, bilingual speakers analyze which are the longest phrases that correspond to translated phrases in the other language. Only the longest phrases for each example are kept for the final vocabulary. The authors show that WEB improves the performance in both translation directions by a lot on all metrics, clearly showing that the work they invest into creating the vocabulary pays off. Thus, I think this work is important to be able to provide speakers of Fon with a functioning translation system. However, I am unsure if this work is suitable for a machine learning conference.
While the overall goal of this work is to create an NMT system, the main contribution is the manual cleaning of the dataset and the semi-manual creation of the vocabularies. I would recommend that the authors submit this paper to a conference with a stronger focus on NLP and NLP resources (maybe LREC). I further want to emphasize that I think work like this paper is incredibly important and the authors shouldn't feel discouraged. Importantly, the manual labor needed for WEB was substantial, and it obviously helps NMT. I just don't think that this paper is a good fit for ICLR. Minor point: did the creation of WEB have access to the test data? If so, the authors should change that (or collect new test data) to ensure a fair evaluation. <doc-sep>Edit after seeing the other reviews -- I think I gave this paper a MUCH higher score than the other reviewers, simply because working with the Fon language is very novel. I agree with all of your points about what is lacking, but in my mind, the novelty was enough to still give a 7. Now I definitely think that is too high. I think this paper can reasonably be rejected, but I'd like to give actionable, constructive criticism, since I do think the work on this low-resource language is important for the NLP community. With such low resources, we cannot expect the same type of work as we would for other languages. Overview: This paper discusses the problems of common tokenization strategies for low-resource African languages, and proposes a new tokenization method to overcome these problems. They train low-resource NMT systems using 4 different tokenization strategies, to show that their proposed tokenization method leads to the best NMT results by several metrics. Contribution: The authors contribute a new tokenization method, code, and a dataset. The good: Very interesting and important work! Many people will be excited to use this data. The paper is mostly clearly written, and easy to read. The paper flows well. Someone with this paper could reproduce the work, more or less. The bad: * Figure 1 is difficult to read and messy. First, by "Input" you actually mean "Source". The input would be the source sentence with its appropriate tokenization, no? Also, I think putting the English translation in a different font or color would be greatly helpful to our eyes. I really think this must be fixed! Figure 1 is presently not pleasant to look at, even though it has interesting results! * Section 4 - I think you really need to re-state that the algorithm has a human-in-the-loop for clarity. Before you describe your algorithm, humans are mentioned only once. Indeed, at first, the words "The following algorithm" confused me, because I thought it was more a "methodology", since Step 2 is where the humans are in the loop, unless you have a Fon POS tagger and I am misunderstanding? But then at the end, I saw you include Encode as step 4, so it is the machine... The fact that I stumbled a bit here, was confused, and had to spend a few minutes thinking about it, means it needs a bit of tweaking. Maybe add a comment saying Step 2 is the human-in-the-loop step of the algorithm? Suggested additions: * I think more specific linguistic details about Fon are missing. For example, if you could give us one or two sentences of Fon in the beginning of the paper, that demonstrate some of the difficulties of the language, I think this would greatly strengthen the motivation.
You *tell* us that Fon is "a language with special tokenization needs" and that "standard tokenization methods do not always adequately deal with the grammatical, diacritical, and tonal properties of some African language", and you cite the relevant papers. But I would still like to be *shown*. I think just including two sentences that have some of these features, and that get across the point of "how would we tokenize this?", would really help the motivation. It's not that I/readers don't believe you when we are *told*, but being *shown* makes it much more interesting and gives people an appreciation for Fon tokenization challenges! * Can we get any information about how the annotators were trained? I think this is standard for such papers. Other smaller suggested fixes: * Section 5, near the end - Little grammatical mistake. "... bunch of those errors has" should be "errors have". * Section 6.3 - Please change "The results from Table 2 and Table 1" to say "Table 1 and Table 2". It does not make sense to list them out of order. I also think it makes sense to switch Figure 1 and Figure 2 entirely. I.e., Figure 1 should be your results table, and Figure 2 should be the examples for us to see. * Section 6.3 - Slightly confusing wording. The second sentence is confusing to me, and I am a native English speaker. "It is important to note that while BLEU of other methods reduced on the Fr→Fon task, WB improved on it." To me saying "BLEU reduced for the other methods" means that you have some other baseline you are comparing to. Am I missing something? Are you comparing against Fon→Fr? Questions: * Section 6.2 - Does it really take all 500 epochs to run, or do you have early stopping at some point when the loss flatlines? * Because BPE is such a standard baseline, why do you not include it as a baseline? I know you cite the Abbott & Martinus, 2018 paper, stating that BPE is bad for analytical languages, but I still think it would prove a point to show BPE performing badly for your data. Overall: Very interesting work, and can't wait to see this data be used :-) I think the paper could be greatly strengthened by taking some time to include an example that demonstrates the linguistic and typological features of Fon that make it difficult.
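To make the "simple longest match" concern raised in these reviews concrete, here is a minimal sketch of a greedy longest-match phrase tokenizer (an illustration of plain longest match only, not the authors' WEB procedure; the toy lexicon and function name are hypothetical):

```python
def longest_match_tokenize(words, lexicon, max_len=6):
    """Greedily segment a word sequence into the longest phrases found in the lexicon."""
    tokens, i = [], 0
    while i < len(words):
        match = None
        # try the longest window first, shrinking until a lexicon phrase is found
        for j in range(min(i + max_len, len(words)), i, -1):
            candidate = " ".join(words[i:j])
            if candidate in lexicon:
                match, i = candidate, j
                break
        if match is None:      # fall back to a single word
            match, i = words[i], i + 1
        tokens.append(match)
    return tokens

# toy example with a hypothetical lexicon
lexicon = {"un d'o ganji", "d'o ganji"}
print(longest_match_tokenize("un d'o ganji lo".split(), lexicon))
# -> ["un d'o ganji", 'lo']
```

As the first reviewer notes, such a left-to-right greedy pass commits to a phrase before seeing the following words, which is why a forward-backward (lattice-style) search may be needed for correct phrase identification.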
The authors investigate different tokenization methods for the translation between French and Fon (an African low-resource language). Low-resource machine translation is a very important topic and it is great to see work on African languages - we need more of this! Unfortunately, the reviewers unanimously agree that this work might be better suited for a different conference, for example LREC, since the machine learning contributions are small. The AC encourages the authors to consider submitting this work to LREC or a similar conference.
In the paper, Rotograd is proposed as a new gradient-based approach for training multi-task deep neural networks based on GradNorm. GradNorm is first formulated as a Stackelberg game, where the leader aims at normalizing the gradients of the different tasks and the follower aims at optimizing the collective weighted loss objective. Under this formulation, one can utilize theoretical guarantees of the Stackelberg game by making the leader have a learning rate that decays to zero faster than the follower's. To further account for the different gradient directions, a learnable rotation and translation are applied to the representation of each task, such that the transformed representation matches that of single-task learning. By adding an additional term accounting for learning this rotation, the leader in the Stackelberg game will minimize the loss to homogenize the gradient magnitudes and match the representation to single-task learning as closely as possible. In general, I find the direction of gradient homogenization for multi-task learning very important and interesting. The paper provides an interesting perspective through the Stackelberg game formulation, which provides a framework for selecting the learning rate of GradNorm-type gradient homogenization methods. The other contribution of the paper is a learnable task-specific rotation that aligns the task gradients with single-task learning. Proposing a learnable rotation matrix seems an interesting idea, although I am not sure whether it has been proposed previously for multi-task learning. I find the first contribution of formulating the problem as a Stackelberg game to be interesting and novel. However, in terms of the second contribution, I have some concerns about whether it makes the most sense to align the transformed representation with that of single-task learning. For MTL, one of the key benefits is learning a better representation by sharing it across different tasks to encourage helpful transfer between the tasks; by constraining the transformed representation to be as close as possible to the single-task learning representation, it might limit the transfer between tasks, since the representations are constrained to be equivalent to those learned by single-task learning. I think it is helpful to think about using rotation-invariant representations for aligning the gradient directions, but it is questionable to align them to those of single-task learning. Another major concern is about the experimental results: full experiments are only conducted on one real-world dataset. The experiment on the second dataset seems to be very preliminary, which might not be sufficient to justify the proposed method empirically. Also, on the second dataset, it seems the two different implementations of Rotograd have a large discrepancy in the results, which needs more investigation into why this happens. Meanwhile, many ablation studies seem to be missing. I am mostly interested to see experiments that validate the Stackelberg game formulation, for example by using different learning rates for the leader and the follower. Also, it would be interesting to see how the proposed Rotograd compares with pure GradNorm on gradient direction alignment. Overall, I feel the experiments are not complete enough to validate the effectiveness of the method. Some minor points: the description of the d-grad method seems to be missing. Also, Yu et al. [2020] deals with gradient alignment for MTL and could be considered as a baseline to compare with.
Yu, T., Kumar, S., Gupta, A., Levine, S., Hausman, K., & Finn, C. (2020). Gradient surgery for multi-task learning. arXiv preprint arXiv:2001.06782. --------After authors' response---------- I am not fully convinced by the explanation of the motivation behind the rotation matrix, in particular why it aligns with single-task learning, which is counterintuitive. The authors provided more ablation studies; however, the evaluation on datasets is still quite preliminary, with some questions remaining (such as why there is a discrepancy between the two versions of Rotograd on the second dataset). Therefore I am keeping my original score. <doc-sep>This paper presents an extension of GradNorm to address task conflict due to discordant gradient directions. Specifically, it introduces a rotation matrix to rotate the hidden representation from the last shared layer. The authors put the proposed method in the context of game theory to show stability and convergence of the training, which might be of merit. The writing of the paper doesn't meet the publication standard and needs major work to improve. There are many typos and awkward sentences, hindering understanding of their work. Also, there are many places that need clarification. For example, in Proposition 4.1, the inverse of the gradient of Z with respect to \\theta needs to be calculated. So, what is the shape of this gradient matrix? How is it necessarily a square matrix? What does ||\\nabla_{\\theta} Z|| represent -- the Frobenius norm? There is a lack of adequate explanation of the motivation behind the objective in Eq. (6). From reading the paper, I have no idea about the two oracle functions, and why they are defined in the way shown in Eq. (8). Eq. (3) is inaccurate, not aligning with what is proposed in the GradNorm paper for the computation of L_{grad}^k. Eq. (9) is problematic. Why does R_k z_i^t not appear in the objective function of the first optimization problem? If this is because z_i^{k,t} = R_k z_i^t + d_k, then the objective in the second optimization problem would be just 0. Why operating on z instead of the gradient in GradNorm can resolve the discordant gradient issue among tasks is not properly justified. The reported empirical results are weak and do not support the claim that the method works as stated. <doc-sep>Summary: This paper proposes an MTL method that encourages the gradients on shared parameters to have similar directions across different tasks. The motivation is to reduce conflicts between gradients of different tasks, so that training can proceed more smoothly, and fit multiple tasks more easily. The paper introduces a new way of thinking about this kind of method, i.e., through the lens of Stackelberg games, which could be useful in reasoning about the convergence of such methods. The method is shown to perform favorably against related methods, especially in regression settings. Strong points: Minimizing gradient conflict is a well-motivated way to reduce negative transfer. The algorithm description is detailed, and should be straightforward for others to implement. Stackelberg games are an interesting framework for thinking about methods like GradNorm and Rotograd that adaptively guide MTL training. Weak points: The theory is interesting at a high level, but it is not clear that it provides insight into what makes Rotograd work. In the paper, one main takeaway from the Stackelberg games framework is that the methods converge if the leader's learning rate is asymptotically smaller than the follower's.
This takeaway is implemented by decaying the leader's learning rate, but it is not shown that this is a key point required for Rotograd to work. I would not be surprised if the results were unaffected if this decay were removed. If this point is really important, it should be illustrated in ablation studies. More broadly, since the point does not only apply to Rotograd, this ablation could also be done on GradNorm and other methods. Such ablations would be one way to connect the theory to the methods. Another main takeaway from the theory is that the rotation matrices and translation vectors should be updated with gradient descent, instead of simply replacing them each step. Intuitively, the algorithm would still make sense and be simpler if R and d were simply replaced. Experiments showing that the gradient-descent update rule is necessary would help show the value of the theory. Similarly, the value of Proposition 4.1 is not clear. Is it to prove stability? Does this have some particular connection to Rotograd, or is it a useful fact about hard parameter-sharing methods in general? There is one ablation “rotograd-sgd”, but it is not clear how exactly it works: Can it simply update R and d however it wants, or is Eq. 9 still used to regularize the updates in some way? By adding the rotation matrices, it's possible that information that would be useful to share across tasks is instead stored in these task-specific matrices. That is, conflict between tasks can beneficially lead to more general representations. Restricting R to be a rotation instead of an arbitrary matrix is one step towards limiting the amount of information leakage into task-specific parameters. Is there a conceptual reason to expect that the benefits from reducing conflicts will outweigh this leakage? The experiments are on an intentionally very small architecture, where one of the main issues is expressivity, which gives Rotograd an edge over methods that do not include an additional task-specific matrix. In Section 5.1, does the method without Rotograd do poorly because there are no task-specific networks in that case? Although Rotograd is motivated to reduce negative transfer, Table 1 shows that Rotograd does not reduce negative transfer, but rather improves positive transfer. That is, uniform does better than Rotograd on the tasks where single-task is better than multi-task, but Rotograd does better than uniform on the tasks where uniform is already better than single-task. This makes me think that the benefits of Rotograd are not coming from reducing negative transfer, but from somewhere else. Is there an explanation for why Rotograd does not work as well for multi-class classification tasks (i.e., performs worse than all other methods for Left and Right)? Is it because the task-specific heads have larger output sizes? E.g., could it be better to have a separate rotation matrix for each class? Figure 4 in A.3 confirms that there is an issue here: the cosine similarity is not higher for Rotograd on the classification tasks. Overall, from the limited scope of the experiments it is not clear that Rotograd would provide practical advantages over competing methods. The ChestXray experiments show that although Rotograd does not hurt much, it does not help overall compared to uniform. That said, it would still be interesting to see whether insights from Stackelberg games could lead to practical improvements for this problem. Minor comments: The writing has some issues.
These issues don't make the work unclear, but they are a bit distracting. Some example suggestions for fixing distracting word choice: “palliate” -> “alleviate”, “spoiled” -> “noted”, “we have not being able to propose Rotograd, but also to derive” -> “we have proposed Rotograd, and derived”. There is also frequent non-standard mixing of em dashes with spaces and commas. “$[r_k(t)]^\\alpha$ is a hyperparameter” -> “$\\alpha$ is a hyperparameter”. The hyperparameter is \\alpha, correct? ---- Update: I am very happy to see the new experiments that validate the implications of the Stackelberg games theory. The main drawback of the paper is that it is not clear that direction homogenization could lead to practical improvements for multi-task learning. The additional experiments in Table 2 are useful, and suggest that much of the benefit comes from the greater expressivity due to task-specific matrices.
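To make the task-specific rotation and the leader/follower timescales discussed in these reviews concrete, here is a minimal PyTorch sketch (my own illustration under the assumptions stated above, not the authors' implementation; parametrizing the rotation as the matrix exponential of a skew-symmetric matrix is just one convenient way to keep it orthogonal):

```python
import torch
import torch.nn as nn

d, n_tasks = 16, 2
encoder = nn.Sequential(nn.Linear(8, d), nn.ReLU())                    # shared trunk
heads = nn.ModuleList([nn.Linear(d, 1) for _ in range(n_tasks)])       # task heads

# task-specific rotation R_k (kept orthogonal via the matrix exponential) and translation d_k
skews = nn.ParameterList([nn.Parameter(torch.zeros(d, d)) for _ in range(n_tasks)])
shifts = nn.ParameterList([nn.Parameter(torch.zeros(d)) for _ in range(n_tasks)])

def rotate(z, k):
    A = skews[k] - skews[k].T            # skew-symmetric matrix
    R = torch.matrix_exp(A)              # orthogonal rotation
    return z @ R.T + shifts[k]

# "follower" updates the network, "leader" updates the rotations/translations,
# with the leader's learning rate decaying faster, as discussed above.
follower = torch.optim.SGD(list(encoder.parameters()) + list(heads.parameters()), lr=1e-2)
leader = torch.optim.SGD(list(skews.parameters()) + list(shifts.parameters()), lr=1e-3)
leader_sched = torch.optim.lr_scheduler.ExponentialLR(leader, gamma=0.95)

x = torch.randn(32, 8)
targets = [torch.randn(32, 1) for _ in range(n_tasks)]
for step in range(100):
    z = encoder(x)
    loss = sum(nn.functional.mse_loss(heads[k](rotate(z, k)), targets[k])
               for k in range(n_tasks))
    follower.zero_grad(); leader.zero_grad()
    loss.backward()
    follower.step(); leader.step(); leader_sched.step()
```

This only illustrates the structure (shared representation, per-task rotation and translation, two optimizers on different timescales); it omits the GradNorm-style magnitude balancing and the alignment term toward single-task representations, so it is not a faithful reproduction of Rotograd.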
The paper proposes a novel formulation of GradNorm. GradNorm is presented as a Stackelberg game, and its theory is used to understand and improve the convergence of GradNorm. Moreover, in addition to the magnitude normalization, a direction normalization objective is added to the leader, and a rotation matrix and a translation are used for this alignment. The paper is reviewed by three knowledgeable reviewers and they unanimously agree on rejection. Here are the major issues raised by the reviewers and the area chair: - The motivation behind the rotation matrix layers is not clear. It should be motivated in more detail and explained better with additional illustrations and analyses. - The empirical study is weak. More state-of-the-art MTL approaches and more realistic datasets should be included. - The proposed method is not properly explained with respect to existing methods. There are MTL methods beyond GradNorm, like PCGrad and MGDA (MTL as MOO), that also fix directions. Hence, it is not clear what the relationship of the proposed method is with these ones. I strongly recommend that the authors improve their paper by fixing these major issues and submit to the next venue.
This paper studies the effect of anisotropic noise in stochastic optimization algorithms. The goal is to show that SGD escapes from sharp minima due to such noise. The paper provides preliminary empirical results using different kinds of noise to suggest that anisotropic noise is effective for the generalization of deep networks. Detailed comments: 1. I have concerns about the novelty of the paper: It builds heavily upon previous work on modeling SGD as a stochastic differential equation to understand its noise characteristics. The theoretical development of this manuscript is straightforward and relies on simplistic assumptions such as the Ornstein-Uhlenbeck process (which amounts to a local analysis of SGD near a critical point) and a neural network with one hidden layer. Similar results have also appeared in the literature before in a number of places, e.g., https://arxiv.org/abs/1704.04289 and references therein. 2. Proposition 4 looks incorrect. If the neural network is non-convex, how can the positive semi-definite Fisher information matrix F sandwich the Hessian, which may have strictly negative eigenvalues at places? 3. Section 5 contains toy experiments on a 2D problem, a one-layer neural network and a 1000-image subset of the FashionMNIST dataset. It is hard to validate the claims of the paper using these experiments; they need to be more thorough. The Appendix contains highly preliminary experiments on CIFAR-10 using VGG-11. 4. A rigorous theoretical understanding of SGD with isotropic noise, and of the convergence properties of Langevin dynamics, has been developed in the literature previously; it'd be beneficial to analyze SGD with anisotropic noise in a similar vein.<doc-sep>The authors studied the effect of the anisotropic noise of SGD on the algorithm's ability to escape from local optima. To this end, the authors start from the established approximation of SGD in the vicinity of an optimum as a continuous-time Ornstein-Uhlenbeck process. Furthermore, the authors argue that in certain deep learning models, the anisotropic noise indeed leads to a good escape from local optima. Proposition 3 (2) seems to assume that the eigenvectors of the noise covariance of SGD are aligned with the eigenvectors of the Hessian. Did I understand this correctly, and is this sufficient? Maybe this is actually not even necessary, since the stationary distribution for the multivariate Ornstein-Uhlenbeck process can always be calculated (Gardiner; Mandt, Hoffman, and Blei 2015–2017). I think this is a decent contribution. <doc-sep>The paper studies the benefit of an anisotropic gradient covariance matrix in SGD optimization for training deep networks in terms of escaping sharp minima (which has been discussed to correlate with poor generalization in recent literature). In order to do so, SGD is studied as a discrete approximation of a stochastic differential equation (SDE). To analyze the benefit of the anisotropic nature and remove the confounding effect of the scale of noise, the scale of noise in the SDE is considered fixed during the analysis. The authors identify the expected loss around a minimum as the efficiency of escaping the minimum and show its relation with the Hessian and gradient covariance at the minimum. It is then shown that when all the positive eigenvalues of the covariance matrix concentrate along the top eigenvector and this eigenvector is aligned with the top eigenvector of the Hessian of the loss w.r.t. the parameters, SGD is most efficient at escaping sharp minima.
These characteristics are analytically shown to hold true for a one-hidden-layer network, and experiments are conducted on toy and real datasets to verify the theoretical predictions. Comments: I find the main claim of the paper intuitive-- at any particular minimum, if the noise in SGD is more aligned with a direction along which the loss surface has large curvature (thus the minimum is sharp along this direction), SGD will escape this minimum more efficiently. On the other hand, isotropic noise will be wasteful because a sample from an isotropic noise distribution may point along flat directions of the loss even though there may exist other directions along which the loss curvature is large. However, I have several concerns which I find difficult to point out because *many equations are not numbered*. 1. In Proposition 2, it is assumed, under the argument of no loss of generality, that both the loss at the minimum L_0 = 0 and the corresponding theta_0 = 0. Can the authors clarify how both can be simultaneously true without any loss of generality? 2. A number of steps in Proposition 2 are missing, which makes it difficult to verify. When applying Ito's lemma and taking the integral from 0 to t, it is not mentioned that both sides are also multiplied with the inverse of exp(Ht). 3. In Proposition 2, when computing E[L(theta_t)] on page 12, it is not clear how the equalities after line 3 are derived. Please clarify or update the proof with sufficient details. 4. It is mentioned below Proposition 2 that the maximum of Tr(H Sigma) under constraint (6) is achieved when Sigma* = Tr(Sigma) lambda_1 u1 u1^T, where lambda_1 is the top eigenvalue of H. How is lambda_1 a factor in Sigma*? I think Sigma* should be Tr(Sigma) u1 u1^T, because this way the sum of eigenvalues of Sigma remains unchanged, which is what constraint (6) states (see the short derivation sketched after this review). 5. The proof of Proposition 5 is highly unclear. Where did the inequality ||g_0(theta)||^2 <= delta u^T F u + o(|delta|) come from? Also, the inequality right below it involves the assumption that u^T g_0 g_0^T u <= ||g_0||^2, and no justification has been provided behind this assumption. Regarding experiments, the toy experiment in Section 5.1 is interesting, but it is not mentioned what network architecture is used in this experiment. I found the experiments in Section 5.3, and specifically Fig 4 and Fig 7, insightful. I do have a concern regarding this experiment though. In the experiment on FashionMNIST in Fig 4, it can be seen that both SGD and GLD 1st eigvec escape the sharp minimum, and this is coherent with the theory. However, for the experiment on CIFAR-10 in Fig 7, the experiment with GLD 1st eigvec is missing. Can the authors show the result for GLD 1st eigvec on CIFAR-10? I think it is an important verification of the theory, and CIFAR-10 is a more realistic dataset compared with FashionMNIST. A few minor points: 1. In the last paragraph of page 3, it is mentioned that the probability of escaping can be controlled by the expected loss around the minimum due to Markov's inequality. This statement is inaccurate. A large expected loss upper bounds the escaping probability; it does not control it. 2. Section 4 is titled "The anisotropic noise of SGD in deep networks", but the section analyzes a one-hidden-layer network. This seems inappropriate. 3. In the conclusion section, it is mentioned that the theory in the paper unifies various existing optimization methods. Please clarify.
Overall, I found the argument of the paper somewhat interesting but I am not fully convinced because of the concerns mentioned above.
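Regarding point 4 above, the maximizer of Tr(H Sigma) under the fixed-trace constraint can be written out in a few lines (a standard argument, included here only to support the reviewer's correction; $\\lambda_i, u_i$ denote the eigenvalues and eigenvectors of the symmetric matrix $H$, ordered so that $\\lambda_1$ is the largest):

$$\\mathrm{Tr}(H\\Sigma) = \\sum_i \\lambda_i\\, u_i^\\top \\Sigma\\, u_i \\;\\le\\; \\lambda_1 \\sum_i u_i^\\top \\Sigma\\, u_i \\;=\\; \\lambda_1\\, \\mathrm{Tr}(\\Sigma),$$

using $\\Sigma \\succeq 0$, with equality attained at $\\Sigma^* = \\mathrm{Tr}(\\Sigma)\\, u_1 u_1^\\top$. Including an extra factor of $\\lambda_1$ in $\\Sigma^*$ would change the trace, so the reviewer's version is the correct maximizer.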
The reviewers point out concerns regarding the paper's novelty, theoretical soundness, and empirical strength. The authors provided clarifications to the reviewers.
This paper presents a deeply supervised few-shot learning model via ensembling, achieving state-of-the-art performance on mini-ImageNet and tieredImageNet. The authors first studied the classification accuracy on mini-ImageNet across convolutional layers and found the network could perform well even at the middle layers. Therefore, they added classification heads on the selected layers, so that these layers can directly output predictions. The final result is the ensemble of all the selected layers' predictions, called the Multiple Representation Ensemble. To improve the result, they further average the results of two models with different network backbones, called the Multi-Model Ensemble. The results show this method can achieve state-of-the-art performance on the two datasets. Advantages: 1. The motivation and idea in this paper are clear and simple, so they are easy for the reader to understand. 2. Figures 2 and 3 are nice and clearly demonstrate the motivation and algorithm. 3. The finding in Figure 2(a) is very interesting. The middle layer has a better representation than the final layer on the few-shot image classification task. 4. The results are positive. Disadvantages: 1. The idea in the paper is not very novel. The main contribution of this paper is doing a deep-supervision ensemble. However, people have studied deeply supervised learning for a while on image classification [1], segmentation [2], and depth estimation [3]. Specifically, [2] and [3] also fuse the multi-layer outputs. 2. The authors only show the ensemble results obtained by averaging scores over the models. It would be good to study more ensemble methods. For example, the deep layer has higher accuracy than the shallow layer. Is it possible to assign a different ensemble weight to each layer based on its accuracy? 3. In Figure 2(a), why does the middle layer perform better than the last layer? It would be good to show some analysis. 4. In Table 1, since the proposed model has done a model ensemble, it cannot be directly compared with CAN and CTM. The result without the ensemble should be added to Table 1. If I put the third-row result "64.03" from Table 2 into Table 1, the improvement would be marginal. 5. Both mini-ImageNet and tiered-ImageNet are subsets of ImageNet. To verify the generalization, it would be good to add CIFAR, meta-iNat [4], or CUB [5] results. Minor mistakes: 1. Equation 1: should add the superscript `n` to r. 2. Figure 1: the characters are not evenly spaced. 3. Figure 2(a): the axis labels are too small. 4. In Section 4.1, the sentence "The model can be pre-trained ......Dtrain or Dval.)" is redundant, as it is common sense. 5. In Section 5.1, "After pre-training, we added shift and scaling parameters for the convolutional layers in the encoder and trained the parameters by the MTL approach used in". More details about the shift and scale could be added, so that the reader does not have to read another paper. 6. Table 1: the standard deviations in "our" results are not aligned. ----- post rebuttal ---- The authors haven't addressed my questions. I will keep my score unchanged. One more comment: I suggest the authors compare to a related baseline, SimpleShot [6], that is arguably less complicated. Overall, given that the novelty and improvement are minor, I think this paper might not be ready at this time. [1] Lee, Chen-Yu, et al. "Deeply-supervised nets." Artificial Intelligence and Statistics. 2015. [2] Xie, Saining, and Zhuowen Tu. "Holistically-nested edge detection." Proceedings of the IEEE International Conference on Computer Vision. 2015.
[3] Chang, Jia-Ren, and Yong-Sheng Chen. "Pyramid stereo matching network." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018. [4] Wertheimer, Davis, and Bharath Hariharan. "Few-shot learning with localization in realistic settings." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019. [5] Wah, Catherine, et al. "The Caltech-UCSD Birds-200-2011 dataset." (2011). [6] Wang, Yan, et al. "SimpleShot: Revisiting nearest-neighbor classification for few-shot learning." arXiv preprint arXiv:1911.04623 (2019). <doc-sep>Summary: The authors propose to tackle the problem of few-shot learning (FSL) by ensembling diverse classifiers. The diverse classifiers are obtained using the outputs from different intermediate layers of a pre-trained CNN feature extractor (or multiple CNNs). As a result, the authors demonstrate state-of-the-art accuracy on the mini-ImageNet and tiered-ImageNet datasets. Pros: - The idea totally makes sense since, in few-shot learning, the test distribution may be quite different from the training one. Hence, employing lower-layer features that are more class-invariant must be helpful, even though the space of semantic concepts learned by earlier layers is probably not as rich as for the deeper layers. - The results on mini-ImageNet and tiered-ImageNet are impressive. The experimental section is informative and clear. - The paper is well written and is easy to follow. Cons: - Limited contribution. None of the ideas introduced in this paper is novel. For example, the idea of using ensemble methods for FSL was introduced in [1]. Then, the idea of aggregating information from intermediate layers of a feature extractor to build a richer classifier for FSL was introduced in [2]. The authors of [3] also used intermediate layers for better classification results. Basically, the contribution of the current work is to combine the ideas of [1] and [2] while using a different backbone network (a new ResNet18) and a different classifier (RelationNet). - I would call the need to manually select the layers from which to build classifiers a downside of the approach, since selecting all representations would lead to degraded performance. Overall, I like how the paper reads. However, the contribution of this work boils down to combining existing ideas and methods into a new pipeline, which I don't find sufficient for the ICLR acceptance standard. [1] - Dvornik et al. "Diversity with cooperation: Ensemble methods for few-shot classification" [2] - Dvornik et al. "Selecting Relevant Features from a Multi-domain Representation for Few-shot Classification" [3] - Rusu et al. "Meta-Learning with Latent Embedding Optimization"<doc-sep>Thanks to the authors for providing such an ensemble approach. This paper aims to find a way to directly utilize representations with the classification layer(s) to obtain better performance. The ensemble method is able to create an ensemble of classifiers, and the ensemble achieves new state-of-the-art results in the few-shot setting, compared to previous regular and ensemble approaches. This topic is very straightforward and would be very easy for the audience to understand. However, the results might not be convincing enough. The biggest concern is the contribution of this paper; to be more specific, the proposed method might not be useful and might need to be tuned for other few-shot settings.
The mini-ImageNet and tiered-ImageNet results are good, but the authors could provide more evidence to show the method's strength and how to balance computation and model performance. The experimental setup is good and reproducible. However, digging deeper, the reason for the ensemble is that we want a way to compute features through different classifiers; maybe this is because a single classifier is not able to learn all the features from the images at once. But why is it necessary to use this approach (it also needs pre-training) instead of using a more powerful network to achieve a similar performance? It is really good to see the analyses on Single Encoder Multiple Representation, Multiple Encoder Multiple Representation, and Selection of Encoders and Representations for the Ensemble. It would be helpful if the authors could give a detailed interpretation of the selected layers and how they could be used in other settings. This paper is well written, with not many typos. The topic is inspiring and interesting, but it is not clear how the ensemble helps FSL tasks. The improvement is not obvious and the results are not sufficient. Also, it would be better if the authors could provide more analysis about why this ensemble works. It would also be better if the authors could give an analysis of the hyper-parameters of the proposed method. For example, in 5.3 Selection of Encoders and Representations for the Ensemble, τ = 0.93; but how does the model perform when τ is different, and how could we find an appropriate τ when building the ensemble? The authors should provide enough support to justify the validity of the method and why it is worth doing in comparison with other methods. Also, it is worth discussing other aspects such as FLOPs, parameter count, etc. <doc-sep>The authors propose a simple approach, which obtains results competitive with the state of the art in few-shot learning. However, I have the following concerns: - the proposed method is somewhat incremental. The authors propose to average the predictions of classifiers that take as input different features from the backbone. - While it's a sensible thing to try, in my understanding, the proposed method is equivalent to a simpler approach that would simply concatenate those features and learn a classifier on the concatenated features. I believe that this approach, and a number of other simple baselines employing a wider representation space extracted from the backbone, would be important to strengthen the analysis of the proposed method. - The presentation of state-of-the-art results is incomplete. Dvornik et al. also report results for tiered-ImageNet (which surpass the reported results). Some other relevant works would need to be cited and compared to [1, 2] (some of their results also surpass the reported results). - Organisation: method and results should be presented separately: the current flow of the paper alternates between empirical findings (motivation), a formal approach (methodology) and experimental results. This structure suggests that the submission would likely be better suited for a more technical venue. It would also be better to isolate in a background section the presentation of the baseline approach (Sung et al. 2018), before presenting the proposed method itself, to make it more evident what the contributions are.
- the authors do not motivate the chosen experimental setting (FSL): there is no analysis of why the proposed approach should be particularly well suited to address the specificities and challenges of this task. - it seems to me that employing so many linear classifiers (on increasingly larger-dimensional features) would lead to a large increase in parameter count - but the authors do not perform any analysis regarding this aspect. - Overall, a lot of polishing of the paper is needed prior to publication. Please find a few comments in that respect below. Comments: - Figure 2: what is ResNet-18 (in red) if it's not v1 or v2? On this note, both papers should be cited when they are introduced in Section 3 (only ResNet v1 is cited). - "Our ensemble contains multiple encoders (encoders of different network structures)." At this point, this is not very clear: is the method used on top of a traditional ensemble? - MULTI-MODEL MULTI-REPRESENTATION ENSEMBLE sounds tautological. - "abolition" should probably be "ablation". [1] Few-Shot Learning via Embedding Adaptation with Set-to-Set Function, Ye et al., CVPR 2020 [2] Adaptive Subspaces for Few-Shot Learning, Simon et al., CVPR 2020
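For context on the architecture these reviews discuss, a minimal sketch of the multi-representation ensemble idea (a classification head attached to several intermediate layers, with softmax predictions averaged) could look as follows. This is a generic illustration, not the authors' model; it assumes a recent torchvision and uses 5-way episodes only as an example:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class MultiRepresentationEnsemble(nn.Module):
    """Attach a linear head to several intermediate feature maps and average the softmax outputs."""
    def __init__(self, num_classes=5):
        super().__init__()
        backbone = resnet18(weights=None)
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1, backbone.relu,
                                  backbone.maxpool, backbone.layer1)
        self.blocks = nn.ModuleList([backbone.layer2, backbone.layer3, backbone.layer4])
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.heads = nn.ModuleList([nn.Linear(c, num_classes) for c in (128, 256, 512)])

    def forward(self, x):
        feats, h = [], self.stem(x)
        for block in self.blocks:
            h = block(h)
            feats.append(self.pool(h).flatten(1))
        probs = [head(f).softmax(dim=-1) for head, f in zip(self.heads, feats)]
        return torch.stack(probs).mean(dim=0)      # ensemble by averaging predictions

model = MultiRepresentationEnsemble()
print(model(torch.randn(2, 3, 84, 84)).shape)      # torch.Size([2, 5])
```

A reasonable ablation, in line with the reviewers' suggestions, would be to compare this averaging against simply concatenating the pooled features and training a single classifier.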
This paper introduces an ensemble method for few-shot learning. Although the introduced method yields competitive results, it is fair to say it is considerably more complicated than simpler existing algorithms and does not necessarily perform better. Given that ensembling for few-shot learning has been around for a while, it is not clear that this paper will have a significant audience at ICLR. Sorry about the bad news, AC.
Seems like the most direct way to estimate mutual information using a classifier. I like this work because it is much more straightforward than prior work such as MINE. It shows sufficient performance on the experiments shown.<doc-sep>This work suggests a new discriminative mutual information estimator that relies on a classifier to directly estimate the log density ratio log[p(x,y)/(p(x)p(y))], without a variational lower bound. In general, the idea is easy to follow and simple simulations are done to demonstrate its effectiveness. However, I still have some concerns: 1. A classifier-based MI estimator reminds me of a closely related problem: the independence test. For the latter, there are also a few recent proposals based on a classifier to distinguish p(x,y) from p(x)p(y). I understand the methodologies are different, but I still feel some motivations are similar. It would be better if the authors could clarify this point. [1] Lopez-Paz, David, and Maxime Oquab. "Revisiting classifier two-sample tests." ICLR 2017. [2] Sen, Rajat, Ananda Theertha Suresh, Karthikeyan Shanmugam, Alexandros G. Dimakis, and Sanjay Shakkottai. "Model-powered conditional independence test." NeurIPS 2017. 2. The authors discussed the theoretical optimum of their estimator as the number of samples approaches infinity. In the simulations, it seems that the number of training samples is also very large (e.g., 160k). What will happen in the case of a moderate or small number of samples? 3. For me, the simulation on the self-consistency tests does not demonstrate a big advantage of DEMI, especially considering that a few competitors are not included (e.g., GM mentioned in [Song and Ermon, 2019]). On the other hand, lots of work on variational MI (including this one) claims great potential for representation learning with either mutual information maximization or the information bottleneck. However, validation of this is totally missing. In this sense, it would be much better if the authors could provide a simple representation learning demo, just like [Hjelm et al., 2018]. What will happen if we replace MINE with DEMI? 4. It seems from Fig. 1 that the advantage of the estimator becomes more obvious with the increase of dimension. Can the authors provide some explanation or theoretical analysis? 5. It seems to me the work was prepared in a hurry. There are a few typos (e.g., the 6th line in the second paragraph of page 3: $\\hat{p}(x,y;\\hat{D})$ should be $\\hat{p}(y;\\hat{D})$). The clarity and placement of figures can be improved. <doc-sep>## Summary: This paper proposes DEMI, a discriminative approach to estimate mutual information (MI). The main idea is that, instead of learning (generative) distributions of the joint and marginals, the method learns a single likelihood ratio that is discriminative and hence more tractable: a posterior $p(z | x, y)$ trying to distinguish between the joint distribution $p(x, y)$ and the product distribution $p(x)p(y)$. Once the posterior is learned, it can be used to estimate the MI. ## Strength: - This paper studies a very important problem for the representation learning community -- mutual information has been a very powerful, principled technique for deep representation learning and many applications, but there are many challenges in scalable and accurate, low-variance estimation. Therefore developing an accurate MI estimator is of high importance and significance. - I find the idea of “lifting” the distribution and converting the MI estimation problem into a discriminative setting interesting, and it looks novel.
The method makes sense, and the training procedure is very simple, achieving better estimation than the baselines. - This paper is well placed and contains a comprehensive discussion of recent work on the limitations and research challenges of mutual information estimation. The mathematical connections to existing methods (MINE, InfoNCE, SMILE, etc.) provide an interesting insight. ## Weakness: - The method only discusses estimation of mutual information, not maximization of MI for representation learning. - The biggest weakness of this paper is the experiments. The training data used in the experiments is either low-dimensional or synthetic, so there remains a question about how well this method will scale to a high-dimensional, challenging deep learning setting. As in (Song & Ermon 2019), empirical analysis on the CIFAR-10 dataset would be needed --- if provided, my rating would increase. - Bias and tradeoff analysis (similar to Song & Ermon 2019) is missing. ## Question: - The hyperparameter $\\alpha$ is said to be set to 1.0 (Section 3), which does not seem feasible based on Equations (5) and (7). Was it meant to be 0.5? Can the authors clarify this? Also, I am curious how sensitive DEMI is to the choice of the prior hyperparameter $\\alpha$. This would be a good analysis to have for the completeness of the paper. ## Additional comments: - Section organization: I suggest having an introduction as a separate section, with methods being the following section. Sections 3 (experiments) and 4 (results) can be combined. For Section 2 (Related work), a different name could be considered because the main content here additionally includes a theoretical connection to existing approaches, which is in fact an important contribution of the paper. - The plots in Figure 2 are not properly scaled, with too many lines overlapping one another. I suggest the authors improve the plot for better readability. - Typo in Section 4.2: overalll. - Please place a whitespace after the colon (DEMI:) in the title.<doc-sep>This paper proposes a discriminative estimator for mutual information, to alleviate the shortcomings of existing estimators such as MINE and SMILE. A classifier is built to decide whether a sample is drawn from the joint distribution or the independent one (product of marginals). Theoretical justification and experimental results are provided to support the proposed estimator. The paper is written with clarity and is easy to follow. Here are some detailed comments on the technical contribution of this paper: 1) There is a closely related piece of work in the literature (see below). They also proposed a discriminative estimator for KL divergence, with mutual information as a special case. It would be nice if the authors could relate to this existing work, and provide an experimental comparison to their estimator. Mukherjee, Sudipto, Himanshu Asnani, and Sreeram Kannan. "CCMI: Classifier based conditional mutual information estimation." In Uncertainty in Artificial Intelligence, pp. 1083-1093. PMLR, 2020. (https://arxiv.org/pdf/1906.01824.pdf) 2) From the right column of Figure 1, we see that all estimators (including the proposed one) underestimate the mutual information when it is high. Could the authors give more analysis and explanation of this phenomenon? 3) It would be nice if the authors could provide experimental results on more realistic datasets, and show the advantage of the proposed estimator when it is used for other downstream tasks.
Often, estimating the mutual information is not the end goal, but an intermediate step to achieve other goals (see the MINE paper for examples). 4) A minor point: in the last part of equation (10), it should be (1-z) * log( 1 - q(...) ) instead of (1-z) * ( 1 - log(q(...)) ).
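As a concrete illustration of the classifier-based (density-ratio) estimation idea discussed in these reviews, here is a minimal generic sketch in the spirit of DEMI, not the authors' code. With equal class priors, the trained classifier's logit approximates log p(x,y) - log p(x)p(y), and averaging it over joint samples gives an MI estimate; correlated Gaussians are used because their true MI is known in closed form:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
d, n, rho = 5, 20000, 0.8
true_mi = -0.5 * d * np.log(1 - rho**2)            # Gaussian ground truth

# correlated Gaussian pairs: y = rho * x + sqrt(1 - rho^2) * noise
x = rng.standard_normal((n, d))
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal((n, d))

joint = np.hstack([x, y])                          # samples from p(x, y)
product = np.hstack([x, y[rng.permutation(n)]])    # shuffled: samples from p(x)p(y)

X = np.vstack([joint, product])
z = np.concatenate([np.ones(n), np.zeros(n)])      # 1 = joint, 0 = product

clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=50).fit(X, z)

q = np.clip(clf.predict_proba(joint)[:, 1], 1e-6, 1 - 1e-6)
mi_hat = np.mean(np.log(q / (1 - q)))              # E_{p(x,y)}[log density ratio]
print(f"estimated MI: {mi_hat:.2f}  (true MI: {true_mi:.2f})")
```

Note that this quick sketch evaluates on the same samples it was trained on and uses a fixed training budget; the number of training epochs and the calibration of the classifier's probabilities matter, so a held-out split and a calibration check would be needed for a serious evaluation.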
In the paper, the authors propose a new method for estimating mutual information based on a neural network classifier; the method is fairly straightforward. The proposed method compares relatively well with known methods for estimating mutual information when a very large number of samples is available. The main issue is that it requires a classifier that discriminates between (x, y) pairs coming from p(x,y) and (x, y) pairs coming from p(x)p(y) (the latter obtained via reshuffling). The reviewers point out that the procedure is interesting, but it does not perform significantly better than the other proposed methods. Also, I want to add that the proposed method relies on a given NN trained with 20 epochs and a mini-batch of 64. This is a significant issue because if we train the NN to reduce the validation error, the posterior probability estimates are typically overconfident, and significant work is being done to calibrate them. Why 20? How do we select this number if we cannot use a validation set? With fewer training examples, does 20 also work? This is very relevant because in the regions in which p(x,y)/p(x)p(y) is low, getting these estimates right for very high MI values is critical. The classifier does not need to perform accurately in classification, but it does need to estimate the posterior probability well, and NNs will tend to be overconfident here and provide a biased estimate for these values. It will also provide an overestimated probability in the region where both p(x,y) and p(x)p(y) are high. Finally, the authors reference the paper by Nguyen, Wainwright, and Jordan, but they do not acknowledge that that paper actually estimates log(p(x,y)/p(x)p(y)) in a similar way. That paper is very general and theoretical, and this paper can only be understood as a particular implementation of their solution. I think the authors missed that point in their paper. Also, I think the authors should acknowledge the papers that have come before using nearest neighbors or histograms for entropy estimation.
## summary This paper proposes to use categorical grammars (CG) to model learned protocols in emergent communication. Inspired by work on CCGs for natural language, they use CGI to learn a lambda calculus that can model the emergent language. From there, they propose to use two metrics of the learned CG as metrics of emergent language compositionality: the F1-score of the grammar on a held-out test set (CGF) and the size of the CG lexicon (CGL). The idea is that if the CG better captures the learned protocol (as shown on the test set), then it is likely a compositional protocol (CGF), and a protocol that decomposes into fewer lexical items will be more compositional (CGL). To measure the quality of their metrics, they use LSTMs to learn to reconstruct two types of input spaces: lang-attval, which is composed of action-direction-number, e.g. look-right-2, and lang-conj, which can combine two lang-attval statements with an "add" between them. The authors compare the learned languages to likely less compositional languages adjswap-{1,2}, obtained by swapping 1 or 2 tokens in the learned protocols. They find that on lang-attval, the metrics do not distinguish between the less compositional protocols. In contrast, on lang-conj, the metrics clearly show the unswapped language to be more compositional. Furthermore, the metrics correlate with topsim, providing another argument for their use. ## review Overall, I believe the paper is interesting and very novel. To my knowledge, no one has attempted using CGs to model emergent language. Indeed, most of the current metrics for compositionality in EC do not measure non-trivial compositionality, so it is good to see more people investigate the complex ways meaning may be transmitted. Furthermore, the paper is well written and provides a great intro to CGs and an overview of CGI in the appendix. I also appreciated the detailed experimental hyperparameters and the std deviations shown in the graphs. I believe this paper will make for excellent discussion, so I recommend it be accepted. The following comments are mainly for the authors so that they may improve their work for future submission, and perhaps to give them ideas for the discussions they'd like to have. I think the major challenge with this work is that measuring the efficacy of a compositionality metric is difficult because it requires having protocols that are less or more compositional. In TRE, Andreas shows a relationship between his metric and mutual information, human subjective opinion, topsim, and systematic generalization. In a work from last year's workshop, "Measuring non-trivial compositionality in emergent communication", Korbak et al. create specific languages with common pitfalls and then demonstrate how different metrics catch different pitfalls. This paper learns a language and then uses adjswap to construct languages that are *likely* less compositional. The issue is that those languages may be worse in many other ways as well, so it isn't clear that compositionality is the exact thing your metric is measuring. Instead, I would suggest following Andreas and learning languages on a dataset and seeing which ones generalize systematically to a test set. It is likely that just changing the random seed will lead to vastly different generalization outcomes, and comparing to systematic generalization would be a stronger argument than the adjswap heuristic.
Alternatively, you could also specifically learn/create a protocol where TRE could not easily capture the compositionality and demonstrate that your metric works better. The other big challenge for this work is the specific metrics themselves. The idea behind F1-score and lexicon size is reasonable, but they require that the learned CG be a good representation of the protocol. It becomes an issue that perhaps the reason behind a large lexicon or bad F1-score isn't the compositionality of the emergent protocol but the quality of the learned CG. You could demonstrate that the CG accuracy correlates with the EC game accuracy, which could help. Overall, it is a difficult thing to show because although we know natural language can be generated by something like a lambda calculus, it isn't clear that this is the sort of thing that LSTMs are outputting, and so it isn't clear that CGI is accurately capturing the meaning. A qualitative analysis of the learned lexicon (something like interpretability) would be a big step towards showing this. I would also like to point out that the idea of using a CG for emergent language is quite clever, and there are many other possible research ideas stemming from this. For example, you could learn a CG and use it to replace the sender's message, then retrain the receiver and see if the resulting protocol is even better. Another idea is to use the CG as a loss function to guide learning a more compositional protocol. As mentioned in future work, situated CCGs would be incredibly interesting to see in a gridworld. minor comments - for LSTMs with attention, please also cite (Bahdanau et al., 2014) - comparing to TRE feels like a stronger baseline than topsim, although ideally you have both<doc-sep># Summary The paper proposes to examine the CCG grammar induced from the emergent language as a probe to measure the underlying linguistic structure, like compositionality. The authors conduct experiments on the classical signalling game with a seq2seq LSTM model. They show that the proposed metric has some correlation with existing metrics like TopoSim, and offers extra benefits. # Strengths: The introduction of an automatic grammar induction algorithm is a novel idea to me. Besides the grammar tree depths, I imagine there could be other interesting metrics around the induced grammar tree. # Weakness I would love to see more experiments on the proposed metric. To start with, is it sensitive to the grammar induction optimization process? One reason people use TopoSim is that it's simple and stable to compute. Secondly, there are some known algorithms that can improve TopoSim in the classical signalling game, e.g., neural iterated learning [1], so it would be interesting to plot this metric alongside TopoSim under iterated learning. # Final Overall, I enjoyed reading this paper, and I like the idea, though I think the experiments can be made better with the analysis suggested above. [1] https://openreview.net/forum?id=HkePNpVKPB
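Since both reviews treat topographic similarity (TopoSim) as the reference metric, here is a minimal sketch of how it is typically computed: the Spearman correlation between pairwise distances in meaning space and in message space (a generic illustration; the exact distance functions vary across papers):

```python
from itertools import combinations
from scipy.stats import spearmanr

def levenshtein(a, b):
    """Edit distance between two message strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def toposim(meanings, messages):
    """Spearman correlation between pairwise meaning distances (Hamming over
    attribute tuples) and pairwise message distances (edit distance)."""
    pairs = list(combinations(range(len(meanings)), 2))
    d_meaning = [sum(a != b for a, b in zip(meanings[i], meanings[j])) for i, j in pairs]
    d_message = [levenshtein(messages[i], messages[j]) for i, j in pairs]
    return spearmanr(d_meaning, d_message).correlation

meanings = [("look", "right", "2"), ("look", "left", "2"), ("go", "right", "1")]
messages = ["aab", "abb", "cad"]        # toy emergent messages
print(toposim(meanings, messages))      # 1.0 for this toy example
```

The grammar-based metrics proposed in the paper could be reported alongside this quantity, which would make the correlation analysis easy to reproduce.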
This paper takes on the difficult task of proposing a new metric for compositionality. Both reviewers found the idea of using categorical grammar novel and interesting and would like to see it pursued further. We accept this paper and look forward to discussions on this topic and future work.
This paper describes a method for making user data unusable for training machine learning models. It focuses primarily on image data. The basic idea is to use error-minimizing noise. In this paper the authors propose adding error-minimizing noise, imperceptible to users, that would make training data unusable for training. The authors proposed 2 methods for generating the noise: sample-wise and class-wise. This paper is well written. The code and the datasets used for the experimentation have been provided. ######################## Overall, I would recommend accepting this paper. My only concern is with the effectiveness of the proposed technique given what the authors discussed in the appendix (see questions below). The method was used on standard openly available image datasets. The results showed that when close to 100% of training samples have been updated with the error-minimizing noise, the model performance went down considerably (as desired). However, when even 20% of the training data was left clean, model performance remained good. ######################## Questions: From the appendix notes: it appears that adversarial training can significantly negate the effect of adding error-minimizing noise. The resulting model performance would be degraded compared to a model trained on clean training data only; but considering that the authors themselves acknowledged that user data with error-minimizing noise may constitute just a fraction of all the training data available for training a model, the effectiveness of this technique may be limited (due to the effectiveness of adversarial training and the outsized influence of a relatively small number of clean data samples on model performance). Can the authors discuss these issues with the effectiveness of their presented technique? Mostly cosmetic: page 5: section 4.1 title "Error-maximizing" written twice.<doc-sep>Summary: The authors studied the problem of data protection from a new perspective. They proposed one kind of error-minimizing noise to make the data (with noise added) unlearnable. The noise is imperceptible to human eyes, and thus does not affect normal data utility. The idea is very interesting and inspiring. The authors conducted a series of solid experiments to validate the effectiveness of the proposed noise, and tested it on a real-world task of face recognition. Pros: 1. The idea of the paper is very interesting. Its motivation is intuitive and well explained. Considering that adversarial training is to find the worst-case example to make the training process robust, the authors proposed the opposite direction: to find the easiest case to make the training process learn nothing. The authors also proposed two types of noise, class-wise and sample-wise, which is a complete formulation. 2. The paper revealed an important problem in protecting privacy, and proposed a simple yet effective method to prevent our data from unauthorized exploitation for training commercial models. I think it will attract a broad audience in the ICLR community. 3. The experiments are solid and comprehensive, considering the difference to random and error-maximizing noises, and effectiveness on different datasets and model architectures. The detailed stability and transferability analysis convinces me why and how error-minimizing noise works. Besides, they also show a real-world face recognition task to demonstrate its usefulness in practice. Cons: 1. What is the overhead of generating and adding this kind of noise? The authors did not mention it in the paper. 2.
2. Revisiting Figure 1, I am curious to know why the sample-wise and class-wise noises perform so differently, especially for random and error-maximizing noise? 3. What is the difference between the proposed noise and the data poisoning methods?<doc-sep>Summary: The authors proposed the idea of using invisible noise to make personal data unusable to unauthorized deep learning models. To achieve this goal, the authors proposed the idea of error-minimizing noise crafted by a min-min optimization method. The error-minimizing noise is then added to training examples to make them unlearnable to deep learning models. The idea is very well motivated and explained. The experiments not only confirm the exceptional effectiveness of the proposed method but also show its flexibility. Pros: 1. The paper is very well written and easy to read. 2. I find the idea very attractive and potentially of significant social impact, especially considering the fact that personal data has already been overused without consent to train not just commercial but also illegitimate models to fake information or track people’s identity. 3. The idea of using the error-minimizing noise is well explained and the generation method is well formulated. 4. The experiments are very thorough, providing not only evidence of the superb effectiveness of the proposed noise over random or adversarial noise, but also the flexibilities and limitations of the proposed method. The real-world experiment makes the proposed idea even more convincing, although it is just a simple simulated scenario. 5. It seems that class-wise noise can easily break a classification model, which is somewhat interesting from the data protection perspective. Cons: 1. I think the class-wise noise breaks the IID assumption of machine learning. It seems that breaking essential assumptions in machine learning can break the model. Although this is not new, it turns out to be very interesting if used for data protection or similar ideas. The authors could have more discussion on this point. For example, what would happen if someone always used a different background (maybe invisible) for each of the photos uploaded to social media, always shifting the newly collected test data to a different distribution? Can this serve the same purpose? 2. The proposed noise does not seem strong against adversarial training. Although adversarial training is costly and decreases performance at the moment, it may be improved in the future. A discussion of possible ways to generate noise that withstands adversarial training would be useful. 3. How is the proposed method related to backdoor attacks? It acts as a type of backdoor attack. Yes, backdoor attacks do not decrease the model’s performance on clean data. I think the “clean data” in the proposed setting should be the “poisoned data” rather than the ground-truth clean data, since both the training and testing data will be collected at the same time. I guess the only difference is that, in this protection setting, the defender cannot do anything about it other than recollecting or denoising the data, even if the defender finds the model is poisoned. I suggest the authors include more discussion around this point.<doc-sep> $\\textbf{Comments:}$ The paper's motivation is based on protecting private data and preventing it from being scraped and used to train models.
Even though the motivation is clear and very important, the problem setup is the same as in works on crafting adversarial samples (i.e., the ones under the data poisoning and adversarial attacks parts of the related work). The key difference is to apply Projected Gradient Descent (Madry et al. 2018) in the reverse direction iteratively to *minimize* the loss function. Furthermore, the performance evaluation will be the margin between models trained on completely clean data and sample-wise/class-wise adversarially corrupted data (in contrast to fooling a pretrained network in adversarial attack benchmarks). $\\bullet$ *Percentage of noisy training data:* In the "Assumptions on Defender's Capability" paragraph, the assumption is that only a part of the training set could be perturbed. The margin between error maximization and minimization on CIFAR-10 is remarkable (Figure 1), but this figure is misleading: 100\\% of the training data was perturbed. Besides, Table 2 gives accuracy at different ratios of noisy training samples. To understand whether perturbed training samples contribute to learning or not, I would compare them with clean training. For instance, in addition to the results of the 20\\% perturbed training setting in $\\Delta_s$ and $\\Delta_c$, report training with only the 80\\% of clean data, without the perturbed samples. $\\bullet$ *Comparison to PGD (Madry et al. 2018):* Even under the class-wise perturbation, the noisy training data is learnable. In a sample-wise setting, "error-maximizing noise" is still learnable and performs very well; however, it performs at around 20, similar to "error-minimization", in a class-wise setting (Figure 1). If I am not wrong, Projected Gradient Descent, as proposed and applied in Madry et al. 2018 (Figure 1, right side), reduces the performance the same as the proposed error-minimization approach, and there is no performance gain. $\\bullet$ *Generalization to different Adversarial Attack methods:* Error minimization is shown using PGD only. There are several adversarial attack benchmarks on CIFAR-10 and ImageNet, such as CleverHans, Foolbox, or RealSafe (considering different evaluation protocols, adopting these benchmarks for evaluation is a reasonable option to eliminate other factors). Is error minimization limited to PGD only, or does it work with other methods? Did you try the effect of error minimization using any other method? https://arxiv.org/abs/1707.04131 https://arxiv.org/abs/1610.00768 https://arxiv.org/abs/1912.11852 $\\bullet$ *Different source-target models:* In all experiments, the source model is ResNet-18. The classification models used in the performance evaluation are ResNet-18, ResNet-50 and DenseNet-121. All three models are based on residual blocks. In practice, we cannot assume the architecture that will be used by third parties. Did you try completely different target models (such as AlexNet, VGG, Inception-v3, etc.)? $\\bullet$ *Application to face analysis:* The face recognition experiment is non-standard. I strongly recommend applying a standard dataset evaluation that would make comparisons possible. Both source and target sets are subsets of the datasets, and the selected identities might show visual (dis)similarities (i.e., ethnicity, age, gender). You can report the full performance on the entire target dataset (WebFace). Furthermore, face recognition models are trained as a recognition problem (with classification losses or metric learning) but tested in face verification settings (calculating the distance to query samples).
Reporting the distribution of these distances, for instance, Cumulative Matching Characteristic (CMC) and Receiver Operating Characteristic (ROC), would be more informative.
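For concreteness, the min-min generation procedure described in these reviews (an inner PGD-style loop run in reverse to minimize the loss over the noise, alternated with ordinary training of a source model) can be sketched roughly as follows. This is a hedged illustration only: the budget `eps`, the step sizes, and all function and variable names are assumptions for exposition, not the authors' code.

```python
import torch
import torch.nn.functional as F

def error_minimizing_step(model, x, y, delta, eps=8/255, step=2/255, iters=10):
    # Inner minimization: PGD applied "in reverse" -- gradient *descent*
    # on the training loss w.r.t. the noise, so the samples become trivially easy.
    for _ in range(iters):
        delta = delta.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta - step * grad.sign()).clamp(-eps, eps)
    return delta.detach()

def make_unlearnable(model, images, labels, epochs=5, batch=128, lr=0.1):
    # Outer minimization: the source model is also trained on the perturbed data,
    # so noise and model are optimized jointly (the min-min objective).
    noise = torch.zeros_like(images)
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for start in range(0, images.shape[0], batch):
            sl = slice(start, start + batch)
            x, y = images[sl], labels[sl]
            noise[sl] = error_minimizing_step(model, x, y, noise[sl])
            opt.zero_grad()
            F.cross_entropy(model(x + noise[sl]), y).backward()
            opt.step()
    return noise  # sample-wise noise to add to the published images
```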
The paper proposes a *novel* methodology for protecting personal data from unauthorized exploitation for training commercial models. The proposal is conceptually *intuitive* and technically *motivated*. It goes in the opposite direction of adversarial training: by adding certain error-minimizing noise (rather than error-maximizing noise) to the data, the model is fooled into believing there is nothing to learn from the data, which can thus protect the data from being used for training. The paper is of not only *high quality* but also *broad interest*, given the current social concerns about personal data privacy. I think its potential impact should get it a spotlight presentation.
<doc-sep>The authors introduce cell2state, an algorithm that couples genetic barcoding with single-cell sequencing data to model explicit state transitions of cell dynamics over time. Single-cell gene expression profiles are mapped to low-dimensional state vectors that are predictive of cell dynamics. Cell2state is evaluated using a barcoded stem cell dataset (Biddy et al., 2018), and simulation studies are also presented. The model demonstrates better results for cell state prediction and finding dynamically stable clusters, and reveals potential latent meta-states of the underlying cellular evolution process. Strength: The paper deals with a very relevant and challenging problem in biology, that of lineage tracing along with states. Weakness: The paper is very hard to read. There is no consistency to the notation and variables used. I worry that the main claims might be incorrect, but tidying up the notation might help alleviate some of these concerns. Page 2, 1st paragraph: shouldn’t the lossless encoding of states be I(\\Phi(X),\\Phi(X’)) instead of I(\\Phi(X),X’)? Figure 1 has variables X’ and \\lambda. What are these? Also in this figure, label an example of X(t) and X(t+1). What is Definition 1 stating? What is the growth rate and what is y? Shouldn't EXP[N|X(t)] be EXP[all N descendants of X(t)|X(t)]? Definition 2 states ‘p’ but uses ‘f’ in the equation. The ‘p’ in Definition 1 is not the same as the ‘p’ in Assumption 1. Please maintain consistency of the variables used. Can one cell map to more than one latent meta-state? Last line of page 3: \\Phi(X(t)) is the low-dim embedding of X(t) and not X(t+1). Check notational consistency. Section 3.1: what is \\Pi? Section 3.1: what is the lifting of dimensions? The data is reduced using PCA, then lifted to high dimension using another Gaussian kernel. Would this not add too many, and irrelevant, dimensions to an already noisy dataset? Section 3.1: how do you define the function space H? Section 3.2: what is X’? Section 3.2: what is P^hat? It is worth dedicating more explanation to the cell2state algorithm, the tetrahedron structures, and what these mean, before Section 3.3. Section 5 could go to the Supplementary to make space for relevant material in the main paper. On page 2, summary, 3rd point, the authors claim their model would perform with <=7 dimensions. I could not find details on this in the Experiments section. <doc-sep>The authors develop a novel approach to learn a low-dimensional embedding of the transcriptomic state of cells using data from cell barcoding experiments, which can capture the single-cell RNAseq profiles of cells and their descendants. The main contributions of the paper can be summarized as: 1) a novel approach to learn latent representations of the transcriptomic state of cells by utilizing knowledge of the 'true' cell lineage; 2) a mathematical analysis of the distortion of the learned embedding under certain reasonable assumptions; 3) experiments on simulated and real data to validate the proposed approach.
The strengths of the paper according to me are the following: 1) The authors propose a novel way to identify the transcriptomic state of cells. 2) The proposed approach is based on sound mathematical intuitions. 3) Contingent on certain assumptions holding (more on this in the weaknesses), the approach can be theoretically shown to work reasonably well. 4) The experimental validation is quite reasonable. The weaknesses of the paper (according to me) are the following: 1) It is entirely unclear to me who the target audience for the paper is. It reads to a degree like a paper I would find in a life sciences journal, but it also contains aspects of a traditional machine learning paper. This leads to a paper that in my humble opinion would be truly appreciated by a very small number of computational biology researchers who are also sufficiently proficient in machine learning. However, this is largely a problem with the writing of the paper. I would suggest the authors either introduce more biological context in the paper (if their goal is to introduce the machine learning audience to an interdisciplinary problem) or focus on the methodological aspects of the paper and introduce the biological details as an application. 2) Assumption 1 is central to the mathematical formulation used in the paper and is a pretty reasonable assumption in my view. However, it is quite unacceptable that the reasonableness of the assumption is not discussed in any depth, given how central it is to the paper. I would suggest citing some papers that explore the assumption in more detail. 3) The simulated experiment needs to be better motivated. Currently, it reads like the data generation process simply meets the assumptions of the cell2state algorithm and hence it works well. The authors should at least explain why this is a reasonable approach to generating synthetic data and ideally cite other papers that have generated synthetic data using a similar approach. The paper presents an approach to infer cell state from a relatively new type of biological dataset. The work is both important in its scope and novel in terms of approach. However, the paper has some big writing issues in my opinion. A lot of the biological context is not appropriately introduced for a non-life-sciences audience, and some justifications are missing/lacking. <doc-sep>Kernel-based embedding of barcoded single-cell data that preserves mutual information between barcoded pairs. This manuscript proposes a kernel-based embedding technique to map high-dimensional single-cell gene expression feature vectors to a low-dimensional space so that information is preserved in single-cell expression pairs that are measured from barcoded parent-descendant pairs. The proposed method uses barcoded single-cell data. Even though each cell can be measured only once, the method builds on the assumption that dividing cells are phenotypically (approximately) similar, and descendants of the same cell lineage reveal the dynamics of cell transitions. Overall, the proposed method aims to embed the single-cell pairs so that information between barcoded cell pairs is preserved. The kernel-based feature embedding is implemented with a combination of random Fourier features and more traditional PCA/SVD dimension reduction techniques. Additionally, the authors provide bounds for the distance distortion of the state embeddings as well as the information loss of the embedding (I did not check the details of the proofs).
The main drawbacks of this manuscript include i) a non-specific description of the methods in the main text, which leaves a number of technical aspects partly unclear, and ii) a description of somewhat unconventional analysis results without any comparisons to previous methods. It is difficult to grasp the true novelty and practical benefits of the proposed method based on the presented results without any comparisons. <doc-sep>The authors proposed cell2state, which embeds barcoded scRNA-seq trajectories into a low-dimensional representation. The authors provided a theoretical analysis of the embedding learnt by cell2state and demonstrated that the learnt embedding is almost lossless. The authors applied this embedding framework to one barcoded scRNA-seq dataset (Biddy et al., 2018) and demonstrated that the learnt embeddings clearly distinguish different cell states. Furthermore, the learnt embeddings were able to substantially improve various downstream tasks beyond identifying cell subpopulations. Major Comments: (1) (Fig. 3ik) Why does the error increase dramatically when gamma (kernel width) increases over 10^2? (2) Based on Fig 3d and 3e, it is clear that the Day-21 raw data is more helpful for identifying cell subpopulations compared with the Day-12 raw data. I'm curious to see the difference between the Day-21 cell2state and Day-21 raw data embeddings. Also, it is not fair to compare the cell2state embedding and the raw data embedding from a single time point (Day-12 or Day-21), as cell2state utilizes raw data from both time points to generate the representation. Instead, the authors can first concatenate the Day-12 and Day-21 data along the cell dimension: just concatenate the expression profile of a cell at Day-12 and the expression profile of the corresponding descendant together. There will be missing profiles, as cells were not perfectly paired, and the authors can impute the missing profiles based on nearest neighbors with respect to Day-12. Then the authors can perform PCA/kernel PCA+UMAP to generate an embedding for Day-12 based on this joint gene expression profile. This serves as a simple baseline model, as it utilizes the same input as cell2state. The authors can compare Day-12 cell2state to this joint embedding to see if cell2state generates better embeddings given the same input. (3) What does the color scale stand for in Fig. 4b? For better visualization, this panel could be rotated 90 degrees as well. (4) 2000 random Fourier features were selected based on Algorithm 1. I'm wondering if this number of random features is sufficient to approximate the kernel. Is 2000 an optimized number after tuning in the parameter space? Minor Comments: (1) The color scale in Fig. 3a is hard to distinguish. The gradient is helpful to illustrate the nature of the data, but it is more helpful to distinguish the groups here in panel a. (2) It is hard to make sense of Fig. 3b. It would be clearer to just illustrate the general trend of cell transitions instead of drawing all individual connections here. (3) It would be better to explain the 366G abbreviation in the caption of Fig. 3 or in the corresponding section (Section 4.2.2). Although the authors demonstrated that the cell2state embedding is lossless and helps achieve superior performance when combined with a simple linear classification model, they did not directly compare cell2state with any other compatible embedding methods. This makes it hard to evaluate the novelty of the proposed embedding framework. The authors mentioned state representation learning and other related fields in the background section.
I would suggest that the authors pick similar embedding methods as baseline models for comparison.
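As a point of reference for the random-Fourier-feature question raised above (whether 2000 features suffice), the standard construction approximates an RBF kernel as follows; the dimensions, the toy data, and the PCA/SVD steps are illustrative assumptions and not the authors' implementation.

```python
import numpy as np

def random_fourier_features(X, n_features=2000, gamma=1.0, seed=0):
    # Approximate k(x, x') = exp(-gamma * ||x - x'||^2) so that
    # z(x) @ z(x') ~= k(x, x')  (Rahimi & Recht, 2007).
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(X.shape[1], n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Toy pipeline: PCA-reduce expression profiles, lift with RFF, then a final
# SVD gives a low-dimensional state embedding (e.g. <= 7 dimensions).
X = np.random.rand(500, 200)                       # cells x genes (toy data)
U0, S0, _ = np.linalg.svd(X - X.mean(0), full_matrices=False)
X_pca = U0[:, :50] * S0[:50]
Z = random_fourier_features(X_pca, n_features=2000, gamma=0.1)
U, S, _ = np.linalg.svd(Z - Z.mean(0), full_matrices=False)
state_embedding = U[:, :7] * S[:7]
```

The kernel approximation error decays roughly as O(1/sqrt(n_features)), which is one way to sanity-check whether 2000 features are enough for a given gamma.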
While the problem tackled in this paper is interesting, there is a consensus among reviewers that the writing of the paper does not allow the reader to fully understand the method developed, nor the biological context and results obtained by the method. We encourage the authors to take into account the reviewers' comments to prepare a future improved version of the manuscript.
Based on a dynamical systems perspective, this paper characterizes the convergence of the gradient-penalized Wasserstein GAN. The analytic framework is similar to the one used in Nagarajan & Kolter but requires very heavy machinery to handle measure-valued differentiation. Overall the math seems solid, but I have a few questions about the motivation and assumptions. 1. To my limited knowledge, it seems that the two-time-scale framework [1] handles both batch and stochastic settings well, also from a dynamical systems perspective. I am wondering why not follow their path, since under their framework adding a gradient penalty does not introduce all the technical difficulty in this paper. 2. The main theorem characterizes the stability or convergence but does not characterize the advantage of the gradient penalty. Does it make the system more stable? At least more technical discussion around the theorem is needed. 3. Besides the technicality of handling the support of the measure, what is new beyond the analysis of Nagarajan & Kolter? [1] GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium by Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, Sepp Hochreiter. I may be missing something and would like to see the authors' response. === after rebuttal === I have carefully read the authors' response. I appreciate the explanation. After reading [1] in detail, my conclusion is still that [1] seems to be a stronger framework than the current one and easily extends to the setting with gradient penalty. Compared with Nagarajan and Kolter, the contribution of this paper seems to be minor, although technically involved. I have checked the updated pdf but haven't found the authors' rigorous "more stable" argument.<doc-sep>This paper shows that an ideal equilibrium point of a SGP-WGAN is stable. It makes several assumptions that, while it is clear why they are needed in the proof, are unjustified in practice. The authors should elaborate on these assumptions and comment on why they are reasonable. Assumptions 1 and 3 essentially say that there is a tube (both in sample space and in parameter space) around the true data-generating distribution in which the discriminator cannot distinguish. This seems a strong restriction, to the effect that the discriminator is weak. For example, Assumption 1 says that, given a sample slightly off the data manifold, it still cannot distinguish at all. A more reasonable assumption is that the ability of the discriminator decays gracefully as samples approach the data manifold. Assumption 2 is also unjustified. Its main effect seems to be to eliminate a few terms in the projected Jacobian in the proof, but its relevance and whether it is reasonable in practice are entirely unmentioned. Finally, it is unclear why this notion of ``measure valued differentiation'' is needed. First, differentiation in measure spaces is no different from differentiation in other infinite-dimensional function spaces: the usual notions of Gateaux and Frechet differentiability apply. Second, the derivatives in question are not true ``measure-derivatives'' in the sense that the argument to the function being differentiated is not a measure; it is a finite-dimensional parameter. In the end, this seems essentially a derivative of a multivariate function.<doc-sep>In the paper, WGAN with a squared zero-centered gradient penalty term w.r.t. a general measure is studied.
Under strong assumptions, local stability of a time-continuous gradient ascent/descent dynamical system near an equilibrium point is proven for the new GP term. Experiments show comparable results to the original WGAN-GP formulation w.r.t. FID and inception score. Overall, I vote for rejecting the paper due to the following reasons: - The proven convergence theorem is for a time-continuous "full-batch" dynamical system, which is very far from what happens in practice (stochastic + time-discrete optimization with momentum, etc.). I don't believe that one can make any conclusions about what is actually happening for GANs from such an idealized setting. Overall, I don't understand why I should care about local stability of that dynamical system. - Given the previous point, I feel the authors draw too strong conclusions from their results. I don't think Theorem 1 gives many insights about the success of gradient penalty terms. - There are only marginal improvements in practice over WGAN-GP when using other penalty measures. Further remarks: - In the introduction it is claimed that mode collapse is due to the JS divergence and the "low-dimensionality of the data manifold". This is just a conjecture and the statement should be weakened. - The preliminaries on measure theory are unnecessarily complicated (e.g. partly developed in general metric spaces). I suggest that the authors try to simplify the presentation for the considered case of R^n and avoid unnecessarily complicated ("mathy") definitions, as they distract from the actual results. ==after rebuttal== After reading the authors' rebuttal I increased my rating to 6, as they addressed some of my doubts. I still think that the studied setting is too idealized, but it is a first step towards an analysis.
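For readers less familiar with the object under discussion, the critic objective with a squared zero-centered gradient penalty has, in my reading of these reviews (not quoted from the paper), roughly the following form, where the penalty measure generalizes the interpolation measure used by the original WGAN-GP (which instead penalizes the squared deviation of the gradient norm from 1):

```latex
% Sketch under stated assumptions; \mu is a general penalty measure.
\begin{equation}
  L(\psi) \;=\; \mathbb{E}_{x \sim P_g}\!\left[ D_\psi(x) \right]
          \;-\; \mathbb{E}_{x \sim P_{\mathrm{data}}}\!\left[ D_\psi(x) \right]
          \;+\; \frac{\gamma}{2}\, \mathbb{E}_{\hat{x} \sim \mu}\!\left[ \big\| \nabla_{\hat{x}} D_\psi(\hat{x}) \big\|^2 \right]
\end{equation}
```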
All three reviewers expressed concerns about the assumptions made for the local stability analysis. The AC thus recommends "revise and resubmit".
This paper presents a transfer learning strategy for improving the compositional generalization of semantic parsers based on pre-trained language models. Before fine-tuning the model on data from the target domain, the authors propose a pre-finetuning step, where models are trained on compositional splits of data from another source domain, with the goal of transferring the model's knowledge about language compositionality learned during this pre-finetuning step to the final learning stage on the target domain, therefore improving compositional generalization. To this end, the authors propose a pre-finetuning method which encourages the model to discover representations of natural language that are invariant to its compositional structures. This is achieved by iteratively freezing the encoder or decoder modules during pre-finetuning, and training the encoder and decoder modules on compositionally disjoint splits of the source data, such that the encoder learns representations that are robust against distributional shifts of language compositionality. While I like this idea, there are several issues with the proposed approach: 1. **Comparison with State-of-the-Art** There is little information in Section 5 about comparison with existing approaches in compositional generalization for semantic parsing. Indeed, in recent years several seminal works have emerged, pushing accuracies on some synthetic tasks like SCAN to near 100%. These works are not mentioned in Section 5. While the model outperformed the currently best approach (Shaw et al., 2021) on GEO_{TMCD}, the lower results on other simpler tasks like SCAN make me feel a bit concerned about the results. Perhaps this is because only a few models are evaluated on GEO_{TMCD} so far, and previous approaches more tailored to the context-free utterances on GEO (e.g., Herzig and Berant) would actually perform better? 2. **Methodology** Another issue is related to the proposed approach itself. While pre-finetuning the encoder on compositional split A and the decoder on compositional split B could encourage the encoder to learn representations that generalize better to split B, the generalization strategy learned by the encoder might be specific to split B only, and might not be able to generalize to other compositionally novel distributions (e.g., the final evaluation data). Ideally, during the pre-finetuning stage the model needs to learn to generalize well to *arbitrary* mismatched splits, but only presenting one set of splits (A/B) might not be enough for the model to learn a more "general-purpose" strategy. 3. **Transferability across Datasets** Transfer learning on NLP tasks would require the source and target domains to share a reasonable amount of common language patterns in order to perform well. However, the tasks used in this paper have drastically different utterances in language styles and compositional patterns, which makes transfer learning quite non-trivial. For example, SCAN only contains toyish words like JUMP and simple composition strategies like concatenation (JUMP TWICE). It is doubtful whether learning to generalize well on this toyish domain would be useful for handling real-world utterances with diverse language styles, like GEO. The authors could present more analysis in terms of the language and compositionality styles of those datasets in order to have a better understanding of the upper-bound performance of transfer learning approaches for compositional generalization.
This paper presents a nice idea for improving compositional generalization of neural semantic parsers. The results on GEO_{TMCD} outperform the currently best approach. However, there are issues with both the experimentation and the methodology. <doc-sep>This paper is focused on the problem of compositional generalization in semantic parsing, and introduces a method called "DUEL", which involves "pre-finetuning" iteratively on compositional train-test splits from other datasets, before transferring to fine-tuning on the training data from the target dataset. The method involves using the compositional train/test split from one dataset, and training their encoder-decoder model iteratively such that the encoder parameters are updated based on the test data from that dataset, and the decoder parameters are updated based on the training data from that dataset. After this "pre-finetuning", the model is fine-tuned on the training data from the target dataset. They find that their model outperforms baselines involving 1) fine-tuning on the target task only, and 2) pre-finetuning on the merged data from the other dataset, without the encoder/decoder split. They find that their method largely does not help with the extremely low numbers on COGS structural items, but the margins of improvement are larger for GeoQuery data and SCAN data, with the authors claiming a new SOTA result on one of the splits for GeoQuery. Overall I thought that this was an interesting paper, which was mostly clearly written and organized, and generally I liked the method that was introduced. I'm leaning toward acceptance (and I can imagine the concerns below being addressed satisfactorily and further increasing my confidence). I had a few questions and potential concerns that weaken my confidence in the impact of the contribution. The first question involves the reasoning behind the particular design of the method. The authors lay out a rationale for training the encoder parameters on the test component of each split, and the decoder on the train component of each split -- but the reasoning given is not terribly transparent to me, and I was left wondering whether similar results could be achieved by instead training the encoder on the train component and the decoder on the test component. Was this something that the authors tried? I think that including this comparison could be informative with respect to the importance of setting up the method in this specific way. Another confusion I had involved the original purpose of these various datasets, and how this relates to the current usage. For instance, not being familiar with GeoQuery, the description in the paper led me to believe that it was designed as a QA dataset, so I was wondering why any SOTA would exist for semantic parsing on a QA dataset. A Google search suggests that GeoQuery is in fact annotated with semantic parses, but this confusion could be alleviated by making clearer that the dataset is used for semantic parsing. I was similarly wondering about the use of SCAN for semantic parsing, since my understanding was that SCAN was designed for mapping commands to actions. If this is correct, where are the semantic parses coming from for that dataset? I imagine that since it is synthetic/template-based, producing semantic parses in a rule-based manner may be straightforward, but it wasn't clear to me from the paper how this was working. My additional concerns are focused more on the impact of the contribution.
The baselines that the paper compares against are for the most part not external models -- rather, the authors are comparing only against baseline versions of their own model, without the key components of the new method. So the improvement over the baselines indicates that the method does improve over the same model without the iterative compositional-split training. However, it is only in the one case of the GeoQuery dataset that the authors mention the existing SOTA (which they have beaten), suggesting that there are stronger SOTA models on the other datasets (or at least on COGS, if SCAN is not typically used for semantic parsing?). What this leads me to believe is that while the method improves over a vanilla model, it may not be improving over stronger models that use alternative methods for COGS (and possibly SCAN). I'm curious especially whether other models have made better headway on the COGS structural test, which is showing especially low performance here. It would be helpful to get greater clarity on how the presented results relate to the strongest existing results from other models across all datasets. Finally, I'm not totally sure how surprised/impressed we should be by improvements from this method. Specifically, I'm wondering how impressed we should be that we see a performance boost from training models to generalize across a specific type of split (e.g., one in which the test sentences are longer than the training sentences), for exactly the target task (semantic parsing). The authors make this general observation in Section 5.3, when they acknowledge that the model works best when the compositional splits are maximally similar. So to what extent is it ultimately somewhat obvious that training models to handle a given type of split will help them on this type of split? To put the above concern another way: to what extent are we potentially *no longer testing models on compositional generalization* if we train them directly to be able to generalize in the particular way that is needed for the selected compositional split? If the performance boost is very specific to a particular relationship between test and training data, is this simply allowing the model to learn strategies specific to that particular type of generalization, such that it no longer needs to use composition to achieve that generalization? This would defeat the purpose of trying to improve models' ability to use actual compositional processing to show the desired generalization. So I would like to see the authors address this concern. In general I found the method interesting and the paper overall clear. However, I have some remaining questions about certain aspects of the method and datasets/tasks, as well as some potential concerns about the impact of the contribution, which I would like to see addressed before I can strongly endorse this paper. <doc-sep>This paper proposes a training procedure for encoder--decoder models (applied to semantic parsing) which aims to improve the models' ability to compositionally generalize (successfully handle novel combinations of words and structures, where combinations were not seen in training). The approach relies on pre-finetuning: training a model on a different dataset than the target dataset that also requires the same sort of compositional generalization as the target dataset, before then training on the training set of the target dataset and then evaluating zero-shot on the compositional set of the target dataset, in the standard way.
In pre-finetuning, the decoder is only updated on the training set while the encoder is updated on the compositional generalization set. The approach is evaluated using two different pre-trained encoder--decoder transformer architectures on three different semantic parsing compositional generalization datasets from past work, where it obtains consistent (albeit somewhat small) improvements over a baseline that pre-finetunes all model parameters, and outperforms a past state-of-the-art model on one dataset. *Strengths* S1) I appreciated the paper's use of a non-synthetic dataset (GeoQuery), as I feel that this is an underexplored area of work on compositional generalization which will be useful for exploring how and when nets fail to generalize on real data, and how to fix them. S2) The proposed approach seems simple and easy to implement. S3) The experiments were overall thorough (at least in the scope of semantic parsing compositional generalization), evaluating on three different datasets and two different models. S4) I found the demonstration of the benefits of pre-finetuning interesting and convincing. S5) The paper was extremely clearly written, in particular the description of the method. *Weaknesses* W1) I wasn't totally convinced that the method works well on strong tests of compositional generalization. - The GEO_cd and SCAN_cd splits, although they follow past work, are defined using a compound divergence method that, as the paper points out, does not ensure that compounds are completely absent (but only infrequent) in the training set. - The COGS_cg lexical challenge seems to mostly be obviated by pre-trained representations. - While I did find the length generalization results to be a convincing improvement and a more reasonable test of structural generalization, no method really seems to help much on the harder structural generalization test of COGS_cg, and (concerningly) even the proposed method makes no improvement in what seems to be the most a priori favorable experiment design for it, described in 5.4 (although I did appreciate including this negative result!). W2) It's not totally clear to me why the method should enable compositional generalization in general, and I feel like it would help to strengthen the motivation and intuition for the method, or perhaps some analysis could be done to indicate why it's working (where it is). - The paper motivates DUEL as learning to represent input sequences in a way that facilitates compositional generalization, but it's not totally clear to me how the alternating freezing does this. It seems like the meta-learning approach (which directly trains for compositional generalization) that other work has employed is more directly suited to this, or even perhaps some adversarial training approach if the paper is aiming to learn representations f(x) that encode invariances across s and ~s. - It would help if the paper could somehow characterize the representations or models that DUEL learns (e.g. providing something like a fixed-point analysis), which "the algorithm has converged to the desired representation since the difference in representing s and ~s is small" in section 4 starts to do, but wasn't totally clear to me. - Or alternatively, perhaps the paper could do some empirical investigation of distributional differences in f(x) when x is drawn from s versus from ~s.
W3) Some of the choices in the design of the method felt a bit arbitrary, and if they were better justified I'd feel more confident that the approach is working for understandable reasons. - Why reinitialize the parameters of g in fine-tuning? - The pre-finetuning setup on p does not match the fine-tuning setup on q in that fine-tuning updates both f and g. Why not update f in training on p, or keep f fixed in training on q? - While in some compositional generalization tasks, ~s is intuitively harder (e.g. longer inputs and outputs) than s in some way, in other tasks s and ~s are interchangeable, so does it matter that g is updated on s and f on ~s (and that the last updates in pre-finetuning are always done on ~s)? W4) It would help to present statistical significance results, or standard deviations across multiple seeds, as it's a bit difficult to interpret the significance of the improvements. However, since the improvements are consistent (albeit small), I don't think this is a crucial weakness. *Minor comments* - It would help to give some intuition for the \\alpha in Section 3. How is this value chosen? - The update equations (1-3) with a single step-size make it seem like SGD is being used, but from the appendix it's Adam. - How are the logical forms updated in the COGS_VAR splits to match the changes to the input sentences? *Typos*: - pg 4 "standard supervise learning" -> "standard supervised learning" - pg 5: several grammatical errors at the end of section 4 - pg 6: "BERT_SMALL" -> "BERT_BASE" (?) - pg 6: "GEO_TMCD2" -> GEO_{cd}" - pg 7: "When Will DUEL Works Best" -> "When Will DUEL Work Best" - pg 7: "compsitional" -> "compositional" - pg 7: "DUEL helps extracting" -> "DUEL helps extract" I feel a bit borderline about this paper, as the method seems a bit limited and heuristic -- not being clearly designed for compositional generalization or showing convincing results on the hardest tests of compositional generalization. But, it does seem to show consistent (if sometimes small) improvements on a couple models and several datasets, the methodology seems sound, and the paper is very clearly written. I've put an overall score of 5 for now, but I look forward to discussion. --- Update after the response: Thanks to the authors for their thorough response to my comments! The explanations and new ablation results helped convince me that the choices made in designing the method were reasonable. I also appreciate the standard deviations, which make me confident that the improvements are real. I'm in favor of accepting this paper, and have updated my score to a 6 (from a 5).
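To make the alternating-freeze procedure debated above concrete, here is a rough PyTorch-style sketch of pre-finetuning as these reviews describe it (decoder updated on the source train split, encoder on the compositional split); the optimizer choices, the data interfaces, and all names are assumptions for illustration, not the paper's Equations (1-3).

```python
import torch
import torch.nn.functional as F

def pre_finetune(encoder, decoder, train_batches, comp_batches, lr=1e-4):
    # Separate optimizers so each module is only stepped on its own split.
    enc_opt = torch.optim.Adam(encoder.parameters(), lr=lr)
    dec_opt = torch.optim.Adam(decoder.parameters(), lr=lr)
    for (x_tr, y_tr), (x_cp, y_cp) in zip(train_batches, comp_batches):
        # 1) Update the DECODER on the train split, encoder frozen.
        with torch.no_grad():
            h = encoder(x_tr)
        logits = decoder(h, y_tr[:, :-1])                # teacher forcing
        dec_opt.zero_grad()
        F.cross_entropy(logits.transpose(1, 2), y_tr[:, 1:]).backward()
        dec_opt.step()
        # 2) Update the ENCODER on the compositional split; decoder gradients
        #    are computed here but never applied (they are zeroed next round).
        logits = decoder(encoder(x_cp), y_cp[:, :-1])
        enc_opt.zero_grad()
        F.cross_entropy(logits.transpose(1, 2), y_cp[:, 1:]).backward()
        enc_opt.step()
    return encoder, decoder   # then reinitialize/fine-tune on the target data
```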
The authors attempt to tackle the problem of compositional generalization, i.e., the problem of generalizing to novel combinations of familiar words or structures. The authors propose a transfer learning strategy based on pre-trained language models. The idea is to introduce a pre-finetuning task where a model is first trained on compositional train-test splits from other datasets, before transferring to fine-tuning on the training data from the target dataset. Although the technique brings some improvements, and the authors do their best to address the reviewers' questions, it is still unclear: a) why the method should work in principle, whether there is a theoretical backing, and how it formally relates to meta-learning; b) how the approach compares to data augmentation methods, since pre-finetuning requires more data, albeit from a different dataset. See for example: https://openreview.net/forum?id=PS3IMnScugk c) The whole approach would be more convincing if the authors could articulate *how* their method renders a model more robust to distribution shifts (e.g., based on the COGS results it does not help structural generalization, so do the gains come from lexical generalization?). d) It would also be interesting to see whether this method works on larger-scale or more realistic datasets like CFQ, ATIS or machine translation https://arxiv.org/pdf/1912.09713.pdf https://arxiv.org/abs/2010.11818
This paper proposes a conservative smoothing technique that adds perturbations to the states to improve the robustness of the learned policy in offline RL. Theoretically, they claim their work enjoys a tighter suboptimality bound in linear MDPs. **Strengths:** Prior offline RL methods mainly focus on OOD actions instead of states. This paper targets the robustness issue for the Q value via perturbing the states. This paper is well-written and easy to follow. **Weaknesses:** The algorithm employs an ensemble of Q functions. It is well known that the ensemble technique can bring robustness to learning. However, it is unclear whether or not the ensemble technique is the main contributor to the robustness. In this respect, comparing to single-model methods, such as CQL and BC, is unfair. Moreover, in the experimental results, the proposed RORL is also very close to the ensemble baseline EDAC. Yes. <doc-sep>The paper investigates the problem of training robust RL agents with offline datasets. The paper claims that regularizing policy and value networks to have similar values against adversarial perturbations, and applying this technique to PBRL, can achieve state-of-the-art performance on the D4RL benchmarks in both standard and adversarial settings. The paper also provides a theoretical analysis of the sub-optimality gap of the proposed algorithm in linear MDPs. Strengths: - The paper thoroughly analyses the proposed algorithm both empirically and theoretically. - The proposed method achieves state-of-the-art performance on the standard offline RL benchmark, although the performance improvement is quite marginal. Weaknesses: - The paper does not provide experimental results on expert or near-expert datasets. - The proposed algorithm requires computing an adversarial example $\\hat{s}$, which is computationally expensive. I think the proposed method is much slower than other offline RL algorithms since it has to solve mini-max problems. Please report the wall-clock time of the proposed method in the main text. <doc-sep>The authors propose an approach to offline reinforcement learning that is robust to small perturbations in the observation space, such that the changes are not detrimental to the performance of the final policy. They achieve this by encouraging the value estimator network to be smooth over the state space while being conservative on out-of-distribution samples. Moreover, the learned policy is also constrained to change less with these perturbations. Experimental results show that the proposed algorithm is able to perform competitively with current robust/baseline approaches, and enjoys increased robustness against adversarial attacks. ### Strengths S1: Overall the motivations for the approach are intuitive and easy to follow. The paper is well written and clear (excluding some minor comments in the questions section). S2: The proposed attack metrics are diverse and well defined under the current framework. The experimental results show that RORL is more robust towards the proposed attacks as compared to other methods while being able to perform competitively under normal conditions. S3: Theoretically, the proposed framework (RORL) enjoys a tighter suboptimality bound than PBRL. ### Weakness C1: My main complaint about the paper is that, while the authors specifically tackle the tradeoff between conservatism and robustness in offline RL, no clear metric has been defined to quantify the robustness of an approach.
Hence the reader is forced to trust their eyes over the evaluation curves to judge the robustness of the approach. It may be insightful to invest some thought in quantifying the robustness, for example by measuring the area under the performance-under-attack curve, where the scale is normalized over the overall variation and dimensionality of the given dataset. C2: While this might come off as a knee-jerk comment, it would be interesting to see the same set of experiments on a more challenging benchmark such as AntMaze, especially as the proposed approach claims better generalization ability and improved overall robustness of the policy. Will the generalization ability result in significant improvements in a more challenging domain? While the authors clearly mention the adversarial state sampling as their main limitation, they do not quantify this slowdown for different state sampling approaches. It would be interesting to see the actual effect on the compute time as a percentage of the total training time. <doc-sep>The authors handled my major concerns on approximation and experiments by providing additional responses and adding more experiments. I'd like to raise my score to borderline accept. === Training robust RL agents from offline datasets is an important yet challenging problem. This paper proposes RORL: offline RL algorithms with conservative smoothing. The main idea is to add smoothness constraints (forcing the agent to produce consistent outputs on (adversarially) perturbed inputs) to an offline RL algorithm. Here, to avoid overestimation issues, the authors also utilize uncertainties from the Q-function as a penalty term. The proposed method not only achieved strong results on MuJoCo tasks from the D4RL datasets but also showed that the learned agents are more robust to perturbations. # Strength * Motivation is clear and the proposed method sounds reasonable. # Weakness * Lack of evaluation on challenging domains. Even though MuJoCo tasks are a standard benchmark in offline RL, it would be nice if the authors could evaluate on more challenging tasks such as AntMaze or Atari. Also, it would be nice if the authors could consider the combination with a more state-of-the-art offline RL algorithm [1]. # Overall * I think this paper studies an important research question and proposes a reasonable solution. Also, the authors showed the gains from the proposed method very clearly on standard offline RL benchmarks. However, at the same time, there are several concerns (i.e., more evaluation on challenging tasks, combination with more state-of-the-art offline RL algorithms, and so on) about the draft. Because of that, I'd like to suggest "weak reject", but I'm also willing to change my score based on other reviews and author responses. [1] Kostrikov, I., Nair, A. and Levine, S., 2021. Offline reinforcement learning with implicit q-learning. arXiv preprint arXiv:2110.06169. As pointed out in Section 8, the overhead induced by the proposed method can slow down the training. Even though this can be handled later, it would be nice if the authors could also clarify the training overhead of the proposed method (e.g. comparing the training time of RORL and other offline RL algorithms).
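To give a rough sense of the smoothing terms these reviews describe (and of where the extra compute goes), here is a hedged PyTorch-style sketch; the perturbation radius, the ensemble-based uncertainty penalty, and all names are my own assumptions rather than the paper's actual losses.

```python
import torch
import torch.nn.functional as F

def adversarial_state(q_net, s, a, eps=0.01, steps=3, step_size=0.005):
    # Projected gradient ascent on the Q-gap inside an eps-ball around s.
    s_adv = s + torch.empty_like(s).uniform_(-eps, eps)
    for _ in range(steps):
        s_adv = s_adv.detach().requires_grad_(True)
        gap = (q_net(s_adv, a) - q_net(s, a).detach()).pow(2).mean()
        grad, = torch.autograd.grad(gap, s_adv)
        s_adv = s + (s_adv + step_size * grad.sign() - s).clamp(-eps, eps)
    return s_adv.detach()

def smoothing_losses(q_ensemble, policy, s, a, beta_q=1.0, beta_pi=0.1):
    # Added to the usual offline RL losses: keep Q and the policy similar on
    # perturbed states, and penalize perturbed states by ensemble disagreement.
    s_adv = adversarial_state(q_ensemble[0], s, a)
    qs = torch.stack([q(s, a) for q in q_ensemble])
    qs_adv = torch.stack([q(s_adv, a) for q in q_ensemble])
    q_smooth = (qs_adv - qs.detach()).pow(2).mean()
    q_conservative = qs_adv.std(dim=0).mean()   # uncertainty penalty (OOD states)
    pi_smooth = F.mse_loss(policy(s_adv), policy(s).detach())
    return beta_q * (q_smooth + q_conservative) + beta_pi * pi_smooth
```

Note that each evaluation of the regularizer costs several extra forward/backward passes through the critic, which is where the training-time overhead the reviewers ask about comes from.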
All reviewers agree that the authors' response has addressed their primary concerns. Reviewer frMM had two reservations that resulted in a borderline rating: 1) concerns about how the adversarial samples were generated and 2) a request for evaluation on AntMaze. The authors' follow-up response and further experiments address 1 and partially address 2. It would be great to see RORL results on AntMaze in the final version. Overall, the performance of RORL is competitive with state-of-the-art methods on MuJoCo and Adroit tasks with fewer ensemble elements needed. The main benefit is improved performance against adversarial attacks, where RORL significantly improves over existing methods. I think the paper makes a nice contribution that the community will find valuable. I encourage the authors to think carefully about how to integrate the additional experiments into the paper to resolve the questions raised by reviewers.
This paper proposes DNN quantization with Attention (DQA), which uses a learnable linear combination of high, medium, and low-bit quantization at the beginning. It gradually converges to a single low-bit quantization at the end of training. Experiments show that DQA outperforms the naive quantization and the Binary-Relax method consistently across three datasets and two networks. Strengths: 1. According to the experiments, DQA performs better than the naive quantization strategy and Binary-Relax consistently across the experimented datasets and networks. Weaknesses: 1. The presentation needs to be improved. There are grammar errors and typos in the paper. 2. The paper compares the performance of DQA with only one related quantization work (Binary-Relax). This is not sufficient to demonstrate the effectiveness of the proposed work. There are many quantization works, covering both quantization-aware training and post-training quantization. It seems that some of them may have better performance in the experimented settings. For example, the LQ-Nets work reports an accuracy of 68% with 2-bit weights and 32-bit activations with ResNet-18 on ImageNet, but the DQA proposed in this paper only achieves an accuracy of 66.9%. Minor comments or questions: 1. How should one decide which quantization method to use (e.g., min-max, SAWB, BWN, TWN) when using DQA in practice? Appendix A only defines each quantization method but doesn't give any guidance on how to choose them. 2. For experiments on the ImageNet dataset using ResNet18, why not report R18+BR and MV2+BR results? 3. In section 4.2, the paper mentions all reported validation accuracies are the results of a single training run. It might be better to report averaged results across several runs even if the convergence of the networks is not noisy. 4. In Table 1, it seems that DQA using SAWB consistently gives better results than the FP version. Do you have any insight regarding this? Overall, I think the paper is not good enough due to the aforementioned nontrivial weaknesses. <doc-sep>The paper addresses the problem of compression of neural networks. The paper builds upon the Binary-Relax prior work. The precision is adapted during training with a mixture-based quantization method through temperature cooling and an ``attention'' vector a. The idea of the method, called DQA, is to progressively move from a mixture of quantization functions (mixing high with low precision, for instance 32-bit with 2-bit training) to a single one (low precision) towards the end of the training. The paper states that the method can be used with several types of quantization methods. The evaluation is carried out on computer vision architectures (ResNet18, MobileNet) for image recognition tasks (CIFAR-10, CIFAR-100, ImageNet ILSVRC 2012). 1) In my understanding, what the paper calls attention is simply a value weighting the importance of the different quantization functions (see Eqn. 2-3). Therefore this terminology is misleading in my opinion, as the attention vector a only depends on the “trainable parameter alpha” and does not depend on the input (either weights or activations) as one would imagine with this name. Maybe I misunderstood something, but this is what I infer from Eqn. (4), Eqns. (2-3) and Figure 1. 2) The paper does not consider in the literature review techniques for quantizing neural networks [A,B,C] that, to my knowledge, are state-of-the-art for quantizing popular neural networks.
[A] Permute, Quantize, and Fine-tune: Efficient Compression of Neural Networks, Martinez et al., CVPR 2021. [B] And the bit goes down: Revisiting the quantization of neural networks, Stock et al., ICLR 2020. [C] Training with Quantization Noise for Extreme Model Compression, Fan et al., ICLR 2021. At a high level, some elements are similar to these approaches, even though the details may differ. One noticeable similarity is the variation of precision, which is already present in the work by Fan et al. [C] when they consider blocks that are not quantized (i.e., 32 bits) versus some that are quantized with low precision, and choose randomly between them. In my own experience, choosing randomly is better than relaxation, so I guess the paper should have included such a comparison. In any case the paper should be better positioned against the recent literature. 3) The paper is compared to poor baselines. As a result, the paper reports results that do not look competitive with the state of the art [A,B,C] on ImageNet ILSVRC 2012, which is the most significant benchmark in the paper. For instance, Stock et al. [B] report results with R18 (Figure 3 left, Table 3) that are as follows: a compression factor of x20 at a top-1 accuracy of 67.87, which is stronger than the results in the submitted paper. Additionally, more recent papers [A] and [C] are compared to [B] on larger networks and show that they further improve results. Therefore I conclude that the proposed approach is not competitive, while additionally requiring a more involved scheduling that may not generalize as well to other training settings. The paper states that it could be combined with any compression method; therefore, in this context it would be worth using the same or similar quantization as in [A,B,C] and showing how the method compares to these approaches. 4) Formally I have nothing against having the introduction and the related work in the same section. In the case of this paper, I found that the introduction is actually more a related work section than a formal introduction providing the motivation and rationale of the paper at the core of the initial discussion. This discussion appears later in the background section. While I know this area well and therefore the problem at stake, I would advise reworking these two sections jointly. 5) The paper mentions that the method could be used for quantizing activations, but only addresses the case of weight quantization. I think that a lot of practical considerations would appear with activation quantization, so I would suggest either supporting this claim with experiments, or suppressing or softening this unsupported claim. 6) The paper needs some polishing. Some mistakes indicate that the paper was not analyzed by a spell-checker, for instance in the abstract: conterparts -> counterparts. The term bitwidth is not established (and occurred at least once with a typo). The paper does not demonstrate that the method is a significant contribution to the state of the art in quantizing neural networks. The experiments are only applied to the image classification task with small architectures, and in the setting that I found comparable in the literature, the results do not look great, which questions the significance of this work.
In the quantization procedure, multiple quantizers and the corresponding attention matrices are adopted to fuse the quantized weights or activations. Pros: - The paper is well written and the idea is easy to follow. - Extensive ablation studies are provided to evaluate different components of the proposed method. Cons: - More parameters are introduced in the training stage, such as α. This will increase the computation and storage cost. More theoretical and experimental analysis should be given to study this. - Multiple quantizers with different bit-widths are used in the proposed method, which will increase the storage and computation cost for quantization. - In the experiments, the authors compare their method with counterparts using the same bit-width n1. However, the proposed method has three quantizers with different bit-widths, and n1 is the lowest bit-width. Therefore, this comparison seems unfair. For a fair comparison, the baselines and the proposed method should be compared under the same computation and storage cost. - The proposed method has not been compared with state-of-the-art approaches, so the proposed method cannot be comprehensively evaluated. This work utilizes a learnable linear combination of high, medium, and low-bit quantization at the beginning while converging to a single low-bit quantization at the end of the training. In the quantization procedure, multiple quantizers and the corresponding attention matrices are adopted to fuse the quantized weights or activations, which will increase the computation and storage cost. Some experiments are conducted to evaluate the proposed method. However, it lacks a comprehensive and fair comparison. <doc-sep>This work presents a training method for low-bit network quantization. While training, it employs a multi-bitwidth paradigm in order to alleviate the nonsmooth optimization landscape at lower bitwidths. It uses a temperature parameter and a penalty term to force the network to gradually converge to the target low bit. Experiments are conducted on CIFAR10, CIFAR100, and ImageNet classification with ResNet-18 and MobileNetV2. Strength: - The authors show that the proposed multi-bitwidth training effectively reduces quantization error and helps smooth the loss landscape, with some sample cases and visualizations. Weaknesses: - The overall performance of the given approach is not satisfying. Most recent quantization papers mainly conduct experiments on the large-scale ImageNet dataset since the CIFAR datasets are prone to easy overfitting. Almost all papers I know of doing low-bit quantization have better results than this one. On weight-only 2-bit quantization with MobileNetV2: SAT 66.8 (Neural Network Quantization with Scale-Adjusted Training, BMVC 2020), DeepComp 58.1 (Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding, ICLR 2016), this work: 52.2. On both weight and activation 2-bit quantization with ResNet18: PACT 64.4 (PACT: Parameterized clipping activation for quantized neural networks, arXiv 2018), LQ-Nets 64.9 (LQ-Nets: Learned quantization for highly accurate and compact deep neural networks, ECCV 2018), SAT 65.5 (Neural Network Quantization with Scale-Adjusted Training, BMVC 2020), this work: 60.4. - Some technical details are not clear. The authors use a penalty term (equation 6) to regularize the attention weights of the different bitwidths. However, it is not known whether all bitwidth mixtures in the network will converge to the lowest bit, which is the target.
If some bitwidths are not properly converged, will this cause any issue with performance? - Writing needs improvement. There are a lot of grammar errors and typos: Page 2 last paragraph: a way how to train a (delete how); Page 3 background 4th paragraph: Note, that because (no comma); Page 3 background 4th paragraph: this problem is most pronounced for low bitwidhts (typo). Overall, this work presents a new approach to help improve the training convergence of low-bit quantization for neural networks. Due to the weak results on large-scale datasets and unclear technical details, I do not think it meets the bar at ICLR.
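For readers trying to picture the mechanism these reviews describe, here is a minimal, purely illustrative sketch of a learnable soft mixture over several weight quantizers that is annealed toward the lowest bitwidth. The uniform quantizer, the softmax attention over bitwidths, and the simple penalty term are assumptions made for this sketch, not the submission's actual formulation (which, per the reviews, uses its own attention matrices and an Equation 6 penalty).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def uniform_quantize(w, n_bits):
    """Symmetric uniform quantization with a straight-through estimator."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = w.abs().max().clamp(min=1e-8) / qmax
    w_q = torch.round(w / scale).clamp(-qmax, qmax) * scale
    # Forward pass uses the quantized weights; gradients flow to the full-precision weights.
    return w + (w_q - w).detach()

class SoftMultiBitWeight(nn.Module):
    """A weight tensor represented as an attention-weighted mixture of several bitwidths."""
    def __init__(self, shape, bit_list=(8, 4, 2)):
        super().__init__()
        self.weight = nn.Parameter(0.05 * torch.randn(shape))
        self.bit_list = bit_list
        self.logits = nn.Parameter(torch.zeros(len(bit_list)))  # one logit per quantizer

    def forward(self, temperature=1.0):
        # Lower temperature -> sharper attention over the candidate bitwidths.
        alphas = F.softmax(self.logits / temperature, dim=0)
        w_mix = sum(a * uniform_quantize(self.weight, b)
                    for a, b in zip(alphas, self.bit_list))
        # Penalty that pulls the attention toward the lowest (target) bitwidth.
        target = torch.zeros_like(alphas)
        target[-1] = 1.0
        penalty = F.mse_loss(alphas, target)
        return w_mix, penalty

layer = SoftMultiBitWeight((16, 16))
w, reg = layer(temperature=0.5)  # add a scaled `reg` to the task loss during training
```

In this toy version, annealing the temperature and increasing the weight on the penalty over training would drive the attention toward the 2-bit quantizer, which is exactly the convergence behaviour the second review asks the authors to verify.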
This paper proposes a new learning procedure for quantizing neural networks. Basically, the DQA method proposed in this paper uses attention to obtain a linear combination of existing network quantization techniques and uses it to pursue more efficient quantization. Overall, it seems the submission was written in haste, as there are many typos and errors. Above all, the claimed motivation that the method can be applied to various existing techniques is not demonstrated experimentally at all, since it only covers one somewhat obsolete technique. In addition, as in [1], it seems necessary to quantize not only weights but also activations, or to validate on lightweight networks such as MobileNetV2 rather than ResNet. [1] Cluster-Promoting Quantization with Bit-Drop for Minimizing Network Quantization Loss, ICCV 2021
This is a new tool for inferring underlying properties of partially observed spiking neural networks (a generalization of the fully observable solution by Rene et al.), based on mean-field modeling of the net effect of unobserved neurons; it is validated on within-model-class simulated data. Strengths: - clear setup and motivation - novelty: an interesting attempt at extracting more interpretable latent models, with a focus on modeling spiking activity of neurons (including unobserved ones) - a nice mix of traditional computational neuroscience and ML estimation Weaknesses: - writing clarity: I would have appreciated a clear spelling out of the graphical model and a separation between what counts as observations, inferred latent variables, and model parameters (the current text mixes in mean-field technicalities, which makes it less clear than it should be); in particular, the link between n and y (Eq. 4b back-referring to Eq. 1) was difficult to get from the text, and I found both Sections 3 and 4 meandering and hard to follow in places - the interpretability of the parameters may quickly become problematic for out-of-distribution data, in particular when it is not clear that the observed neural responses can be easily partitioned into a small set of homogeneous subpopulations (especially given the extremely strict notion of homogeneity required here); arguably heterogeneity is a key feature of brain circuits, yet it is not clear how sensitive the estimation is to deviations from the strict homogeneity assumption - metastable dynamics deviate by construction from the model assumptions of the most commonly used latent dynamical systems models, making them a perhaps unfair choice as the main benchmark for comparison across models - simulations are often somewhat anecdotal (Fig. 2DE) - the comparison to the fully observed scenario is rather trivial; if the model assumptions are radically different from the ground truth, it is unreasonable to expect its estimated parameters to match the data Minor: - 'photo-stimulation' is nonstandard terminology, especially when talking to an experimentalist -- causal manipulation, optogenetics, etc. would prove more useful. Technical limitations of the applicability of the procedure to real data should be discussed a lot more. No ethical issues. <doc-sep>The authors propose a new type of latent space model for neural spike trains, based on a spiking neural network (SNN). They use mean-field approximations to abstract parts of the SNN, resulting in the latent dynamics, but keep the initial formulation for the observed neurons. The authors show that, after pre-defining a single or multiple neural clusters, they can recover the connectivity value(s) with an EM algorithm from snippets of 10 s of activity of only some observed neurons. The proposed latent model is able to reproduce some key functionality of the SNN and outperforms other latent variable benchmark models. ### Strengths The paper is clearly written and the authors define the posed problem nicely. The work is technically sound and the experiments are worked out thoroughly. The proposed latent model is an interesting approach to bridge the gap between pure statistical models and more biologically interpretable models. It has potential for further investigations and applications to experimental data. However, there are several weaknesses: ### Weaknesses 1. The authors test their setup only on one parameter set; it is not clear that the results are robust to different network configurations. 2.
The comparison which the authors make between the different models is quite unfair, as the ground truth data is generated from the microscopic model the neuLVM is based on. They therefore share the same inductive biases, whereas the other three models could be advantageous on differently generated data. 3. A demonstration on real experimental data would have been nice, but this is potentially beyond the scope of this manuscript. 4. There are several open questions (see questions), especially when it comes to an application to experimental data. 5. The model makes quite a few assumptions on the network structure (number of E/I clusters, base connectivity pattern, etc.). This is a limitation which the authors should discuss in more detail. ### Minor comments: - Fig. 2F: A log scale for the y-axis could be more appropriate. ### Update: After considering the responses to this review and the reviews of the other reviewers, the score was raised by +1. The authors mention two shortcomings before applying their method to experimental data. However, there are more open questions and limitations which could be commented on (see also questions). The authors did not comment on potential negative societal impact of their work. <doc-sep>In the paper "Mesoscopic modeling of hidden spiking neurons" the authors introduce a neuronally grounded latent variable model for fitting populations of observed and hidden units. The latter are described at the mesoscopic level. The model is evaluated on synthetic data with a single homogeneous population and with multiple populations, and compared against other competing methods. The authors address a very difficult problem, the problem of dealing with unobserved populations in spiking neural networks, which has plagued SNN modeling for decades. They propose a very promising approach, introducing inductive biases from biology to model hidden populations, while still keeping the level of description of these populations coarse. Therefore, the significance of this work is excellent. The clarity of the paper and figures is excellent too. The text is very well written and organized, focusing on the single hidden population in the main paper and the more general case in the appendix. All equations are clear, with symbols well explained. The work builds on the mesoscopic modeling of SNNs by Rene et al. and extends it to a latent variable model, which is trained by the Baum-Viterbi algorithm. This is an interesting and novel approach. The evaluation of the model shows some of its strengths when compared to PLDS, SLDS and GLMs, but falls short of convincing me that this would hold in more realistic cases where there is some variability in mostly homogeneous hidden populations. See Questions and Limitations below. The authors do not see their method as being ready for application to real data yet (cf. Discussion). Nevertheless, this work is an important stepping stone for dealing with unobserved population activity. The authors haven't made their source code available! To my understanding, the source code should be available to reviewers. The authors did not discuss any societal impact. This is appropriate for this work. The authors briefly discuss limitations regarding non-identifiability and the potential need for preprocessing. However, given the seemingly strong assumption of homogeneous hidden populations, I would have expected a more thorough discussion of limitations.
<doc-sep>The paper proposes a latent variable model for modeling observed activity in spiking neural networks while taking into account the activity of unobserved neurons. To do this, the authors propose a mean-field approximation approach to reduce the effect of all unobserved neurons to a lower dimensional summary quantity with simplified parameters. The proposed model, neurLVM, is fit via the approximate “hard EM” algorithm. The model is validated in two simulations, where it provides accurate recovery of net population activity and of switching states. The fitted, reduced neurLVM model was also able to predict the effect of stimulation on the true simulated network. The paper is clear and of high-quality. It provides an original contribution for including the effect of unobserved neurons when fitting spiking neural network models. This is a significant step forward in models of neural spiking responses. The proposed methods and experiments were clearly described with sufficient detail. Additionally, the model showed good performance in the simulations. One weakness is the homogeneous population assumption seems to limit the generality of the method. It is not clear how the approach would work for heterogeneous populations with higher dimensional activity. Nonetheless, I think the proposed method is an important step forward. === Update === I appreciate the authors additional experiments and responses to my questions. In particular, I think including the heterogeneous population experiment strengthens the evaluation of the proposed approach. Based on the additional evaluations and clarity, I have increased my score by a point. The authors discussed two limitations, parameter recovery and choices of parameters and architecture for applications to real data, in their conclusion.
Dear authors, Congratulations on your paper being accepted! The reviewers unanimously recommended acceptance. The reviewers made a number of recommendations on how to improve the paper further, in particular with respect to clarity of writing and explaining the motivation behind the different analyses. We strongly encourage you to use this feedback to improve the paper; if need be, additional clarifications can be added in the supplement. In addition, it would indeed be highly useful to make your source code publicly available, as you indicated in your response. Best, your AC
Summary ------- This paper proposes a fast general framework (FNAS) for the neural architecture search (NAS) problem that enhances processing efficiency by up to 10x. Three interesting strategies (UAC, LKP, AEB) for reinforcement learning (RL) processing are introduced in the proposed FNAS and evaluated with extensive experiments to show their efficacy. In particular, the assumption that architecture knowledge is transferable is verified by real observations. However, the authors pay more attention to presenting observations and the high-level reasoning behind the framework design, and thus neglect the technical depth of the key component (UAC), which has the highest impact on the overall performance. Strengths --------- - The paper is well organized and well written and thus easy to understand, including motivation, approach, and experiments. - The proposed framework (FNAS) is general, practical and convincing. The three strategies (UAC, LKP, AEB) for RL processing are based on real observations shown in Figure 4, which supports the motivation of the approach. - The evaluations are conducted with extensive experiments and solid results, including ablation studies for the three different components LKP, UAC, and AEB (although these might not be sufficient; see the weaknesses below). Weaknesses ---------- - The technical depth is neglected. For example, the Uncertainty-Aware Critic (UAC) should be considered the key component of the FNAS framework, because Table 4 shows that the UAC has the highest impact on (biggest contribution to) the overall performance in terms of efficiency. However, there is no technical/mathematical description of how the uncertainty network $U$ is obtained/prepared, and how $U$ contributes to the NAS in detail. - Discussions regarding possible over-fitting are not sufficient. For example, the uncertainty constraint (threshold $\\delta$) is introduced in the UAC strategy (Sec. 4.1), but there are no experimental results showing the impact of this hyperparameter in the FNAS framework, which has to be considered a trade-off parameter against over-fitting effects. - Similarly, in the AEB strategy (Sec. 5), the buffer size ($N$) and the annealing term ($\\beta$) are hyperparameters that should affect over-fitting during RL processing. However, the authors do not provide any results to confirm their impact. For example, why did the authors choose the buffer size $N=10$ in the experiments? - The testing of FNAS on vision tasks is not sufficient. The paper provides results on classification (ImageNet) and face recognition tasks, but what about other tasks such as object detection, tracking, person re-identification, and segmentation? Other Questionable Points ------------------------- - The loss functions shown in Eq. (1) and Eq. (2) seem too simple. Are they really sufficient to obtain high performance? Are there potential loss functions or improvements that would give better performance? - In Table 1, why are the "GPU Hours" numbers for MBv3 and EfficientNetB0 not shown? This is inconsistent with the textual description in Sec. 6.2 ("there is nearly 10x of acceleration ..."), which therefore cannot be confirmed. - In Tables 1 and 2, regarding "GPU Hours" numbers like 20,000 and 2,000, do they indicate the real runtime of the experiments or only estimated values? As we know, 20,000 hours are roughly 2.3 years, and 2,000 hours are roughly 2.8 months.
The reliability of the experiment results might be questionable. - In the references, there are too many informal publications cited from arXiv. Instead, they should be replaced by their formal publications at the corresponding conferences or journals. <doc-sep>Summary: The paper proposes a few improvements to sampling-based NAS using RL: 1) an uncertainty-aware critic to decide whether a sample needs to be trained; 2) a life-long knowledge pool to initialize the samples that need training; and 3) an architecture experience buffer to reuse old samples for RL training. The experiments are done on ImageNet, facial recognition, and transferability to object detection. The proposed methods are compared with related works. Finally, the paper finishes with ablation studies on both the effectiveness and transferability of the proposed modules. Strengths: - The paper is well written with a clear flow and structure. - The three proposed modules are novel and reduce the search cost significantly while achieving better performance. - It's great to see the authors have done a comprehensive comparison with the related methods for multiple tasks. The ablation study also demonstrates the effectiveness of the three proposed modules. Weakness: - The improvement of Top-1 Acc. on ImageNet is marginal (without the 1.3 scale-up) and worse than some of the recently proposed differentiable NAS work, which requires far less search cost (comparable to DARTS). - There could be more discussion of related work studying uncertainty in RL or in supervised learning, given that it is one of the core modules in the proposed pipeline and uncertainty is an important topic in general.<doc-sep>In this paper, the authors propose a sampling-based approach to neural architecture search which combines a life-long knowledge pool, an uncertainty-aware critic, and an architecture experience buffer. This approach is demonstrated on vision tasks involving days of TPU training. Overall, I rank this 5, marginally below the acceptance threshold. NAS is an underexplored topic, but the paper seems like an engineering project that combines multiple existing ideas from others' work, and it lacks theoretical depth: there is no clear mathematical formulation of the approach and no reasoning on why the approach should work. Pros: + Having access to TPUs + the NAS topic + Integration of lifelong learning, NAS, and several other ideas Cons: - Why should lifelong learning work here, considering that we are not in a multi-task learning or nonstationary environment scenario? How do we know that the experimental results are not a coincidence? - Can we put down the whole framework mathematically? It seems that this paper has only two formulas, for some loss functions. - Can we reason about the math? For example, are there any ideas to better organize the knowledge pool and the architecture experience buffer for a large number of architectures and parameters encountered? <doc-sep>This paper proposes an RL-based neural architecture search approach that decreases the search cost by introducing three modules to estimate uncertainty, restore parameters, and store old models. Compared to MNAS, it can reduce the search cost significantly, up to 10x, while giving competitive accuracy. This paper is generally well written and well motivated, except for some unclear sentences: - Architecture knowledge is not well described. Compared to parameter knowledge, the authors should clarify what each is and the difference between them.
- In Figure 4, it is unclear what the operators are and which operators are similar or different. Moreover, details are missing on how the 100 optimal models are sampled. - In Equation 1, the definition of the reward is missing. - LKP (the acronym is first introduced on page 5) is not described. Even if it accelerates the search process, the proposed module entails additional memory because it stores learned networks. So I think there's a trade-off between the search cost and the total memory we need to reserve, and I wonder whether reducing the search cost outweighs the increase in required memory. In Table 3, why does FNAS have higher FLOPs than MNAS? This should be properly elaborated. In Table 4, the cases using two modules are missing. It would be great to see these results to determine which component actually affects the performance.
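Since the reviews above fault the paper for not spelling out how the uncertainty network is built or used, it may help to show the kind of mechanism one would expect to see described. The following is only a hypothetical illustration (an ensemble-disagreement accuracy predictor with a skip threshold δ); it is not taken from the paper, whose actual UAC design is exactly the detail the reviewers say is missing.

```python
import torch
import torch.nn as nn

class AccuracyPredictorEnsemble(nn.Module):
    """Ensemble that predicts an architecture's accuracy from a fixed-size encoding.

    The spread of the ensemble predictions serves as an uncertainty estimate: only
    architectures the predictor is unsure about are actually trained and evaluated.
    """
    def __init__(self, encoding_dim, n_heads=5, hidden=64):
        super().__init__()
        self.heads = nn.ModuleList([
            nn.Sequential(nn.Linear(encoding_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(n_heads)
        ])

    def forward(self, arch_encoding):
        preds = torch.stack([head(arch_encoding) for head in self.heads], dim=0)
        return preds.mean(dim=0), preds.std(dim=0)

def maybe_evaluate(arch_encoding, predictor, train_and_eval, delta=0.02):
    """Skip the expensive training step when the predicted uncertainty is below delta."""
    mean_acc, std_acc = predictor(arch_encoding)
    if std_acc.item() < delta:
        return mean_acc.item()            # trust the cheap prediction
    return train_and_eval(arch_encoding)  # otherwise pay for real training
```

A write-up at roughly this level of detail (how uncertainty is defined, how δ is chosen, and how the skipped samples affect the RL reward) is what the first review asks the authors to provide.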
This paper presents a mechanism for speeding up the neural architecture search process based on accumulated experience, which the reviewers found compelling and which yields significant improvements in performance. This is an intriguing idea. However, there were concerns about clarity that need to be addressed, and, more concerning, the paper lacked technical depth or details in several aspects described in the reviews. The authors' subsequent response and revisions have somewhat addressed these issues. The reviewer discussion had mixed opinions, with some for weak acceptance and others for weak rejection. There were compelling points that the contribution is significant, but overall this paper would benefit from thoroughly addressing the shortcomings mentioned in the reviews before it is ready for publication.
The paper presents a NAS optimization algorithm for SNN search. +The authors present interesting results with the differentiable NAS search. -There are two major works related to NAS for SNNs that have recently come out [5], [11]. The authors have not cited these works. It makes me wonder what the authors' contribution is compared to these works. [5] argues that training SNNs using standard NAS methods might be too complex because SNNs need long training times, so they come up with a NAS-without-training technique. [11] presents a differentiable NAS technique. Both works show good results on a variety of datasets and discuss the intricacies of architecture search. -The authors have also compared their technique to selected works in Table 1. There is a lot of work from Priya Panda's group at Yale, Emre Neftci's group, and many others with regard to SNN training that shows SOTA results on DVS and static datasets. The authors have failed to acknowledge the most recent works. Below is a list of publications (not exhaustive) that the authors should check: [1] Towards spike-based machine intelligence with neuromorphic computing, K. Roy, A. Jaiswal, P. Panda, Nature 575 (7784), 607-617. [2] Enabling spike-based backpropagation for training deep neural network architectures, C. Lee, S. S. Sarwar, P. Panda, G. Srinivasan, K. Roy, Frontiers in Neuroscience, 119. [3] Rate Coding or Direct Coding: Which One Is Better for Accurate, Robust, and Energy-Efficient Spiking Neural Networks?, Y. Kim, H. Park, A. Moitra, A. Bhattacharjee, Y. Venkatesha, P. Panda, ICASSP 2022. [4] Neuromorphic Data Augmentation for Training Spiking Neural Networks, Y. Li, Y. Kim, H. Park, T. Geller, P. Panda, arXiv preprint arXiv:2203.06145. [5] Neural architecture search for spiking neural networks, Y. Kim, Y. Li, H. Park, Y. Venkatesha, P. Panda, arXiv preprint arXiv:2201.10355. [6] Optimizing deeper spiking neural networks for dynamic vision sensing, Y. Kim, P. Panda, Neural Networks 144, 686-698. [7] Federated Learning with Spiking Neural Networks, Y. Venkatesha, Y. Kim, L. Tassiulas, P. Panda, IEEE Transactions on Signal Processing, 2021. [8] Beyond classification: directly training spiking neural networks for semantic segmentation, Y. Kim, J. Chough, P. Panda, arXiv preprint arXiv:2110.07742. [9] Revisiting batch normalization for training low-latency deep spiking neural networks from scratch, Y. Kim, P. Panda, Frontiers in Neuroscience, 1638. [10] AutoSNN: Towards Energy-Efficient Spiking Neural Networks, B. Na, et al., arXiv preprint arXiv:2201.12738 (2022). See weakness section. <doc-sep>This work aims to search for both the optimal SNN architecture and the hyperparameters of surrogate gradient (SG) functions. In the architecture search phase, the authors use DARTS and refine the search to different granularities (layer-level and cell-level). The search for the SG function (DGS) focuses on optimizing the temperature of the Dspike SG function. The results show that the searched architectures achieve SOTA performance on image classification and an event-based stereo matching task. **Pros** 1. The architecture search alone significantly increases the performance on image classification tasks, which reveals the potential to apply the method to various more complicated tasks. 2. The idea of searching the hyperparameter of the SG function is novel, simple but effective. **Cons** 1. The idea of applying NAS to SNNs was no longer novel by the NeurIPS submission deadline. SNASNet [1] and AutoSNN [2] have proposed that NAS methods can be used for searching the structure of SNNs.
The latter has been accepted at ICML 2022. 2. The articulation of the training pipeline is not highlighted and is somewhat unclear to me. See the **Questions** below. 3. The trials of search on SG functions are confined to the Dspike function. [1] Youngeun Kim, et al. "Neural architecture search for spiking neural networks." *arXiv preprint arXiv:2201.10355* (2022). [2] Byunggook Na, et al. "AutoSNN: Towards Energy-Efficient Spiking Neural Networks." *arXiv preprint arXiv:2201.12738* (2022). N/A <doc-sep>In this work, the authors propose a differentiable hierarchical search framework for spiking neurons, where spike-based computation is realized in both the cell-level and the layer-level search space. Meanwhile, the authors find effective SNN architectures under a limited computation cost. In order to avoid the standard SG approach, which leads the network into suboptimal solutions, the authors propose a differentiable surrogate gradient search method where the SG function can be efficiently optimized locally and in parallel. Finally, this work shows some interesting results on image classification tasks. Strengths: 1. A hierarchical differentiable surrogate gradient search framework is proposed to obtain better performance from the spiking model. 2. Significant improvements in energy savings on deep stereo. Weakness: 1. In terms of writing, some methods that were not proposed in this work were placed in the methods section. There are also some typos in terminology. 2. The results of the ablation experiments and the analysis of some elements do not match. 3. The font in the figures seems to be small and not clear enough, which requires very careful reading to find the valuable information. 4. The percentage improvement of the proposed method varies greatly across the two image classification datasets. Moreover, the improvement on CIFAR-10 is only 0.18. The authors illustrate the limitations of their work. <doc-sep>In this submission draft, the authors devise a differentiable hierarchical search framework tailored for SNNs. At the same time, this framework is able to search the surrogate gradient in a differentiable manner. The methods are validated on the CIFAR datasets and an event-based deep stereo dataset. Overall this is an interesting work. The authors come up with an end-to-end differentiable framework that addresses two critical problems in SNNs: the architecture and the surrogate gradient. 1. Developing SNN-oriented architectures is novel and necessary, even though this work is not the first attempt in the community. 2. Searching the SG is interesting, and I am glad to see a learning-based method addressing the issue. 3. The results on the CIFAR10/100 datasets are promising. 1. The two prior SNN NAS papers need to be included in the discussion or experiments. See references below. 2. A critical problem is that there is no comparison between the searched architecture and the ResNets used in other works. What if the searched architecture has a higher capacity than the ResNets? 3. An ablation study on the DGS is recommended: the authors should compare a static temperature gradient, DGS, and [31] on the same neural architecture and under the same training recipe. 4. It would be better to have an ImageNet result. ------ **References** Na B, Mok J, Park S, et al. AutoSNN: Towards Energy-Efficient Spiking Neural Networks[J]. arXiv preprint arXiv:2201.12738, 2022. Kim Y, Li Y, Park H, et al. Neural architecture search for spiking neural networks[J]. arXiv preprint arXiv:2201.10355, 2022.
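As background for the surrogate-gradient (SG) search discussed in these reviews, below is a minimal sketch of a spiking activation whose backward pass uses a temperature-controlled surrogate derivative. The sigmoid-derivative surrogate is a generic stand-in chosen for illustration; the paper's Dspike function and its DGS update rule are not reproduced here.

```python
import torch

class SpikeWithSurrogate(torch.autograd.Function):
    """Heaviside spike in the forward pass, smooth surrogate derivative in the backward pass."""

    @staticmethod
    def forward(ctx, membrane_potential, temperature):
        ctx.save_for_backward(membrane_potential)
        ctx.temperature = temperature
        return (membrane_potential > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (u,) = ctx.saved_tensors
        t = ctx.temperature
        sig = torch.sigmoid(t * u)
        surrogate = t * sig * (1.0 - sig)      # derivative of a sigmoid with slope t
        return grad_output * surrogate, None   # no gradient w.r.t. the temperature here

def spike(u, temperature=3.0):
    return SpikeWithSurrogate.apply(u, temperature)

u = torch.randn(4, requires_grad=True)
spike(u, temperature=5.0).sum().backward()
print(u.grad)  # nonzero thanks to the surrogate; a larger temperature sharpens it
```

The temperature is exactly the kind of hyperparameter that, according to the reviews, DGS optimizes in a differentiable way rather than fixing by hand; in this sketch it receives no gradient, which corresponds to the static-temperature baseline one reviewer asks to see ablated.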
This paper proposes a new architecture search algorithm for spiking neural networks (SNNs). The key insight is to optimize both the cell and the architecture level of the SNN. Convincing numerical results are provided on image classification tasks (CIFAR10, CIFAR100, and an event-based stereo task). One concern raised by the reviewers regards the comparison to existing work (some of which appears to be very recent). This point is raised by all the four reviewers (although it has led to a rather large variance in their initial assessments). After an in-depth discussion between authors and reviewers and a discussion between AC and reviewers as well, it appears that this concern has been addressed in a satisfactory way. Other concerns (e.g., training pipeline and versatility by reviewer cjsQ) have been also resolved, and the remaining ones (measuring energy accurately as mentioned by reviewer LhUf, and computational overhead on neuromorphic hardware as mentioned by reviewer hUzC) have been regarded as out of scope. In summary, the reviewers have found the authors’ response convincing and have reached a consensus towards accepting the paper. After my own reading of the manuscript, I agree with this assessment and I am happy to recommend acceptance. As a final note, I would like to encourage the authors to include in the camera ready the discussions related to the feedback from the reviewers.
This paper presents a very preliminary first step towards designing a foreground/background CNN that is robust to adversarial attacks. The authors of the paper have not properly quantified their results. While the intention of the paper is good, it unfortunately does not meet the bar/standard for an ICLR submission, and may also be reporting misleading results given concerns about the correctness of how the attacks were computed. This paper's main weaknesses are: * No use of error bars or confidence intervals for the adversarial attacks or blur-based perturbations * Section 2.3: Gaussian blur is not a type of adversarial attack; it is an out-of-distribution type of image distortion/manipulation. * The authors should have used PGD-based attacks to strengthen their claims. * The authors should expand on using different CNN-based architectures. * Clarity: it is not obvious how the fore/background networks are partitioned into separate streams and then unified to be fully end-to-end differentiable. * Most importantly: I am not convinced the results here are veridical given the way the adversarial attacks have been made. It seems from Figure 1 that the fusion network is not end-to-end differentiable. If it is not end-to-end differentiable, then how is the gradient computed for the FGSM attack to actually maximize the loss? (Maybe I missed something?) Overall, the idea of using parallel foreground/background networks is appealing for adversarial robustness, but there are still some missing works I encourage the authors to look into: * Putting visual object recognition in context. Zhang, Tseng & Kreiman. CVPR 2020. * Human peripheral blur is optimal for object recognition. Pramod, Katti & Arun. ArXiv 2020. * Emergent properties of foveated perceptual systems. Deza & Konkle. ArXiv 2021. The figures in general could all use more work. While I find the idea interesting, and I like the direction the authors are going, this work is still quite preliminary and needs more work. <doc-sep>In this paper, the authors study the problem of adversarial training and try to leverage a fusion-based method against adversarial attacks. This method fuses features from the foreground and background extracted by pre-trained models, and its performance is tested against both Gaussian blur and gradient-based attacks. The authors claim three main explorations: * Exploring the effects of adversarial attacks on both the context and object feature space. * Exploring the benefits of fusing different modalities against adversarial attacks. * Exploring the benefits of context features. Strengths: * Robustness to adversarial examples is a hot topic within the ML community. However, relatively little attention has been spent on exploring fusion-based models against adversarial attacks. Therefore, I believe that the main focus of this paper is very relevant to the ICLR community. Weaknesses / discussion questions: * The paper exceeds the page limit, which is not fair to other submitted manuscripts that stay within the limit. * The integration of fusion and adversarial learning should be a very interesting topic to study. The contribution of this paper seems to be making explorations within this domain. The authors should point this out explicitly, instead of saying "Summary of our approach" in Section 1.4. Also, the paper does not make clear which contributions are novel and which come from existing work.
I think it would be better to separate a Related Work section from the Introduction, and, for clarity, to describe more prior work on making fusion networks robust to adversarial attacks (e.g., [1]) and the differences between your method and other methods. Then, in the Methodology section, the authors describe how to leverage and fuse pre-trained models and test their performance against adversarial attacks. The authors should highlight their **proposed** method; otherwise, the paper reads more like a technical report and lacks novelty. In my opinion, the contribution of this work is not enough. * The authors should gather more prior work, re-design the experimental settings, compare with other related methods, and demonstrate performance against stronger attacks (e.g., PGD, CW and AA). * The paper seems to have been written in a rush. There are several formatting errors, typos, grammatical errors, and sentences that fail to convey their ideas. The writing of the paper needs polishing. All the citations are mixed with the main text, making the paper not easy to follow. Figures are blurred; for example, the text in Figures 4 & 5 is not clear and appears stretched. [1] Yu et al., Towards Robust Training of Multi-Sensor Data Fusion Network Against Adversarial Examples in Semantic Segmentation, IEEE ICASSP, 2021. This paper studies the problem of adversarial training and tries to leverage a fusion-based method against adversarial attacks. The integration of fusion and adversarial learning should be a very interesting topic to study. However, I have concerns about the technical quality, the novelty of the manuscript, and the violation of the page limit. All of these lead me to recommend its rejection. <doc-sep>The paper tackles the adversarial example problem. The authors propose an approach that is motivated by the way biological systems employ multi-modal information to recognize categories of objects. Specifically, the approach combines two pre-trained models, which are expected to focus on the foreground and background, respectively. Then the foreground module is fine-tuned for downstream tasks while the background module is left unchanged. The authors demonstrate that they obtain better performance against blur and FGSM. There are three major weaknesses in this paper: 1. The pre-trained models selected for recognizing foreground and background are not convincing. There is no proof that the one trained on ImageNet can be used as a foreground object detector. If you check the detailed class labels of ImageNet, you will find there are many classes that are similar to Places365, and vice versa. 2. The novelty is limited. The method can be seen as an ensemble of different models. Moreover, FGSM only targets the foreground module, leaving the background module untouched. 3. The experiments are weak, including the selection of datasets and attack methods. Due to the weaknesses mentioned, I recommend rejecting this paper. <doc-sep>This work proposes to enhance the robustness of DNNs by fusing context information from the background. It first studies the effect of blur on foreground- and background-based DNNs and observes that fusing the two sources of information helps improve accuracy under different blur effects. Then, it further extends to adversarial attacks via FGSM, and observes the advantages of using background information on the MSCOCO and CIFAR-10 datasets. Finally, it proposes a regularization method to reweight the foreground-related weights during training. I have the following concerns: 1.
The idea of enhancing adversarial robustness via foreground and background is not novel and has been studied in [A]. [A] reaches similar conclusions but with a more challenging attack, e.g., PGD, instead of FGSM. 2. It is not clear why MS-COCO was chosen as the subject dataset. Commonly used datasets for adversarial attacks are ImageNet and CIFAR. Why not use ImageNet? 3. Why choose Gaussian blur as a perturbation? Note that recent work [B] has studied adversarial attacks from the angle of motion blur. In contrast to Gaussian blur, adversarial motion blur can fool DNNs via gradient information, like the traditional noise attack. 4. All figures show obvious distortions and there are a lot of typos. This work may have been rushed to the deadline. [A] Towards Robustness against Unsuspicious Adversarial Examples. [B] Watch out! Motion is Blurring the Vision of Your Deep Neural Networks. NeurIPS 2020. Overall, the main concerns with this work are the novelty and the unclear experimental setups.
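A recurring question in the reviews above is whether the gradient-based attacks were computed through the full fused model. For reference, the standard FGSM formulation (Goodfellow et al., 2015) requires exactly that: a gradient of the loss with respect to the input pixels through every branch. The two-stream model below is a toy stand-in for the foreground/background architecture, not the authors' network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoStreamFusion(nn.Module):
    """Toy foreground/background fusion classifier (illustrative architecture only)."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.foreground = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                                        nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.background = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                                        nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(16, n_classes)

    def forward(self, x):
        return self.head(torch.cat([self.foreground(x), self.background(x)], dim=1))

def fgsm(model, x, y, eps=8 / 255):
    """One-step FGSM: perturb the input in the direction of the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

model = TwoStreamFusion()
x, y = torch.rand(2, 3, 32, 32), torch.tensor([1, 3])
x_adv = fgsm(model, x, y)  # if any branch blocks gradients, x.grad (and the attack) breaks
```

If gradients are taken through only one stream (as the third review suspects when FGSM targets the foreground module alone), the resulting attack is weaker than a true white-box attack on the fused model, which is why the reviewers ask for clarification and for stronger attacks such as PGD.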
This manuscript proposes an information fusion approach to improve adversarial robustness. Reviewers agree that the problem studied is timely and the approach is interesting. However, they note concerns about the novelty compared to closely related work, the quality of the presentation, and the strength of the evaluated attacks compared to the state of the art, among other issues. There is no rebuttal.
This paper formulates the disentanglement problem from an information-theoretic perspective, focusing on an objective that encourages a compositional disentangled feature space in the layers that precede the final latents. With this objective, the authors describe a new method using a Gate of Mixture-of-Experts to implement the compositional disentangled reconstruction objective. Some of the terms require mutual information estimation, for which they use MINE estimators. They run experiments across dSprites and 3DShapes and look into reconstruction error and different disentanglement metrics, observing that their method outperforms existing beta-VAE-like baselines that have no compositional incentives. They also analyse the loss components with different architectures and observe that more compositionality in the architecture yields better disentanglement. Finally, they look into some ablations of the regularisation pressure and into data efficiency on downstream tasks. Overall, I am pretty happy with the paper. It's mostly well written and organised. 1. Positives * Section 2 (Compositional disentanglement learning) is well organised and sets up the scene well for the method in Section 3. * A good level of implementation detail is available, such as architectures, the estimator used, etc. The experiments are well conducted and common mistakes were avoided AFAICT. * Use of standard datasets and metrics well established in the field. 2. For improvement: * There has been some progress in the use of hierarchical VAEs, which can be interpreted as applying disentanglement regularisation to other layers and making it compositional in a similar fashion to this work, e.g. NVAE (A. Vahdat, J. Kautz, 2020). * I would be a bit more careful with the tone of claims about the requirement of compositionality for disentanglement. Figure 1 is only evidence from a toy example, not an actual demonstration. So statements such as "fig 1 shows when ... is not effectively disentangled." (Section 1) and "To achieve better disentanglement between ... their input feature sets .. are expected to be disentangled as demonstrated in our case study" (Section 2.2) could be watered down a little. * It's unclear if there are benefits from using MINE and the architecture in some of the experiments. For example, if I understood correctly, the beta-VAE objective yields better metrics in figure 6 than in figure 3. * The paper left me wondering what the disentanglement metrics are on the preceding layers; looking into MIG/SAP/DCI-D on m^L-1 and m^L-2 seems like a straightforward analysis that should be in the appendix or even in the main paper. * Some ablations on the architecture itself also seem to be missing (the ablations on the loss are good). My intuition is that the Gate of Mixture-of-Experts fits quite well with the disentanglement that we want because of its top-k style routing. For example: * just learning a linear+softmax instead of the Router * no routing at all, just some fixed assignment: e.g. split m_l into d+1 equal slices and pass them through the encoders * using a transformer instead of the Recursive Modules, perhaps over fixed slices as well. My main concern is the discussion of related hierarchical models missing from the related work, and the emphasis on this being the only work to apply some disentanglement pressure outside the main z latents. This should be an easy fix for this paper. The compositional objective is interesting and novel and the implementation method is clean. The experiments were well conducted and well analysed.
Overall, I am confident that the authors will be able to address the main issue above and that this paper will award acceptance in this venue. <doc-sep>The paper proposes a new approach for learning disentangled variational autoencoders. In addition to pushing the sufficiency, minimal sufficiency, and disentanglement of the latent representation, the paper proposes to also regularize those on earlier features in the network. Experiments demonstrate promising results. Overall, the idea is technically sound and the results look promising. I have some questions and suggestions and hope the authors could clarify them during the rebuttal. * Section 2.1: what is the meaning of defining Markov Chain as \\hat{x} -> x -> z, given that the generative process is actually x -> z -> \\hat{x}? I looked at the cited work (Achille & Soatto, 2018), and they seem to discuss a different setting, where they have a dataset of data points x and associated labels y, and in that case, y -> x -> z makes sense to me. * Definition 5: Similarly, what is the practical meaning of m_j^{l+1} -> x -> (m_i^l,m_j^l)? * Figure 6 left: do you use compositional objective (Section 2.2) and recursive disentanglement network (Section 3) in this experiment? If so, lambda_2=0 is not equivalent to beta-VAE as the vanilla beta-VAE does not have the compositional objective. * Figure 2: the 'w_j^l' in green should be 'w_{d_{l+1}}^l'? * I would suggest adding GAN-based approaches into the comparison tables. This would be very helpful for readers who want to pick techniques for their downstream applications. * The decomposition and discussion of the losses around Eq. 2 and Table 1 have been partially discussed in prior work (e.g., https://arxiv.org/abs/1706.02262), and I believe it is not your key contribution. I would suggest highlighting this fact better as it sounds like these are your discoveries from the current writing. In summary, the idea is interesting and the results look promising. I hope the readers could clarify these questions and I will adjust the score accordingly. <doc-sep>The paper presents a VAE variant that disentangles the features of the inference network at every layer, with disentanglement defined in terms of mutual information between features. The approach is implemented as "recursive disentanglement network" based on a switch network (aka Mixture-of-Experts gate, introduced in Shazeer 2017 and used in the switching transformers). The results in dSprites and 3DShapes dataset suggest this variant performs better than well-known disentanglement VAE networks (from few years back) in dSprites and well in 3DShapes (though not the best in all measures). In terms of VAE loss, the approach is presented as a generalization of various other disentangling VAEs. Strengths + The paper presents an approach that is theoretically justified and implemented in a convincing manner. + The results appear convincing and beating relevant baselines (though with the disclaimer that mostly these baselines seem rather old by now; some of the relevant comparisons from the last 2 years might be missing, but I cannot name any). + Empirical evaluations are sufficient (though only barely), except for the ablation (*): Weaknesses - I suspect that including layer-wise disentanglement has occurred to many people before, but it has not been attempted due to the computational burden. (That said, I have not seen anyone actually try it.) 
- It is unclear whether this approach *solves* the computational burden and scales to more complex datasets and larger image resolutions - I am not convinced that the loss formulation (Eq. 4) is that significant w.r.t. InfoVAE, for example. What would the authors say against the criticism that Eq. 4 is basically just reshuffling the InfoVAE loss? (I'm not saying it is, but it would be helpful if the authors could shoot down this potential concern.) - (*) It is unclear to what extent the performance originates from the loss (Equation 4), and to what extent it comes from the switch-based architecture. Could the authors clarify this? It would seem that one could implement the Eq. 4 loss without the switch architecture? The paper appears to simultaneously give a clear, slightly novel generalization of the disentanglement losses of prior VAE variants, and to provide an architectural approach to implement it. I have concerns about the scalability, and I am suspicious about whether such heavy-handed disentanglement can be maintained in larger models. It was also not clear to me whether the results originate from the loss or from the architecture advances (which build on existing switching architectures). However, either way, I think this is a novel approach and potentially a significant addition to VAE disentanglement research, and I lean towards acceptance in this case, though I urge the authors to address my questions. <doc-sep>This paper proposed a recursive disentanglement network (RecurD) for the learning of disentangled representations from an information-theoretic perspective. The experimental results show RecurD outperforms some existing baselines on two benchmark datasets. Pros: 1. Developed a compositional disentanglement learning method, called RecurD, that directs the disentanglement learning process across the compositional feature space. 2. Provides some theoretical analysis based on information theory. Cons: 1. Optimizing the lower bound in Eq. 2 does not mean obtaining the optimum of the $\\beta$-TCVAE objective [1] on the right-hand side of Eq. 2. As far as I know, the right-hand side of Eq. 2 is the objective of $\\beta$-TCVAE. If we optimize the objective function on the left-hand side, this does not amount to optimizing $\\beta$-TCVAE. Thus, I am afraid that the proposed objective function in Eq. 1 fails to generalize to the existing $\\beta$-TCVAE and FactorVAE. In contrast, optimizing the objective of $\\beta$-TCVAE is approximately equivalent to optimizing the proposed objective function in this paper. What is more, in Table 1, $\\lambda_c = 1$ for the original $\\beta$-TCVAE in their paper. 2. The paper does not specify the number of Gates of Encoders (GoE) for different datasets. It is hard to know how many GoE should be used for a new dataset, like CelebA, that contains 40 latent variables. Also, I am curious about the complexity of the proposed network. 3. The upper bound and lower bound are confusing. On page 9, the authors state that the mutual information I(x,z) is the upper bound of the KL divergence. In fact, based on the proof in prior work [2], I(x,z) is the lower bound of the KL divergence. 4. The Markov chain in Eq. 7 is incorrect. Based on my understanding, the next state $X_{t+1}$ of a Markov chain depends only on the current state $X_t$. Hence the joint probability is p(a,b,c) = p(a)p(b|a)p(c|a) rather than p(a)p(b|a)p(c|b). This is because c is conditionally independent of b given the chain you mention, b->a->c. 5. Missing baselines from recent years.
The authors only discuss and compare the proposed method with baselines from before 2019. There are some recent works [3] [4] [5] on improving the disentanglement and reconstruction error. For instance, ControlVAE [5] dynamically tunes the weight \\beta on the KL term to achieve a good trade-off between disentanglement and reconstruction error. 6. The paper does not conduct experiments on complex datasets. The authors should run experiments on 3D Chairs or CelebA to demonstrate the good performance of the proposed method. 7. The results in Fig. 5 are not strong. We can observe that the orientation and scale factors are slightly entangled. Besides, the reconstruction quality is not as good as that of ControlVAE and FactorVAE in their papers. In particular, ControlVAE and FactorVAE can disentangle all 5 latent factors, which is better than this work. 8. There are some typos in this paper; please proofread the manuscript. For instance, priopr work --> prior work on Page 5. Reference: [1] Chen, R. T., Li, X., Grosse, R., & Duvenaud, D. (2018). Isolating sources of disentanglement in variational autoencoders. arXiv preprint arXiv:1802.04942. [2] Xue Bin Peng, Angjoo Kanazawa, Sam Toyer, Pieter Abbeel, Sergey Levine: Variational Discriminator Bottleneck: Improving Imitation Learning, Inverse RL, and GANs by Constraining Information Flow. ICLR 2019. [3] Patrick Esser, Johannes Haux, Björn Ommer: Unsupervised Robust Disentangling of Latent Characteristics for Image Synthesis. ICCV 2019. [4] Srivastava, Akash, Yamini Bansal, Yukun Ding, Cole Hurwitz, Kai Xu, Bernhard Egger, Prasanna Sattigeri, Josh Tenenbaum, David D. Cox, and Dan Gutfreund. "Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modelling." arXiv preprint arXiv:2010.13187 (2020). [5] Shao, H., Yao, S., Sun, D., Zhang, A., Liu, S., Liu, D., ... & Abdelzaher, T. (2020, November). ControlVAE: Controllable variational autoencoder. In International Conference on Machine Learning (pp. 8655-8664). PMLR. I think the authors have addressed most of my concerns. I will increase my final rating.
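Since the first review notes that some of the mutual-information terms are estimated with MINE, a minimal sketch of the standard MINE lower bound (Belghazi et al., 2018) is included here for reference. This is the generic estimator, not the authors' exact implementation, and the bias-corrected gradient / moving-average trick from the original MINE paper is omitted for brevity.

```python
import math
import torch
import torch.nn as nn

class MINE(nn.Module):
    """Statistics network T(x, z) for the MINE lower bound on mutual information."""
    def __init__(self, x_dim, z_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim + z_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, x, z):
        # I(X; Z) >= E_p(x,z)[T] - log E_p(x)p(z)[exp(T)], estimated on a batch.
        joint = self.net(torch.cat([x, z], dim=1)).mean()
        z_shuffled = z[torch.randperm(z.size(0))]        # break the pairing -> marginals
        scores = self.net(torch.cat([x, z_shuffled], dim=1))
        marginal = torch.logsumexp(scores, dim=0) - math.log(x.size(0))
        return (joint - marginal).squeeze()

mine = MINE(x_dim=16, z_dim=8)
x, z = torch.randn(256, 16), torch.randn(256, 8)
mi_lower_bound = mine(x, z)   # maximize this w.r.t. the statistics network's parameters
```

Estimators of this kind are known to be high-variance, which is one reason the reviews ask whether the gains come from the MINE-based loss or from the gating architecture itself.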
This paper proposes an algorithm for achieving disentangled representations by encouraging low mutual information between features at each layer, rather than only at the encoder output, and proposes a neural architecture for learning. Empirically, the proposed method achieves good disentanglement metric and likelihood (reconstruction error) in comparison to prior methods. The reviewers think that the methodology is natural and novel to their knowledge, and are happy with the detailed execution. The authors are encouraged to improve the presentation of the paper, by providing rigorous formulation of the "Markov chains" to avoid confusions, justification of the independence assumptions behind them, and more in-depth discussions of the learning objectives.
This paper considers the exploration efficiency issues in off-policy deep reinforcement learning (DRL). The authors identify a sample efficiency limitation in the classical entropy regularization, which does not take into account the existing samples in the replay buffer. To avoid repeated sampling of previously seen scenarios/actions, the authors propose to replace the current policy in the entropy term with a mixture of the empirical policy estimation from the replay buffer and the current policy, and term this approach as sample-aware entropy regularization. The authors then propose a theoretical algorithm called sample-aware entropy regularized policy iteration, which is a generalization of the soft policy iteration (SPI) algorithm, and show that it converges assuming that the empirical policy estimation is fixed. A practical algorithm based on the sample-aware entropy regularized policy iteration, called Diversity Actor-Critic (DAC), is then proposed. This algorithm is a generalization of the well-known soft actor-critic (SAC) algorithm. Finally, numerical experiments show that DAC outperforms SAC and other SOTA RL algorithms, and some ablation studies are also provided to demonstrate the effect of hyper-parameter choices in DAC. In general, the approach is novel to my knowledge and the high level idea of using mixed policies in the entropy regularization to avoid repeated sampling and encourage unseen scenarios/actions is also interesting and reasonable. However, there are some clarity and technical issues that should be addressed and improved, as listed below: 1. The authors study finite horizon MDPs, for which the optimal policy should be non-stationary in general. However, the authors only consider stationary policies. Instead, the authors should either change the underlying setting to infinite horizon MDPs or consider non-stationary policies. 2. In (2), $s_t$ should be replaced by an arbitrary $s$ in the state space. Otherwise there may be contradicting definitions of the policy $q$ if $s_t$ and $s_{t’}$ are equal for some two different timestamps $t$ and $t’$. And in (3), it is better to write the $q_{\\rm target}^{\\pi,\\alpha}$ in the entropy term as $q_{\\rm target}^{\\pi,\\alpha}(\\cdot|s_t)$, to be consistent with (1). 3. It’s not very clear why the authors propose to estimate $R^{\\pi,\\alpha}$ with some (neural network) parametrized $R^{\\alpha}$. The authors mention that one can only estimate $R^{\\pi_{\\rm old},\\alpha}$ for the previous policy $\\pi_{\\rm old}$ in practice. However, since in $R^{\\pi,\\alpha}$, all the quantities including $\\pi$, $q$ and $\\alpha$ are known, I’m confused why one cannot evaluate it directly. On a related point, it’s not very clear why the estimation procedure for $\\eta$ (the parameter of $R^{\\alpha}$) using hat $J_{R^{\\alpha}}(\\eta)$ makes sense. The form of hat $J_{R^{\\alpha}}(\\eta)$ looks like an entropy term extracted from the $J_{\\pi_{\\rm old}}$ function, but it’s unclear why maximizing it gives a good estimation of $R^{\\pi,\\alpha}$. Some more explanations are needed. 4. There seem to be several errors (at least inaccuracies) in the proof of Theorem 1 (in the Appendix). Firstly, in the proof of Lemma 1, the term “correctly estimates” is not very accurate, and should be simply stated as something like “equals”. Also, it’s not very clear when the assumption $R^{\\alpha}\\in(0,1)$ can be guaranteed (e.g., using Gaussian/soft-max policies?). 
Secondly, in the main proof of Theorem 1, convergence of $Q^{\\pi_i}$ to some $Q^{\\star}$ is correct, but this does not immediately imply convergence of $J_{\\pi_i}$, let alone the convergence of $\\pi_i$ to some policy $\\pi^\\star$. On a related point, the proof for the optimality of $\\pi^\\star$ in terms of $J$ is not clear. In particular, it is not clear why (7) and Lemma 2 implies the chained inequality $J_{\\pi_{\\rm new}}(\\pi_{\\rm new})\\geq J_{\\pi_{\\rm old}}(\\pi_{\\rm new})\\geq J_{\\pi_{\\rm old}}(\\pi_{\\rm old})$. I understand that the authors may feel that the proofs are similar to that of SPI, but indeed there are several significant differences (e.g., the definitions of $\\pi_{\\rm new}$ and $J_{\\pi}$). More rigorous proofs are needed for these claims. 5. In Section 5, it is unclear why the authors need to include the parameter $c$, how to choose it and what it serves for. Some additional explanations are needed. 6. On a high level, the eventual goal of the paper is not clearly stated. From the experiments, it seems that the average episode reward is the actual goal of concern. However, the problem setting and the theoretical results (Theorem 1) seem to indicate that the problem of concern is the discounted entropy regularized reward. Some discussion about this is needed. Finally, here are some more minor comments and suggestions: 1. In the analysis of the sample-aware entropy regularized policy iteration, the authors assume that $q$ is fixed. However, in practice, especially in the long run (as concerned in the analysis), such an assumption will not hold (even in just an approximate sense). Can you still obtain some sort of convergence when taking into account the $q$ changes? 2. Why do you need to divide the reward and entropy regularization term in $Q^{\\pi}$ by $\\beta$? 3. It’s better to write out the “binary entropy function $H$" explicitly for clarity. 4. At the beginning of Section 4.3, “propoed” should be “proposed”, and In Section 5, “a function $s_t$” should be “a function of $s_t$”. 5. Some high level explanations on why the $(1-\\alpha)$ term can also be dropped in (8) will be helpful. 6. The theoretical results only show that the algorithm converges, which is already guaranteed by SPI. Is there any possibility to show that there is also some theoretical improvement? So in short, the paper proposes an interesting modification of the max-entropy regularization framework, but contains several technical and clarity issues. Hence I think it is not yet ready for publication in its current form. <doc-sep>This paper proposes diversity actor-critic (DAC) for exploration in reinforcement learning. The main idea of the proposed algorithm is to take advantage of the previous sample distribution from the replay buffer for sample-efficient exploration. The authors provide convergence analysis of DAC and conduct empirical investigations on several benchmarks. Pros The idea of using previous sample distribution from the replay buffer for better exploration seems interesting. The proposed exploration bonus $\\mathcal{H}(q^{\\pi, \\alpha}_{\\text{target}})$ can be decomposed into three terms as shown in (4). Since the last term does not depend on $\\pi$, intuitively this exploration bonus encourages the exploration of $\\pi$ (first term), and tries to make $\\pi$ different with previous policies approximated by the replay buffer (second term). 
The authors provide a reasonable method to optimize the proposed objective, which can be naturally combined with state-of-the-art algorithms like SAC. Cons 1. Theorem 1 seems misleading. The diverse policy iteration can only guarantee convergence to the optimal policy with respect to the regularized value function, not the optimal policy of the original problem. The authors should make the definition of $\\pi^*$ clear. 2. It's hard to see the motivation for using a mixture of $q$ and $\\pi$. Could you explain more about this choice? 3. It's worth providing the results of SAC-div with the JS divergence, as it is more similar to the proposed objective (4). 4. The experimental results are not convincing enough, as some important baselines are missing. For example, [1] also uses a mixture of previous policies to encourage exploration, with strong theoretical guarantees. I believe it is closely related to the proposed algorithm. Also, the experimental results are not very promising compared with the baseline algorithms based on SAC. [1] Hazan, E., Kakade, S., Singh, K. and Van Soest, A., 2019, May. Provably efficient maximum entropy exploration. In International Conference on Machine Learning (pp. 2681-2691). Other suggestions The main idea of the proposed method is to make the current policy different from previous policies. The paper uses a nonparametric method (2) to approximate the previous policies. I think it's also worth trying a parametric $q$. For example, $q$ could be learned by fitting the replay buffer, or by using a moving average of previous policies. <doc-sep>Summary This paper proposes a novel exploration method for off-policy learning. Compared to previous methods, which do not take into account the distribution of the samples in the replay buffer, the proposed method maximizes the entropy of the mixture of the policy distribution and the distribution of the samples in the replay buffer, thereby making exploration efficient. Reasons for score I vote for accepting the paper. The paper proposes an intuitive and efficient exploration method that generalizes existing methods, including them as special cases. The authors provide a theoretical guarantee (Theorem 1) that the policy obtained from the iteration of evaluation and improvement under this new regime converges to the optimal policy. The presentation is clear and concrete, and the experiments are convincing. Pros The experimental results are not limited to just showing that the proposed method achieves higher reward than state-of-the-art methods; they also address important questions such as (i) pure exploration when rewards are assumed to be 0, (ii) the necessity of the adaptation of alpha, the parameter that controls the ratio of the current policy to the sample distribution in the target distribution, and (iii) the effect of controlling alpha, the entropy weighting factor beta, and the control coefficient c (required for adapting alpha), as well as the robustness of the proposed method to these parameters. The authors have stated the experiment details clearly and the results are convincing. Cons The methodology part in Sections 3 and 4 could be improved. Some notations are confusing. (a) In Section 3, the policy \\pi is defined as a function from S to A. It looks like it is a fixed function over time. (b) An explanation of the definition of J_{pi 1}(pi 2) would be helpful, e.g., J_{pi 1}(pi 2) is the value of J(pi_2) computed under pi_1.
Minor Comments It would be good to add the lines for SAC and SAC-Div in Figure 5(c) to show that the performance of DAC with adaptive alpha is robust to the control coefficient c. For now, one has to go back to Figure 4(b) to check that in most cases (when c is not 0), DAC with adaptive alpha performs better than SAC and SAC-Div. In Section 6 in the 5th line, J(\\pi) should be specified as "J(\\pi) in (1)". It is done in the next sentence, but I prefer that it is done when it first appears. It was confusing. <doc-sep>### Summary The paper proposes DAC, an actor-critic method exploiting the replay buffer to do policy entropy regularisation. The main idea of DAC is to use the data from the replay buffer to induce a distribution $q(\\cdot, s_t)$ and replace the entropy part of the Soft Actor-Critic objective with a convex combination of $q$ and $\\pi$. This has a positive effect on exploration and leads to sample-efficiency gains on some of the considered MuJoCo benchmarks. ### Pros - Formulating the diversity using the entropy of the replay buffer frequencies is an interesting idea. - Using the convex combination of $q$ and $\\pi$ for entropy regularisation is a nice way of generalising SAC for the considered purpose. - The paper shows the convergence of their method to an optimal policy and derives a surrogate objective whose gradient direction coincides with the original one, but which can be practically used. (However, I have not checked the proofs which are in the appendix). ### Cons - It is not clear what problem the paper tackles. Is it exploration? Is it a generic RL setup? What kind of problems is DAC good for? - If DAC is for improving exploration, then it should be compared with other exploration methods, not with vanilla SAC. Comparison with RND should not be in the appendix and there should be more details on this. Related work in this case should have a paragraph on exploration methods in RL. - The paper is based on assumptions not challenged/tested by the authors, e.g. policy entropy regularisation is inefficient because it does not take the distribution of the samples into account. - The paper focuses more on the technical details of the solution rather than justifying the assumptions and making the research question clear. ### Reasoning behind the score I believe the paper has great potential. However, at the moment I vote for rejection. The paper has to have a clear research question and motivation. This should define the experimental part of the work. Lack of a clear positioning makes it unclear if the baselines of the experimental sections are the right ones and whether the claims have been properly supported by the results. ### Questions to the authors - Can you formulate the exact problem you are solving? - How can you justify the claim that 'entropy regularization is sample inefficient in off-policy learning since it does not take the distribution of previous samples stored in the replay buffer into account'? - "it is preferable that the old sample distribution in the replay buffer is uniformly distributed". Why is it true? Doesn't prioritized experience replay refute this claim? - You define $\\beta$ in Equation 1 on $(0, \\infty)$; can it really be infinite? - "The rationale behind this is that it is preferable to have as diverse actions stored in the replay buffer as possible for better Q estimation in off-policy learning." What are the assumptions for this? Do you care more about better Q estimates or finding a better policy faster?
How can you support your rationale? - In section 4.1. you define the target distribution as a convex combination of $\\pi$ and $q$. You assume that the buffer is generated by $q$. Does such a policy always exist? What are the assumptions for this? - You prove the convergence of your algorithm (I did not check the proof in the appendix), what are the assumptions for which the convergence is guaranteed? - Why do you use sparse/delayed MuJoCo benchmarks, but not the original ones? - The variance across different seeds seems to be huge for your method (as well as for the others). What do you think is the reason behind this? This also happens for the pure exploration task in 6.1, why do you think it happens? - For the adaptive $\\alpha$ case, you restrict the range of possible values, what is the reasoning behind the left boundary? - I think your paper can find an important application in Imitation Learning or off-line RL. Have you considered this? Are you aware of works which do something similar in those subfields? ### Additional feedback not affecting the score - "Reinforcement learning aims to maximize the discounted sum of rewards...'. Should be 'expected discounted sum'. - There should be a distribution over initial states under the expectation sign in 3.1. - 'A is the continuous action space'. This is not true for the general MDP definition, specify that this is specific for your paper. - Section 3.1, a policy is a mapping from states to distribution over actions, not to actions. - In off-policy, we can learn from any other samples, not only from 'previous samples' from our policy. - typo "propoed" at the bottom of page 4. - Equation 9 does not have a left hand side. - DAC acronym has been used in RL. I would choose a different one to avoid confusion.
First, I'd like to thank both the authors and the reviewers for the extensive and constructive discussion. The paper proposes a generalization of SAC, which considers the entropy of both the current policy and the action samples in the replay pool. The method is motivated by better sample complexity, as it avoids retaking actions that already appear in the pool. The paper formulates a theoretical algorithm and proves its convergence, as well as a practical algorithm that is compared to SAC and SAC-Div in continuous sparse-reward tasks. Generally, the reviewers found the method interesting. After rounds of discussion and revisions, the reviewers identified two remaining issues: the theoretical analysis still requires improvement, and the positioning of the paper is not clear. In particular, the method is motivated as an exploration method, and it should be evaluated as such, for example by comparing to a more representative set of baseline methods. Therefore, I'm recommending rejection, but encourage the authors to improve the work based on the reviews and submit to a future conference.
This paper introduces a sensor-fusion approach that provides interpretable intermediate representations of the world scene. The approach can fuse multi-view RGB images along with LiDAR scans. The network architecture involves a CNN backbone which feeds into a transformer encoder for fusion. A transformer decoder generates an object density map, a set of waypoints for the ego, and a set of rules to be enforced (such as traffic lights), which are then fused in a safety controller to generate open-loop actions. Strengths: 1. I like the idea of training the perception pipeline with the planning and control modules in tow. This would allow the perception pipeline to extract the features that are relevant to behavior generation and control. 2. The approach provides interpretable perception outputs which would be a great asset for verification of the correctness of the decisions made by the downstream planning and control modules, modulo the correctness of the perception outputs. 3. The experimental results are strong (first rank on the CARLA leaderboard in driving score) and thorough (extensive ablation studies that do give reasonable insights such as the importance of fusion). Weaknesses: 1. In a simulator, the data collection for training was easy because direct access to the scene ground truth is available. However, perfectly annotating the object density map for training from real driving logs will be very challenging. 2. The planning and control modules in an actual AV might be significantly more complex than the safety controller used in this paper, with multiple layers and potentially non-differentiable components. 3. The paper suffers from some grammatical errors which can be fixed. Clarification question: 1. In the context of control, line 109 says: "rule-based methods hardly scale to complex environments due to the extensive human labor required." What human labor is being discussed here? Also, could you cite a reference which suggests rule-based methods fail to scale? This paper (Helou, Bassam, et al. "The Reasonable Crowd: Towards evidence-based and interpretable models of driving behavior." IROS 2021) seems to suggest otherwise. 2. Why does picking the local maximum for object probability in the map help with identifying objects with high position uncertainty (as suggested in line 198)? <doc-sep>The authors present an interpretable autonomous vehicle (AV) policy which features a sensor fusion transformer. The authors develop a transformer encoder which takes in multiple camera viewpoints and LiDAR. Additionally, a transformer decoder outputs waypoints, an object density map, and traffic rules, which are used to determine a nominal trajectory, with a safety controller adjusting the velocity of the planned trajectory. The resulting AV policy beats state-of-the-art methods on the CARLA leaderboard and benchmarks. Strengths: - The justification for the methods is well argued and the paper is accessible - The results on the CARLA leaderboard and benchmarks are impressive and show the potential of the method for more realistic/larger-scale settings. - The inclusion of code for the method is a great strength and I hope it is made public upon publication. This will allow follow-up work to more easily compare to this strong method in a straightforward way and to extend portions of the method (such as with different safety controllers as the authors mention on L210-L11).
Weaknesses: - The addition of an ablation study is important in understanding the importance of the choices the authors made in the resulting method. However, as it stands, the interpretation of the ablation study and tables 2 and 3 are quite confusing and I am not sure how to view the results. The authors seem to have a different interpretation of the table than I do, but this is possibly due to a mistake in the table. - The overall approach is very hand-crafted and there are many possibly non-obvious choices which must be made. E.g. L554-L555 cyclists’ and pedestrians’ bounding boxes are scaled up but not vehicles’. I believe this is common for large-scale learning for AVs but this does somewhat weaken the approach. It is possible that the good performance is due to these many choices and the careful cost function shaping instead of the transformer and interpretable outputs. <doc-sep>This paper proposed a transformer fusion architecture for controlling autonomous driving agents. Various image and LiDAR inputs are processed by CNNs and fused by a transformer encoder, which is followed by a transformer decoder to output driving action and auxiliary outputs. Evaluation results on various benchmarks, including the public CARLA leaderboard, demonstrate the effectiveness of the proposal. Strengths: 1. This paper covers an important topic. 2. The proposal achieves impressive performance. 3. The writing is generally clear and the diagrams are helpful for understanding the architecture. Weaknesses: My main concern about this paper is the lack of more careful analysis of safety and interpretability, the two main benefits claimed by this paper. These two concepts are very related in this paper, as safety is ensured by the "interpretable" intermediate outputs generated by the model, such as inferred traffic state information. However, since these outputs are generated by equally non-interpretable black box model (actually the same transformer decoder), so this set up seems more like an auxilliary loss setup, rather than providing any kind of interpretability or safety guarantee. For safety, it is mainly tackled by verification, as already discussed in the related work, or barrier functions [e.g. 1]. However, the safety notion strongly depends on the quality of the intermediate output prediction, which seems hard to offer any guarantee. On the interpretability side, I wouldn't call such an architecture interpretable. Specifically, it is not clear what additional benefits the "explanation" (i.e. intermediate outputs) offer, in terms of understanding the model. Because the explanation is generated along side the action predictions, they do not need to be coupled in anyway, so that, for example, the action could be the "drive forward" prediction even if a red traffic light is also predicted. For claims on interpretability, I would like to see some concrete evidence, such as helping with model debugging [2, 3], improving human-model collaboration [4], or some other use cases [5]. At the very least, some more careful analysis of the intermediate output is needed to understand when they can help and when they cannot. Then the lack of rigorous studies of safety and interpretability could be acknowledged in the limitation section. [1] https://arxiv.org/abs/2109.06689 [2] https://arxiv.org/abs/2104.14403 [3] https://openreview.net/forum?id=xNOVfCCvDpM [4] https://arxiv.org/abs/2006.14779 [5] https://dl.acm.org/doi/10.1145/3511299
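Returning to the first review's clarification question about picking local maxima in the predicted object density map: the usual reading is that peak extraction turns a spread-out (uncertain) probability blob into a single detection at its mode. A minimal sketch of such peak extraction follows; this is our illustration, and the paper's exact post-processing may differ.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def extract_object_peaks(density_map: np.ndarray, threshold: float = 0.3, window: int = 3):
    """Return (row, col) cells that are local maxima of an object density map.

    A cell is kept if it equals the maximum of its `window` x `window`
    neighborhood and exceeds `threshold`. Even when position uncertainty
    smears probability mass over several cells, the mode still yields a
    single candidate object location.
    """
    local_max = maximum_filter(density_map, size=window) == density_map
    peaks = np.argwhere(local_max & (density_map > threshold))
    return [tuple(p) for p in peaks]

# Example: a blurry 2-D blob still produces exactly one peak at its mode.
grid = np.zeros((8, 8))
grid[3:6, 3:6] = [[0.2, 0.4, 0.2], [0.4, 0.9, 0.4], [0.2, 0.4, 0.2]]
print(extract_object_peaks(grid))  # [(4, 4)]
```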
This paper proposes a new sensor-fusion approach that provides interpretable intermediate representations of the world scene and a safety-enhanced feature for autonomous driving. The authors propose to fuse multi-view RGB images along with LiDAR scans. The feature extraction part is also enhanced with the planning and control modules. The reported experiment results are promising and strong, i.e., first rank on the CARLA leaderboard in driving score, and are accompanied by extensive ablation studies. The justification for the method of fusing multi-view RGB images and LiDAR scans is well articulated. The authors have greatly clarified open questions from the reviewers regarding safety and interpretability. These additional details would be very helpful for understanding the paper and its impact; the authors should consider adding them to the final version or its appendix.
This paper presents HCM, an approach for chunking a sequence of data into a hierarchical representation. More specifically, HCM learns a tree with atomic units (ie the low-level inputs, in this case integers representing things like text characters or quantized pixel values) as the leaves and increasingly complex groupings of them higher up the tree. HCM learns by iteratively parsing the provided data (ie stream of tokens), in each pass computing marginals for the current set of chunks as well as transition frequencies between them. After updating its marginals and transition frequencies, the two chunks with highest joint probability are combined into one. The process continues until all pairs of chunks pass an independence test. I believe the main contribution of this paper is in that it presents an idea for interpretable grouping based on the principle of grouping by proximity from cognitive science, and a largely qualitative proof of concept for it. **Strengths** - I believe the paper's main strength lies in its motivation. I believe the core of the presented idea is compelling, and would be of interest to the community. - The paper is clearly written, and the method is simple. **Weaknesses** - The paper presents primarily qualitative results for the majority of datasets/tasks used. The experiment performed on a text corpora only presents a table of examples with learned chunks, and the visual-temporal experiment only presents a figure with some of the learned visual chunks. It is not clear to me from the presented experiments how to compare this method to alternatives. There is one experiment comparing against an RNN baseline, showing that HCM converges faster — however, RNNs are not the current SOTA in sequence modeling (i.e. why wasn't a transformer model used?). - I am concerned that the method as currently defined cannot generalize to real world data. HCM parses chunks from a sequence by matching them exactly to subsequences, which to me means that this method groups segments together purely based on form rather than semantics. My perspective is that the promise of hierarchical representations is that you can decompose complex objects and patterns into their parts (e.g. a person into [head, arms, legs] — head into [eyes, ear, nose], etc...). However, in modalities such as vision the same parts can appear with drastically different color values. The paper alludes to this in its Discussion section, but does not present a solution to this problem, which is something that I think would need to be shown. - Related to my point above, I'm not entirely sure I understood the thesis of the paper in terms of the narrative it is trying to convey, and would appreciate hearing the authors thoughts on this. Is this meant to be received as a paper for the cognitive science community, showing an operationalization of grouping by proximity? Or is it being presented for the machine learning community as a representation learning method for use in downstream tasks? If it's the former, I believe this work would be much more appropriately submitted at a cognitive science conference. If it's the latter, I believe much more empirical evidence of the learned representations' usage needs to be shown. - The related work section mainly focuses on historical NLP methods, with little discussion over similar methods in computer vision, which I believe is needed given that it presents experiments on visual data. 
I would suggest works such as: - Normalized Cuts and Image Segmentation by Shi and Malik 2000 - Selective Search for Object Recognition by Uijlings et al. 2012 as places to start. Additionally, I think work on unsupervised grammar induction could also be relevant here. Although I believe the motivating idea is very compelling, I don't believe this paper is ready for publication. In summary, I believe the paper currently lacks: - More thorough empirical evaluations comparing against other methods. - Experiments showing the method's potential for generalizing to more naturalistic data, as well as its usefulness for downstream tasks. - A more clearly focused narrative motivating why it's appropriate for a venue like ICLR as opposed to a cognitive science publication, as well as more thorough contextualization among related work (particularly comparing against recent alternative methods for this problem). I thank the authors in advance for their response, and am also interested in seeing other reviewers' thoughts. <doc-sep>The paper proposes a graph-learning model (HCM) for learning hierarchical chunks from sequential data. The paper first proposes an idealised HCM method, for which the paper provides learning guarantees via a proof by induction, and an online approximation to this idealised method, which is more computationally feasible and which is used to perform experiments in temporal, visual, visuotemporal and language sequential data domains. The paper demonstrates that the online method learns interpretable chunks at multiple levels of abstraction and demonstrates positive (and negative) transfer to other hierarchically structured environments with similar (and different) structures. Strengths: The paper is very well written, with very clear, intuitive explanations for how their method works, and justifications for the authors' design choices. The paper provides several well-considered experiments to demonstrate the HCM method quantitatively and qualitatively. First, purely sequential data is generated from several random (but known) hierarchically-structured graphs and the HCM method is shown to learn this underlying hierarchical structure well, compared to a vanilla RNN. Secondly, the paper verifies that the learned model shows positive and negative transfer to similarly or differently structured hierarchical environments, as might be expected from a chunk learning algorithm. Finally, the paper explores how the HCM model performs qualitatively in spatial, spatiotemporal or English-language chunking, with interpretable (although unquantified) results in each. The connections to animal chunk learning are well thought through. Interestingly, for the case of spatiotemporal chunking, without considering a priori the spatial proximity of pixels, spatially connected chunks are learned. So it is by virtue of the fact that objects tend to move smoothly in space and time that online HCM will learn to group visual spatial chunks smoothly in the height x width plane too. This has really interesting close ties to theories for animal learning of object permanence (although obviously the implementation is very different), as the authors note. Weaknesses: The paper mentions that this method should offer more interpretable learned representations, but for what sort of task or application is this envisaged?
Regarding the transfer of learned chunks to new data sequences, it seems that a human (or other model) would have to know the underlying generative process of the target data sequence in order to know whether the original learned chunking model should work well in the new setting or not (unless of course, the data is generated from the exact same process as the training data). If a human (or other model) knows that then is it not true that you don’t need the model to do the chunking in the first place? It would have been nice to see quantitative demonstrations of performance for the spatial, spatiotemporal and language-chunk learning experiments. I appreciate its not immediately obvious what the right metric for this performance would be (at least to me), but if the authors were willing and able to find an appropriate one and use this to compare their method to other chunk-learning algorithms it would definitely strengthen the paper. In the learning plots vs the vanilla RNN, the paper would also have benefitted from comparisons to other explicit chunk-learning algorithms. A well-written description of a method for a chunk-learning algorithm, with learning guarantees and qualitative demonstrations of sensible-looking chunks across a variety of domains. Quantification of results was a bit lacking. <doc-sep>This paper proposes a method for learning representations of non- i.i.d. data in terms of hierarchical sets of chunks, inspired by cognitive theories of grouping by proximity. These sets are assembled over time from the initial set of primitive data points by finding correlations between temporally/spatially sequential primitives/chunks and appending to the set. The authors show that this learning method is tractable, has convergence w.r.t. hierarchically-decomposable problems, and learns intuitively and practically reasonable chunk sets. Strengths: - This paper is particularly well-written and understandable. I appreciated the intuitive explanations of chunking in cognitive science and its extension to common machine learning use cases like language and visual data. The examples of instances where hierarchical chunk learning could both help and hurt a learned model were well-chosen. The figures effectively demonstrated the training process and the learned representations in each domain. Even the theorems were more interpretable than I typically see, being subdivided and laid out piece by piece. - The method is reasonably novel and broadly applicable. The paper shows HCM applied to temporal, visual, visuo-temporal, and language domains. Given a domain with some hierarchical structure, a fairly reasonable assumption, this method is able to find that hierarchy with some guarantees. The learned hierarchy itself, as the authors note in the conclusion, could be applied to down-the-line endpoints such as -causal learning. - This method really leans into explainability/interpretability and could thus be more compatible in human-ML frameworks Weaknesses: - While the method is novel and seems to recover structure quite well, the results are not as convincing as I’d like. To lay this out: Given a toy generative hierarchical model, HCM is able to more effectively predict sequences than a basic RNN, particularly as the levels of hierarchy increased. Not to be too glib, but I should hope so! - In an environment where the HCM representations overlap with the underlying model, it outperforms a learned-from-scratch HCM, while in the opposite case, it underperforms. 
The authors suggest that the nature of the HCM (as compared to something like a DNN) allows users to understand a priori whether their pretrained model will work well, which I agree with - In toy visual domains with and without temporal correlations, HCM reproduces the underlying representations. But how does its ability to reproduce the actual sequences compare with appropriate baselines? - Finally, HCM is applied to a corpus from the Hunger Games and is able to learn commonly-repeated phrases over time. My main concern with all of this is the lack of actual baselines. I agree that the models are interpretable and useful, but they aren't applied to any previously-used datasets or compared (empirically) to other SOTA methods. HCM doesn't necessarily need to *win* in performance, given its other advantages, but I'd like to see whether it's competitive - On a related note, the authors provide both idealized and online HCM algorithms. Even the "online" algorithm, while theoretically tractable, seems practically quite slow, which I assume is why the chosen domains are simple. While the online algorithm seems to work well for these domains, I would imagine the loss of guarantees is more likely to be impactful in harder domains. It was not clear to me how the chunks were generated until I read the independence tests section in the appendix, and I think that this is too important to push out of the body of the paper. It also introduces the hyperparameter of statistical significance p, which isn't really discussed. I like this algorithm and think it has potential. I can see how it can be applied both to standard ML tasks, but also how it could unlock a more symbiotic human-ML collaboration through its interpretability. The motivation and build-up from cognitive science is clear, and all else aside, because of its writing, I felt this paper gave me a lot more valuable insights than most. That said, I'm just not convinced by the current set of experiments. I can't glean how well HCM will *actually* perform (vs. baselines on standard datasets), particularly the online variant, and I suspect it's not computationally that practical either. With some of these comparisons added, I think I could accept, but for now it's a reject from me. <doc-sep>This paper proposes a non-neural system of parsing natural language text by chunking sequences to form hierarchical structures. The algorithm strongly resembles classical parsing algorithms. Decisions about when to chunk a phrase into a constituent are based on chi^2 tests of independence, where a pair of chunks that are considered to be dependent are joined into a single constituent. They test this chunking algorithm on natural language data against an RNN, concluding that the classical parsing algorithm is more sample efficient in achieving a low KL-divergence from the true sequence data. They also provide some examples of how this algorithm can be applied to temporal image data or video. The paper is clear. I rarely had trouble following, although I didn't understand that the decision to chunk was based on a chi^2 test until I read the appendix, which seems crucial. I enjoyed reading about the relative sample efficiency of the classical algorithm vs the RNN, though I would have rather seen a fair comparison with a TreeRNN or some other system that involves latent tree structure, as well as a comparison to other classical dependency parsers. The application of a classic parsing algorithm to video was a nice adaptation.
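Since both of these reviews flag that the chi-squared merging decision only appears in the appendix, here is a brief sketch of what such a test-then-merge step typically looks like. The variable names and the significance threshold are our own illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.stats import chi2_contingency

def should_merge(sequence, left, right, p_threshold=0.05):
    """Decide whether adjacent chunks `left` and `right` should be merged.

    Builds a 2x2 contingency table over consecutive positions:
    (is this chunk `left`?) x (is the next chunk `right`?).
    If independence is rejected (p < p_threshold) and the pair co-occurs
    more often than expected, the pair is treated as a new, larger chunk.
    """
    pairs = list(zip(sequence, sequence[1:]))
    table = np.zeros((2, 2))
    for a, b in pairs:
        table[int(a == left), int(b == right)] += 1
    chi2, p, _, expected = chi2_contingency(table)
    observed_joint = table[1, 1]
    return p < p_threshold and observed_joint > expected[1, 1]

# Toy example: 'A' is almost always followed by 'B', so ('A', 'B') merges.
seq = list("ABCABDABAB")
print(should_merge(seq, "A", "B"))  # True on this toy sequence
```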
The overall problem I had with this paper was the fact that it is presenting a classic parsing algorithm but contains no citations to any work from the age of classic parsing algorithms. I found this lack of background disturbing because as far as I can tell this algorithm is a statistical stack-based parser, and the authors should have looked into whether they were reproducing existing work. The problems they have with the efficiency of their own algorithm are resolved by many statistical parsing algorithms. Even allowing partial parses (as they do) is a property of a number of non-neural parsers such as https://www.cs.cmu.edu/~nschneid/twparser.pdf (A Dependency Parser for Tweets by Kong et al.). Ironically, I also had trouble looking for specific classical parsing algorithms to compare with this while reviewing, because the literature has exclusively contained neural parsing algorithms for so long. The general area of structured prediction is one that has a long history, and the authors seem not to have a particular background in the problem space. I recommend reading Slav Petrov's thesis (https://www2.eecs.berkeley.edu/Pubs/TechRpts/2009/EECS-2009-116.pdf) for a deep background on the topic from the age of classical parsing. Although the paper described the problem of a lack of inductive bias towards hierarchical parse structures, there were no citations to the literature which attempts to resolve this problem (treeRNNs, RNNGs, etc.). There was also no discussion of non-neural hierarchical algorithms for structured prediction on video (e.g., structured prediction cascades), which seems necessary in a paper with experiments on a non-neural hierarchical algorithm for video. Beyond the lack of discussion of the existing field of algorithmic hierarchical parsing, the discussion of limitations does confront the possibility of non-projective grammars, which cannot be covered by this sort of chunking ("How to relax the adjacency assumption as a grouping criterion to allow for non adjacent relationships to be chunked together remains an open challenge"), but does not discuss it in terms that have been used historically in parsing, or acknowledge the existing parsers that cover non-projective cases. I was somewhat confused by the decision to use The Hunger Games as a corpus for training natural language parsers on, as there are a number of more common corpora that would have compared more easily to the existing literature (The Little Prince, PTB, or wikitext come to mind). I was confused by the reference to Teh 2006 alone as an extension of ngram models, given that there was no other discussion of backoff (e.g., Katz back-off, or smoothing) in ngram models, which has a much longer history. I was not surprised that introducing a parsing algorithm with a strong inductive bias was more sample efficient than using an RNN. This phenomenon is the reason why, for years, NLP did not use neural networks until large quantities of data and compute became easily available. MINOR Please explain how the hypothesis testing works in the main text of the paper, and not just in the appendix, or at least emphasize appendix A in the main text of the paper while describing the algorithm. Typos: "they the way" Hinton (1979) should be a parenthetical but is instead an inline citation. Questions: How does catastrophic interference relate to gradient starvation? This paper is missing significant background on classic hierarchical structured prediction.
Because it is presenting a classical parsing algorithm without a single citation to pre-neural structured prediction as a field, I believe that it is extremely similar to existing algorithms that are rarely in use today.
This paper develops an approach to learning hierarchical representations from sequential data. The reviewers were very positive about the overall approach, finding it well motivated and interesting with strong potential, and thought that the paper was extremely well written with clear examples throughout. There was a good back-and-forth between the reviewers and the authors, discussing several aspects of the paper and providing constructive suggestions for improvement. In particular, the reviewers suggested improvements in terms of independence testing, comparison to further baselines, further experiments, and other improvements as detailed in the reviews. The authors were extremely receptive of these suggestions, which is to be commended and is very much appreciated, and in a response state that they are planning to take the time needed to revise this paper before publication.
The paper presents a new gradient-based framework for learning invariant mechanisms (often called "relations" in the paper) from data drawn for multiple environments (data generating processes). Overall, the writing is excellent, and the central ideas are interesting and valuable. A key idea of the paper is that training data drawn from different environments can be exploited to learn mechanisms that remain invariant across those environments. While true, this is unsurprising and well-established. Fundamental principles of causal inference, known for decades at this point, directly imply that different environments (data generating processes with different interventions) will allow identification of different sets of causal dependencies. Practical methods for such identification have been demonstrated using graphical models and relatively simple methods for parameterization of those models. The paper could be improved by spending less time on the known results (or at least making clearer connections to prior work) and spending more time clarifying what is genuinely novel about the proposed ideas. In addition, the authors should make a greater effort to distinguish between central ideas and implementation details. Multiple times in the paper, basic results from the causal inference literature are attributed to relatively recent papers (e.g., Peters et al. (2017)), including the special properties of the causal factorization and the idea of invariance of mechanisms in response to intervention. These ideas can be traced back much further. For example, the basic idea of invariance to intervention (so called "autonomy" or "modularity") has been known since at least the 1930s. Heckman and Pinto (2015) note that: "In the language of Frisch (1938), these structural equations are autonomous mechanisms represented by deterministic functions mapping inputs to outputs. By autonomy we mean, as did Frisch, that these relationships remain invariant under external manipulations of their arguments." The paper would be improved by making clearer when concepts were first identified and by who. The empirical evidence provided for the claims in the paper is relatively modest. The simulated results provided in Table 1 shows only very small differences in L2 errors among variants of the authors' proposed methods, and more substantial improvements over ICP and ERM (in three of four cases). The discussion of these results is excellent. The results on the "Colored MNIST" data show the expected results. However, good performance on simulated data and only a single real data set is still relatively weak evidence for the claims made in the paper. The paper would be improved by increasing the number of real data sets used for evaluation. References Heckman, J., & Pinto, R. (2015). Causal analysis after Haavelmo. Econometric Theory, 31(1):115-151.<doc-sep>In this paper, the authors propose a gradient-based learning framework, with a two part objective function in which one part improves the informativeness about the target variable, and the other part enforces the invariance of the relation. The second part is based on the ICM principle and increases the stability and renders domain generalization possible. The paper is well written and, for the most part, is easy to follow. We should note that the ICM principle is only usable if we have no hidden confounders, i.e., causal sufficiency, in the system. 
The authors should clarify early in the manuscript that causal sufficiency is an important assumption, and should clarify what will happen to the results if it is violated. In general, the assumptions in this work are very strong and I do not believe they will hold in reality. Specifically, regarding Assumption 2, if we are assuming that some of the causal mechanisms change across environments, why should the one corresponding to the target not change? Although the assumptions are strong, the same assumptions were considered in a few other works such as (Peters et al., 2016). Compared to existing work with the same assumptions, this paper provides a good implementation method that is an improvement over past work and would be of interest to the ICLR community. The authors also discuss the conditions under which the recovered stable relations correspond to the true causal mechanisms. The use of ICM for causal discovery is also extensively studied in the non-parametric case in [Huang et al., Causal Discovery from Heterogeneous/Nonstationary Data], and in the linear case in [Ghassami et al., Multi-domain Causal Structure Learning in Linear Systems]. The definition of do-intervention on page 3 is not standard. What is referred to as a do intervention in this paper is usually referred to as a hard intervention in the literature, and what is referred to as a hard intervention in this paper is usually referred to as an atomic intervention in the literature.<doc-sep>The paper is well-motivated and studies an important topic, but unfortunately it is let down by the presentation of its contributions, which is confusing and at times misleading. First - a more minor complaint (which I put here because it's a source of confusion for the rest of the review) - The normalizing flow section is confusing because the mapping between the base distribution and y isn't clear. I normally think of a normalizing flow as a map from some base distribution u to some target y such that y = T(u) and p(y) = p(u) |det J_T(u)|^{-1} where u = T^{-1}(y) (adding conditioning as required to make a conditional flow). This paper uses T(Y; h(X)) everywhere - which I think is referring to T^{-1}(Y; h(X)), because we normally think of T as acting on the base distribution U and T^{-1} as acting on the target variable. My review assumes that I should read T(.) as a map from Y -> U... but that's a little weird and should be explained explicitly. More seriously, I don't understand why Lemma 1 isn't trivial. By the data processing inequality, any transformation of X can only lose information about X. So if the identity function is among the set of feature extractors, then h^* includes it, because it maximizes I(h(X), Y). The fact that h^* is independent of the flow's latent variable trivially follows from the fact that choosing the identity is always optimal. Of course, things get more complex if there is some constraint on H such that the identity isn't included, but this isn't discussed. On a second reading, I think that this constraint is meant to come from the Y \\perp E | h(X) condition in section 4, but how this condition interacts with Lemma 1 needs to be clearer. The presentation of the method in section 4 also needs work: the domain generalization problem is presented as the problem of finding h that maximizes the mutual information between Y and h(X) in the worst-case environment under the constraint that Y \\perp E | h(X).
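For reference, the objective as the review paraphrases it can be written compactly as follows; this is our transcription of the reviewer's description, not necessarily the paper's exact notation.

```latex
% Domain-generalization objective as paraphrased in the review above.
% h is the feature extractor, E the environment index, I(.;.) mutual information.
\max_{h}\; \min_{e \in \mathcal{E}}\; I\big(h(X);\, Y \mid E = e\big)
\quad \text{subject to} \quad Y \perp E \mid h(X).
```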
As far as I can tell, the independence constraint is the important part of that objective: under that constraint, it is not clear why I wouldn't want to maximize the average mutual information, or some other objective? Similarly - it's not clear why Theorem 1 is useful until we get to equation (5) (and it took me a couple of reads to realize that this is actually the important step) - on its own, it just essentially says that if we have conditional independence, then applying a 1:1 function maintains that conditional independence. Having gotten to this point in this review, I think that many of my issues would be resolved if the presentation order was reversed. The key condition you need is Y \\perp E | h(X); the paper would be far easier to follow by making it clear that this is the condition you need, explaining both why we can't optimize for it directly, and why this particular normalizing flow approach gives an indirect way of achieving the condition. In the current order of presentation, which leads with a discussion of normalizing flows, we are presented with theoretical results about flows which, in isolation, seem trivial. The experiments suggest the method shows promise (though they should report both IRM & REx [Krueger et al., 2020]'s performance for coloured MNIST to make it clear that there are better methods on that dataset)... [Krueger et al., 2020] Out-of-Distribution Generalization via Risk Extrapolation
This paper proposes a new framework for improving supervised learning via invariant mechanisms. The reviewers agree that overall, this paper is well-written and contributes to a growing body of work on invariant prediction and causality in supervised learning. At the same time, there are some concerns regarding novelty and significance in light of previous work, as well as the overall organization of the paper, which could be improved to highlight the main contributions more clearly. Ultimately, this was a borderline decision, but it is clear that the paper needs a major revision before acceptance. Although the authors have already incorporated some of the minor comments which is appreciated, the authors are urged to consider the major comments (e.g. see R2's comments regarding presentation) when revising the paper.
Pros: -- Clustering sequence vectors is a practical and useful problem. Some of the business use-cases described in the paper are indeed useful and relevant for analytics in healthcare and retail. Cons: -- The paper is poorly written. There are numerous typos and grammatical errors throughout the paper. -- The ideas are not presented coherently. The writing needs to improve quite a bit to get accepted at a conference like ICLR. -- Description of related literature is done very poorly. -- The generative model described clearly lacks justification. The model is not described concretely either. There is no clear description of the inference techniques used. -- Empirical results are weak. <doc-sep>The problem formulation at the bottom of page 3 corresponds to what a bag-of-words preprocessing of a document would provide, and in this case the clustering would be a much simpler solution than just doing LDA. The paper has zero interest.<doc-sep>This paper proposes a hierarchical Bayesian model to cluster sparse sequence data. The observations are modeled as Poisson distributions, whose rate parameter \\lambda_i is written as the summation of \\lambda_{ik}, a Gamma distribution with rate equal to the mixture proportion \\alpha_{ik}. The model is implemented in Pystan. Experimental results on a real-world user visit dataset were presented. The format of this paper, including the listing in the introduction section, the long URL in section 2.3, and the model specification in section 3.2, can be improved. In particular, the presentation of the model would be clearer if the graphical model were specified. The motivation for choosing the observation model and priors is not clear. In section 3, the author described the details of the model specification without explaining why those design choices were appropriate for modeling sparse sequence data. Experimental results on a real-world dataset are presented. However, to demonstrate how the model works, it would be best to add synthetic experiments as a sanity check. Results using common baseline approaches should also be presented. The results should also be properly quantified in order to compare the relative advantage of different approaches.<doc-sep>The paper is very poorly written. It is hard to understand what the real contribution is in this paper. The connection of the model with HMM is not clear. The literature review has to be rewritten. To the reader, it sounds as though the authors are confused about the fundamentals themselves: mixture models, Bayesian models, inference. > Mixture models can be based on any of the exponential family distributions - Gaussian just happens to be the most commonly used. > Again, if this is a Bayesian model, why are #clusters not inferred? The authors further mention that in their Pystan implementation K clusters were spun up too quickly. What was the K used here? Was it set to a very large value or just 3? Did the authors eventually use the truncated infinite mixture model in Pystan? > The authors mention their model is conceptually similar to EM but then end up using NUTS. > Why is a URL given in Section 2.3 instead of being given in the references? > Provide a plate model describing Section 3.2.<doc-sep>The paper discusses clustering sparse sequences using some mixture model. It discusses results about clustering data obtained from a restaurant loyalty program. It is not clear to me what the research contribution of the paper is.
What I see is that some known techniques were used to cluster the loyalty program data and some properties of the conducted experiments were noted down. No comparisons are made. I am not sure what to evaluate in this paper.
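Since several of these reviews note that the model specification is hard to pin down, the following is one plausible reading of the Gamma–Poisson mixture described above, written as a small generative sketch. Every detail here (shapes, priors, variable names) is our assumption rather than the paper's stated model.

```python
import numpy as np

rng = np.random.default_rng(0)

# One plausible reading of the model sketched in the reviews above; all
# hyperparameters and shapes are illustrative assumptions, not the paper's.
N, K, T = 100, 3, 20                                  # users, clusters, time bins
alpha = rng.dirichlet(np.ones(K), size=N)             # per-user mixture proportions
lam_k = rng.gamma(shape=2.0, scale=1.0, size=(K, T))  # per-cluster visit rates

# Each user's Poisson rate is a mixture-weighted sum of cluster rates,
# and the observed sparse sequence is a vector of visit counts over time.
lam = alpha @ lam_k                                   # (N, T) user-specific rates
visits = rng.poisson(lam)                             # sparse count sequences
print(visits.shape, visits.mean())
```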
All reviewers agree to reject. While there were many positive points to this work, reviewers believed that it was not yet ready for acceptance.
1. The paper provides an interesting solution to achieve better performance in both ID and OOD settings. 2. The paper is well-written and easy to follow. 1. The motivations of the assumptions / definitions are not clearly explained. 2. The theories provide limited insight for designing a better OOD approach. 3. More baselines should be added. I have several concerns about both theories and experiments. 1. The motivations of the assumptions / definitions are not clearly explained. - For Assumption 4.1, I do not understand the relationship between the conditional independence constraint and the claim "$f_{\\text{std}}$ relies on the spurious features while $f_{\\text{rob}}$ relies on the robust features". In addition, I think $f_{\\text{std}}$ should rely on both the spurious features and robust features. Moreover, a more detailed analysis of the related works should be provided. - For Definitions 4.1, 4.2, and 4.3, the relationships between the mathematical forms and the motivation are also vague. Detailed analysis with concrete examples would help. - The authors develop the theories under the class-balanced assumption, which is strong in practice. Although the authors claim that they provide the general setting in Appendix A, I could not find any text about this. 2. The theories provide limited insight for designing a better OOD approach. The paper assumes the availability of a robust model $f_{\\text{rob}}$ and aims to ensemble it with the ID approach. In practice, we often need to train a robust model from the training data only. Moreover, we cannot verify which kind of distribution shift could take place. As a result, we cannot guarantee the effectiveness of the proposed method in real-world scenarios. 3. There are many OOD approaches and the authors should compare the results with them. To name a few, [1, 2]. Some minor issues 1. $T$ appears in the wrong place in both Equation 3.1 and Equation 3.2. 2. There are many empty references (marked as ?) in the paper. [1] Liu, Evan Z., et al. "Just train twice: Improving group robustness without training group information." International Conference on Machine Learning. PMLR, 2021. [2] Nam, Junhyun, et al. "Learning from failure: De-biasing classifier from biased classifier." Advances in Neural Information Processing Systems 33 (2020): 20673-20684. <doc-sep>Intuitively it makes sense to me that since robust models and standard models could rely on different sets of features, ensembling them could make a better model. The method is easy to conduct and it performs well. Also, the paper is clearly written, and lots of theoretical and experimental results are shown. Ablation studies are conducted. -While I like the idea of this paper, my main concerns lie in how practical the assumptions made in this paper are, and how the conclusions would change if those assumptions do not hold. For example, what if the class-balance assumption doesn't hold? While Assumption 4.1 is weaker than in prior works, what if it doesn't hold? Would these affect the conclusions made in the paper? -Reading the paper's intuition for why calibration is used, it seems to me that this is because a simple ensemble method is used (½(f_std + f_rob)), but would another more well-fitted ensemble method make the calibration step unnecessary? -Related work: There is previous work discussing the relation between calibration and out-of-domain generalization, although it differs from this paper in that it is in the multi-domain setting while this paper has one domain in training.
"On Calibration and Out-of-domain Generalization", 2022 -Minor typo: on Pages 5 & 6, several citations related to lightweight fine-tuning seem to be broken -It would be good to have the missing variance results on some of the datasets, especially the anti-correlated ones, completed in Tables 2 & 3. <doc-sep>The experimental results are great and the theoretical support is clean and intuitive. A second lesson here, to me, is that the success of the proposed method also says something about the kind of real-world shifts that exist (their method does not improve OOD if real-world shifts are adversarial like in the anticorrelated setting, as the authors point out). I'm not going to argue against their results, which are extensive and good. My only concerns are with the theoretical results, which are nice and intuitive but seem to rely on weird assumptions. I think a few clarifications are warranted. 1. In Prop 4.1, it is very weird that f_ens is better ID. I don't think this is possible if f_std is trained on ID data to maximize performance. The reason here is that when f_std learns the ID p( Y | X ), then f_std is a sufficient statistic and P_ID( Y | f_std, X) = P_ID( Y | f_std), meaning that f_std should be strictly better than f_ens and f_rob. Can the authors clarify why this is not the case? 2. Out of the three shifts considered, I am most sold on the anticorrelated one; this is also the case where the method should not be expected to work. This is also the case where you cannot mitigate the trade-off (nor should you try to). This seems to limit the applicability of the method because I'm not sure how we would know where to apply the method. Would I be able to choose whether to apply the method or not without access to labelled test data? 3. In the missing spurious features assumption, I'm not sure whether f_std would be zero. Wouldn't latching onto the shaggy mane mean f_std would, say, predict the other class? The suppressed spurious features assumption also seems to have the same problem, where less prevalence does not mean the conditional has lesser predictive probability (for the max probability class). Why would f_rob get affected by less prevalent country-specific features? <doc-sep>The proposed method seems simple and effective and the paper communicates it clearly. The paper clearly defines the proposed problem and the proposed solution, and provides intuition for why the observed improvements could occur. It furthermore benchmarks the proposed approach on a wide variety of datasets and settings. The paper is well written and provides good intuition as to how observed effects might come to be. The experimental evaluation is well organized. The research question and operationalization in the benchmark are clear and well thought out. The benchmark is sufficiently large and covers a variety of datasets and scenarios. The detail of the steps given in the different proofs is to be commended. They are mostly easy to follow due to detailed steps and provided comments. It is not clear to me why only parts of the WILDS benchmark are used instead of all datasets. The compared baselines only include the standard model and the robust model. It did not become clear to me from the text whether the baselines are the calibrated models or the un-calibrated models. This might provide an additional baseline and point of reference. The paper does not provide any code that would enable reproducibility or a deeper investigation of the results.
This is problematic insofar as the results seem almost a little too good. Furthermore, some details on the experimental setup are missing (number of replications, hyperparameters, ...) which might be needed for replication. The theoretical work shows proofs for a special case of the setting, i.e., balanced classes and the orthogonality assumption (4.1); this should perhaps be discussed more thoroughly. The proofs of Prop 4.1 assume that r and s are orthogonal, which coincides with Assumption 4.1. I am not sure this assumption generally holds, so additional investigation regarding this might provide interesting insights. Minor Points I think Eq. (2.1) is unnecessary and does not generally hold in practice, as errors depend on the data (assume e.g. small n in one domain) and the magnitude of shifts. Typos: Sec 4.1 calibrate fstd and frob ID Sec 4.2 [on] OOD Sec 6.2 citation of kaggle competitions is [?] Sec 5: ? use fine tuning
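To make the recipe these reviews keep referring to concrete, here is a minimal sketch of averaging a standard and a robust classifier after per-model temperature scaling. The temperature-scaling details are our assumption; the paper may calibrate differently.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import softmax

def fit_temperature(logits, labels):
    """Fit a single temperature T > 0 on held-out ID data by minimizing NLL."""
    def nll(t):
        probs = softmax(logits / t, axis=1)
        return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))
    return minimize_scalar(nll, bounds=(0.05, 20.0), method="bounded").x

def ensemble_predict(logits_std, logits_rob, t_std, t_rob):
    """Average the calibrated probabilities of the standard and robust models."""
    p_std = softmax(logits_std / t_std, axis=1)
    p_rob = softmax(logits_rob / t_rob, axis=1)
    return 0.5 * (p_std + p_rob)  # the ½(f_std + f_rob) ensemble discussed above
```

In this sketch, t_std and t_rob would be fit on in-distribution validation logits before ensembling, which is one natural reading of the "calibrated vs. un-calibrated baselines" question raised above.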
Meta Review: The reviewers found the paper to be of high-quality. The idea is simple to implement yet novel and insightful, the experimental evaluation is solid and the results are strong. The paper is well-written and gives good intuition about the results and their interpretation. During the review and discussion phases several questions and clarifications were made, and some additional experiments were promised by the authors - I trust these will be incorporated into the final accepted version of the paper (some of it possibly in the supplemental materials).
The barren plateau phenomenon is a 'vanishing gradient' effect that arises in sufficiently randomly initialized parameterized quantum circuits. Specifically, the norm of the gradient falls exponentially with the number of quantum registers in the circuit. While not a problem for classical neural networks due to efficient gradient estimation procedures, gradients for parameterized quantum circuits are obtained by statistical sampling. Estimating small gradients therefore adds an exponential overhead, eliminating most possible computational advantages. In recent years there has been an effort to handle this problem by suggesting initializations that are not 'fully random' on the space of circuits. This paper takes the following approach: a (practically reasonable) architecture is chosen and the parameters are initialized from a Gaussian distribution. The main technical message is a proof that if the variance of the normal distributions is chosen as $1/L$ where $L$ is the number of layers, the gradient decays only polynomially with the number of qubits $n$ and layers $L$. Models with the proposed initialization are evaluated on a variational quantum eigensolver setup to find the ground state of the Heisenberg model and LiH Hamiltonian. On the considered examples, Gaussian initialization appears to outperform the setting where parameters are initialized uniformly. The main contribution of the paper is the lower bound on the gradient. This proof is very non-trivial and involves techniques and intermediate results that I think may be of interest when analyzing properties of Gaussian initializations in quantum circuits in general. I have not checked every statement, but I am overall convinced of the correctness of the proofs. On a technical level, I think the contribution of the paper is solid. The experimental results also line up with the conclusions drawn, indicating that the phenomenon described may have applicability beyond the theorem setting. I have two concerns: firstly, due to the Gaussian initialization I am not sure that the proof in the paper suffices to say that the gradients are lower bounded *throughout* training, since the distribution has morphed from that at initialization. Note that this was not an issue for the original derivation of 'barren plateaus' as Haar distributions are invariant under the shifts induced by training. The second is that the restriction to Gaussians with deviation decaying as 1/L essentially restricts the initialization to a constant neighborhood of the identity. This assumption seems to put more of a bias on the initialization than most existing approaches, and may create the possibility of adversarial problems where convergence is heavily slowed. Experimentally, the advantage over initializing the parameters to zero seems like an artifact of there being a stationary point at the identity, and the addition of some noise to perturb the initial state (in the noisy simulations) seems to remove most observed advantage for the proposed scheme. Yes <doc-sep>Variational quantum circuits are parametrized models that can be trained to perform mappings using gradient descent methods. In this paper, the authors propose an initialisation strategy to avoid the problem of vanishing gradients that occurs when the number of qubits and the circuit depth grow. - Strengths: - Well-written - New initialisation: Several initialisation strategies have been studied for classical neural networks but few existing works extend these results to the quantum case.
In this work, the authors apply the Gaussian initialisation strategy to variational quantum circuits and study how it may affect the training procedure by providing a theoretical and experimental analysis. - Theoretical analysis: The authors start by describing the Gaussian initialisation technique and provide theoretical guarantees in different settings. The first setting corresponds to the case when the circuit architecture is made using trainable 1-qubit gates and the output is projected using local observables. These results are then extended to the global observable case and 2-qubit gates. - Experimental analysis: The authors apply their technique to two quantum machine learning problems where they perform numerical simulations to study experimentally the training behaviour of the parameters. - Weaknesses: - The zero initialisation strategy seems to be fine for the performed experiments. No <doc-sep>This paper introduces a new initialization strategy for quantum variational circuits. This Gaussian initialization strategy is shown to exponentially increase the lower bound on the gradient, with substantial implications for addressing optimization concerns of medium to large scale quantum machine learning models. Two empirical examples are provided, which show this initialization strategy demonstrates an improvement in performance. ============================== Note: after author responses score 6 -> 7 Pros: - This work provides a very important improvement in gradient bounds. Given the scale of concerns for optimising QVCs, this is a very exciting result. - The results for global observables are especially interesting, since previous works have focused on the benefits of local observables to trainability. - The paper is generally well written and conveys the point effectively - Circuit diagrams are well done and add to the understandability - Code is provided in supplementary material, which greatly improves replicability and experimental verification - The survey of related work is both useful and extensive Cons: - The biggest problem is the empirical results. Although these experiments are just examples and the main result is the theoretical proofs, they don't add as much as they could. Using shot noise instead of added measurement noise would improve the realism. It would also be beneficial to add examples with more realistic circuit noise (e.g. depolarizing channels). Additionally, showing the gradient norm in Figure 3 (like in Figure 2) would be beneficial - Citations could be condensed, e.g. "quantum simulations [14, 15, 16, 17, 18, 19, 20, 21, 22, 23]" -> "quantum simulations [14-23]" - Empirical comparisons to other initialization strategies would be beneficial (e.g. block initialization) - The horizontal lines on the graphs don't aid interpretability. The authors sufficiently addressed the potential negative societal impact of the work. <doc-sep>The authors demonstrate that over a Gaussian prior of appropriate width, the second moment of the derivatives of a class of variational quantum algorithms (VQAs) is only polynomially small in the problem size $n$ and circuit depth $L$ (for constant problem locality $S$), which is exponentially larger than the traditional barren plateau bounds (taken uniformly over parameters). The authors also give a bound for the case when the locality $S$ grows with the problem size, which lower bounds the expected derivative second moment by (a finite fraction of) the initial squared derivative. 
The authors then demonstrate numerically that their initialization scheme gives much better optimization performance than uniform initialization in a variety of VQA tasks. The introduced bounds are novel, and though I did not check the proofs in complete detail, the authors' technical methods and proofs seem correct. I also enjoyed that these results are essentially a more rigorous understanding of the intuitive fact that training VQAs while near the identity should be similar to low-depth VQAs, and not experience barren plateaus. I think a couple of weaknesses of the paper, though, are that it maybe claims too much from the shown results. First, in their discussion of Theorem 4.2 (and Corollary 4.3), the authors imply that their results are enough to show that Gaussian initialization completely absolves VQAs with global cost functions from barren plateaus. However, these results only lower bound the second moment of the derivative by (a finite fraction of) the initial square of the derivative. This initial derivative can be very small; see, for instance, the global cost function warm-up example in "Cost function dependent barren plateaus in shallow parametrized quantum circuits" (Cerezo et al. 2021), where this initial derivative is zero, giving a trivial bound. I would have enjoyed more discussion (or examples) arguing that this bound is typically only polynomially (not superpolynomially) small. Second, the authors' results (including now also Theorem 4.1) still rely on assumptions in the training of the model. Namely, once training is far away from the initial $\vec{\theta}=\vec{0}$, it is no longer well-approximated by a Gaussian of polynomially small variance. In fact, for a number of parameters growing superlogarithmically with $n$, roughly the volume of the "allowed" region where these results are expected to hold is superpolynomially small in the volume of parameter space (polynomially small in diameter). This (that is, superlogarithmically large depth) is the regime where previous barren plateau results kick in (i.e. when averaged uniformly over parameters), and I suspect they may be related. I recommend the authors make this limitation of their work more clear. These limitations aside, I still find the work a nice, rigorous interpretation of a common approach to circumventing barren plateaus; when one has a good guess for where in parameter space the optimum is (say, near $\vec{\theta}=\vec{0}$), and expects optimization to stay within this region, barren plateaus may be avoided. I previously discussed what limitations of the work I believe should be more explicitly mentioned by the authors; namely, I suggest tempering the claims that Gaussian initialization solves all instances of barren plateaus, and providing more examples or intuition as to situations where one might expect the implicit assumptions on training (i.e. staying near $\vec{\theta}=\vec{0}$) to hold.
The authors propose a new random initialization of quantum neural networks which could avoid generating vanishing gradients. Specifically, the new random (Gaussian) initialization scheme will depend on the shape of the ansatz so that the norm of the gradient decays at most polynomially when the qubit number and the circuit depth increase. This finding is also supported by the associated empirical study. The reviewers consider this an important step toward the understanding of the trainability of variational quantum circuits. However, some limitations of the proposal are also discussed in the reviews, and we hope the authors can make an explicit discussion of these limitations in the final version.
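As a concrete illustration of the initialization the reviewers discuss, here is a minimal sketch (not the authors' code; the layered single-qubit-rotation ansatz, the qubit/layer counts, and the function names are placeholder assumptions) that draws each rotation angle from a zero-mean Gaussian with variance 1/L, next to the uniform baseline associated with barren plateaus:

```python
import numpy as np

def gaussian_init(num_qubits: int, num_layers: int, seed: int = 0) -> np.ndarray:
    """Draw one rotation angle per qubit and layer from N(0, 1/L).

    With variance ~ 1/L, the angles shrink as the circuit deepens, keeping the
    circuit in a neighborhood of the identity -- the regime the reviews discuss.
    """
    rng = np.random.default_rng(seed)
    std = np.sqrt(1.0 / num_layers)          # variance 1/L  ->  std 1/sqrt(L)
    return rng.normal(loc=0.0, scale=std, size=(num_layers, num_qubits))

def uniform_init(num_qubits: int, num_layers: int, seed: int = 0) -> np.ndarray:
    """Baseline: angles drawn uniformly from [0, 2*pi), the 'fully random' setting."""
    rng = np.random.default_rng(seed)
    return rng.uniform(0.0, 2.0 * np.pi, size=(num_layers, num_qubits))

# Example: 8 qubits, 20 layers -- the Gaussian angles concentrate near zero.
theta = gaussian_init(num_qubits=8, num_layers=20)
print(theta.std())  # roughly 1/sqrt(20) ~ 0.22
```

This also makes the reviewers' main caveat easy to see: the bound is stated for this distribution at initialization, and once training moves the parameters away from the identity neighborhood, the assumption behind the lower bound no longer holds automatically.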
This contribution describes a novel approach for implanted brain-machine interfaces in order to address the calibration problem and covariate shift. A latent representation is extracted from SEEG signals and is the input of an LSTM trained to predict muscle activity. To mitigate the variation of neural activities across days, the authors compare a CCA approach, a Kullback-Leibler divergence minimization and a novel adversarial approach called ADAN. The authors evaluate their approach on 16 days of recordings of neurons from the motor cortex of a rhesus monkey, along with EMG recordings of the corresponding arm and hand. The results show that the domain adaptation from the first recording is best handled with the proposed adversarial scheme. Compared to CCA-based and KL-based approaches, the ADAN scheme is able to significantly improve the EMG prediction, requiring a relatively small calibration dataset. The individual variability in day-to-day brain signals is difficult to harness, and this work offers an interesting approach to address this problem. The contributions are well described, the limitations of CCA and KL are convincingly presented and are supported by the experimental results. The careful work on the figures helps to provide a good understanding of the benefit of this approach. Some parts could be improved. The results of Fig. 2B to investigate the role of latent variables extracted from the trained autoencoder are not clear, and the simultaneous training could be better explained. As the authors claim that their method allows unsupervised alignment of neural recordings, independently of the task, an experiment on another dataset could reinforce this claim.<doc-sep>Here the authors define a BMI that uses an autoencoder -> LSTM -> EMG. The authors then address the problem of data drift in BMI and describe a number of domain adaptation algorithms, from simple (CCA) to more complex (ADAN), to help ameliorate it. There are a lot of extremely interesting ideas in this paper, but the paper is not particularly well written, and the overall effect to me was confusion. What problem is being solved here? Are we describing using latent variables (AE approach) for BMI? Are we discussing domain adaptation, i.e. handling the nonstationarity that so plagues BMI and array data? Clearly the issue of stability is being addressed but how? A number of different approaches are described, from creating a pre-execution calibration routine whereby trials on the given day are used to calibrate to an already trained BMI (e.g. required for CCA), to putting data into an adversarial network trained on data from earlier days. Are we instead attempting to show that a single BMI can be used across multiple days? This paper is extremely interesting but suffers from lack of focus, rigor, and clarity. Focus: is the idea the AE to RNN to EMG pipeline, or the comparison of domain adaptation approaches via CCA/KLDM/ADAN? Of course a paper can explore multiple ideas, but in this case the comparisons and controls for both are not adequate. Rigor: What are meaningful comparisons for the AE and DA portions? The AE part is strongly related to either Kao 2017 or Pandarinath 2018, but nothing like that is compared. The domain adaptation part evokes the data augmentation strategies of Sussillo 2016, but that is not compared. If I were reviewing this manuscript for a biological journal, a rigorous standard would be online BMI results in two animals. Is there a reason why this isn't the standard for ICLR? 
Is the idea that non-biological journals / conferences are adequate to vet new ideas before really putting them to the test in a biological journal? The manuscript is concerned with the vexing problem of BMI stability over time, which seems to be a problem where online testing in two animals would be critical. (I appreciate this is a broader topic relevant to the BMI field beyond just this paper, but it would be helpful to get some thinking on this in the rebuttal). Clarity: This paper needs to be pretty seriously clarified. The mathematical notation is not adequate to the job, nor is the motivation for the varied methodology. I cannot tell if the subscript is for time or for day. Also, what is the difference between z_0 vs. Z_0? I do not know what exactly is going into the AE or the ADAN. The neural networks are not described to a point where one could reproduce this work. The notation for handling time is inadequate. E.g. despite repeated readings I cannot tell how time is handled in the auto-encoder, e.g. is the n x t matrix vectorized vs. feeding an n-sized vector one time step at a time? Questions: What is the point of the latent representation in the AE if it is just fed to an LSTM? Is it to compare to not using it? Page 3, how precisely is time handled in the AE? If time is just vectorized, how can one get real-time readouts? In general there is not enough detail to understand what is implemented in the AE. If only one time slice is entered into the AE, then it seems clear the AE won't be very good because one desires a latent representation of the dynamics, not of single time slices. How big is the LSTM used to generate the EMG? It seems like the most relevant baseline is to compare to the data perturbation strategies in Sussillo 2016. If you have an LSTM already up and running to predict EMG, this seems very doable. Page 4, "We then use an ADAN to align either the distribution of latent variables or the distributions of the residuals of the reconstructed neural data, the latter a proxy for the alignment of the neural latent variables." This sentence is not adequate to explain the concepts of the various distributions, the residuals of the reconstructed neural data (where do the residuals come from?), and why one is a proxy for the other. Please expand this sentence into a few sentences, if necessary, to define these concepts for the naive reader. Page 5, What parameters are minimized in equation (2)? Please expand the top sentence of page 5. Page 6, top - "In contrast, when the EMG predictor is trained simultaneously with the AE…" Do you mean there is again a loss function defined by both EMG prediction and AE and summed, and then backprop is used to train both in an end-to-end fashion? Please clarify. Page 8, How do the AE results and architecture fit into the EMG reconstruction "BMI" results? Is it that all decoding results are first put through the AE -> LSTM -> EMG pipeline? I.e. your BMI is neural data -> AE -> LSTM -> EMG? If so, then how do the ADAN / CCA and KLDM fit in? You first run those three DA algorithms and then pipe it through the BMI? Page 8, How can you say that the BMI improvement of 6% is meaningful to the BMI user if you did not test the BMI online? <doc-sep>The paper considers invasive BMIs and studies various ways to avoid daily recalibration due to changes in the brain signals. While I like the paper and the studied methods -- using adversarial domain adaptation is interesting in this context -- I think that the authors oversell a bit. The problem of nonstationarity resp. 
stability is an old one in non-invasive BCIs (Shenoy et al., JNE 2006, was among the first) and a large number of prior methods have been defined to robustify feature spaces, to project to stable subspaces, etc. Clearly no GANs at that time. The least the authors could do is to make reference to this literature; some methods may even apply to the invasive data of the paper. While the authors did not clearly say so, they present an offline analysis; one method, the GAN, gets 6% better results than the competitors. I am not sure whether this is practically relevant in an online setting. But this needs to be clearly discussed in the paper and put into perspective to avoid a wrong impression. Only an online study would be convincing. Overall, I think the paper could be accepted, the experiments are nice, the data is interesting, if it is appropriately toned down (avoiding statements about having done something for the first time) and proper references to prior work are given. It is an interesting application domain. I additionally recommend releasing the data upon acceptance.
BMIs need per-patient and per-session calibration, and this paper seeks to amend that. Using VAEs and RNNs, it relates sEEG to sEMG, in principle a ten-year-old approach, but does so using a novel adversarial approach that seems to work. The reviewers agree that the approach is nice and that the statements in the paper are too strong, but publication is recommended. Clinical evaluation is an important next step.
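For readers trying to picture the pipeline the reviewers are debating, here is a schematic sketch of the neural data -> AE -> LSTM -> EMG decoder (my reading of the reviews, not the authors' implementation; all layer sizes, class names, and the fake data are placeholder assumptions):

```python
import torch
import torch.nn as nn

class NeuralAutoencoder(nn.Module):
    """Maps binned firing rates (n_neurons) to a low-dimensional latent space and back."""
    def __init__(self, n_neurons: int, n_latent: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_neurons, 64), nn.ReLU(), nn.Linear(64, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 64), nn.ReLU(), nn.Linear(64, n_neurons))

    def forward(self, x):                      # x: (batch, time, n_neurons)
        z = self.encoder(x)
        return self.decoder(z), z

class EMGPredictor(nn.Module):
    """LSTM that maps a sequence of latents to EMG channels, one output per time step."""
    def __init__(self, n_latent: int, n_emg: int, hidden: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(n_latent, hidden, batch_first=True)
        self.readout = nn.Linear(hidden, n_emg)

    def forward(self, z_seq):                  # z_seq: (batch, time, n_latent)
        h, _ = self.lstm(z_seq)
        return self.readout(h)                 # (batch, time, n_emg)

# Joint ("simultaneous") training on day 0: reconstruction loss + EMG prediction loss.
ae, emg_net = NeuralAutoencoder(n_neurons=96, n_latent=10), EMGPredictor(n_latent=10, n_emg=12)
x, emg = torch.randn(8, 50, 96), torch.randn(8, 50, 12)   # fake neural and EMG data
x_hat, z = ae(x)
loss = nn.functional.mse_loss(x_hat, x) + nn.functional.mse_loss(emg_net(z), emg)
loss.backward()
```

On later days, an alignment step (CCA, KLDM, or the adversarial ADAN discussed above) would map the new day's neural activity back toward the day-0 latent distribution so that the fixed LSTM decoder can be reused without full recalibration.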
The authors present a method for unsupervised alignment of word embeddings across multiple languages. In particular, they extend an existing unsupervised bilingual alignment method to the case of multiple languages by adding constraints to the optimization problem. The main aim is to ensure that the embeddings can now be composed and the performance (alignment quality) does not degrade across multiple compositions. Strengths - Very clearly written - A nice overview of existing methods and correct positioning of the authors' contributions in the context of these works - A good experimental setup involving multiple languages Weaknesses - I am not sure how to interpret the results in Table 2 and Table 3 (see questions below). Questions - On page 7 you have mentioned that "this setting is unfair to the MST baseline, since ...." Can you please elaborate on this? I am not sure I understand this correctly. - Regarding results in Table 2 and 3: It seems that there is a trade-off while adding constraints which results in poor bilingual translation quality. I am not sure if this is acceptable. I understand that your goal is to do indirect translation but does that mean we should ignore direct translation? - In Table 3 can you report both W-Proc and W-Proc* results? Is it possible that the GW-initialization helps bilingual translation, as the performance of W-Proc* is clearly better than W-Proc in Table 2? However, could it be the case that this somehow affects the performance in the indirect translation case? IMO, this is worth confirming. - In Table 3, you are reporting average accuracies across and within families. I would like to see the numbers for all language pairs independently. This is important because when you consider the average it is quite likely that for some language pair the numbers were much higher, which tilts the average in favor of some approach. Also looking at the individual numbers will help us get some insights into the behavior across language pairs. - In the motivation (Figure 1) it was mentioned that compositions can be done (and are often desirable) along longer paths (En-Fr-Ru-It). However, in the final experiments the composition is only along a triplet (X-En-Y). Is that correct or did I misinterpret the results? If so, can you report the results when the number of compositions increases? <doc-sep>This paper is concerned with the idea of inducing multilingual word embeddings (i.e., word vector spaces where words from more than two languages are represented) in an unsupervised way using a mapping-based approach. The main novelty of the work is a method, inspired by recent work of Nakashole and Flauger, and building on the unsupervised bilingual framework of Grave et al., which aims at bypassing the straightforward idea of independently mapping N-1 vector spaces to the N-th pivot space by adding constraints to ensure that the learned mappings can be composed (btw., it is not clear from the abstract what this means exactly). In summary, this is an interesting paper, but my impression is that it needs more work to distinguish itself from prior work and stress the contribution more clearly. Although 11 languages are used in evaluation, the authors still limit the evaluation only to (arguably) very similar languages (all languages are Indo-European and there are no outliers, distant languages or languages from other families at all, not even the usual suspects like Finnish and Hungarian). 
Given the observed instability of GAN-based unsupervised bilingual embedding learning, dissected in Sogaard et al.'s paper (ACL 2018) and also touched upon in the work of Artetxe et al. (ACL 2018), one of the critical questions for this work should also be: is the proposed method stable? What are the (in)stability criteria? When does the method fail and can it lead to sub-optimal solutions? What is the decrease in performance when moving to a more distant language like Finnish, Hungarian, or Turkish? Is the method more robust than GAN-based models? All this has to be at least discussed in the paper. Another question is: do we really want to go 'fully unsupervised' given that even a light and cheap source of supervision (e.g., shared numerals, cognates) can already result in more robust solutions? See the work of Artetxe et al. (ACL 2017, ACL 2018), Vulic and Korhonen (ACL 2016) or Sogaard et al. (ACL 2018) for some analyses on how the amount of bilingual supervision can yield more (or less) robust models. Is the proposed framework also applicable in weakly-supervised settings? Can such settings with weak supervision guarantee increased robustness (and maybe even better performance)? I have to be convinced more strongly: why do we need fully unsupervised multilingual models, especially when evaluation is conducted only with resource-rich languages? Another straightforward question is: can the proposed framework handle cases where there exists supervision for some language pairs while other pairs lack supervision? How would the proposed framework adapt to such scenarios? This might be an interesting point to discuss further in Section 5. Style and terminology: it is not immediately clear what is meant by (triplet) constraints (which is one of the central terms in the whole work). It is also not immediately clear what is meant by composed mappings, hyper-alignment (before Section 4), etc. There is also some confusion regarding the term alignment as it can define mappings between monolingual word embedding spaces as well as word-level links/alignments. Perhaps using mapping instead of alignment might make the description more clear. In either case, I suggest clearly defining the key concepts for the paper. Also, the paper would benefit immensely from some running examples illustrating the main ideas (and maybe an illustrative figure similar to the ones presented in, e.g., Conneau et al.'s work or Lample et al.'s work). The paper concerns word translation and cross-lingual word embeddings, and there isn't a single example that serves to clarify the main intuition and lead the reader through the paper. The paper is perhaps too much focused on the technical execution of the idea to my own liking, forgetting to motivate the bigger picture. Other: the part on "Language tree" prior to "Conclusion" is not useful at all and does not contribute to the overall discussion. This could be safely removed and the space in the paper should be used for additional comparisons with more baselines (see above for some baselines). The authors mention that their approach is "relatively hard to scale" only in their conclusion, while algorithmic complexity remains one of the key questions related to this work. I would like to see some quantitative (time) measurements related to the scaling problem, and a more thorough explanation of why the method is hard to scale. 
The complexity and non-scalability of the method were among my main concerns while reading the paper, and I am puzzled to see some remarks on this aspect only at the very end of the paper. Going back to algorithmic complexity, I think that this is a very important aspect of the method to discuss explicitly. The authors should provide, e.g., O-notation complexity for the three variant models from Figure 2 and help the reader understand the pros and cons of each design also when it comes to their design complexity. Is the only reason to move from the star model to the HUG model computational complexity? This argument has to be stressed more strongly in the paper. Two very relevant papers have not been cited nor compared against. The work of Artetxe et al. (ACL 2018) is an unsupervised bilingual word embedding model similar to the MUSE model of Conneau et al. (ICLR 2018) which seems more robust when applied on distant languages. Again, going back to my previous comment, I would like to see how well HUG fares in such more challenging settings. Further, a recent work of Chen and Cardie (EMNLP 2018) is a multilingual extension of the bilingual GAN-based model of Conneau et al. Given that the main goal of this work and Chen and Cardie's work is the same: obtaining multilingual word embeddings, I wonder how the two approaches compare to each other. Another, more general comment concerns the actual evaluation task: as in prior work, it seems that the authors optimise and evaluate their embeddings solely on the (intrinsic) word translation task, but if the main goal of this research is to boost downstream tasks in low-resource languages, I would expect additional evaluation tasks beyond word translation to make the paper more complete and convincing. The method relies on a wide spectrum of hyper-parameters. How are these hyper-parameters set? How sensitive is the method to different hparams configurations? For instance, why is the Gromov-Wasserstein approach applied only to the first 2k vectors? How are the learning rate and the batch size determined? Minor: What is W in line 5 of Algorithm 1? Given the large number of symbols used in the paper, maybe a table of symbols put somewhere at the beginning of the paper would make the paper easier and more pleasant to read. I would also compare the work to another relevant supervised baseline: the work from Smith et al. (ICLR 2017). This comparison might further strengthen the main claim of the paper that indirect translations can also be found without degrading performance in multilingual embedding spaces.<doc-sep>This is a work regarding the alignment of word embeddings for multiple languages. Though there are existing works similar to this one, most of them consider only a pair of languages, resulting in the composition issue mentioned in this work. The authors propose a way of using a regularization term to reduce such degraded accuracy and demonstrate the validity of the proposed algorithm via experiments. I find the work to be interesting and well written. Several points that I want to bring up: 1. The language tree at the end of section 5 is very interesting. Does it change if the initialization/parameters are different? 2. The matrix P in (1) is simply a standard permutation matrix. I think the definitions are redundant. 3. The experimental results are expected since the algorithms are designed for better composition quality. An additional experiment, e.g. 
classification of instances in multiple languages, could further help demonstrate the strength of the proposed technique. 4. How should the regularization parameter \mu be chosen, and what is the effect of \mu? 5. Some writing issues: e.g. for the notation of the set of orthogonal matrices, both \mathcal{O} and \mathbb{O} are used.
This paper provides a simple and intuitive method for learning multilingual word embeddings that makes it possible to softly encourage the model to align the spaces of non-English language pairs. The results are better than learning just pairwise embeddings with English. The main remaining concern (in my mind) after the author response is that the method is less accurate empirically than Chen and Cardie (2018). I think, however, that given that these two works are largely contemporaneous, the methods are appreciably different, and the proposed method also has advantages with respect to speed, the paper here is still a reasonable candidate for acceptance at ICLR. However, I would like to request that in the final version the authors feature Chen and Cardie (2018) more prominently in the introduction and discuss the theoretical and empirical differences between the two methods. This will make sure that readers get the full picture of the two works and understand their relative differences and advantages/disadvantages.
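To make the composition idea concrete, the toy sketch below (an illustration only, not the paper's objective; the function name, dimensions, and random orthogonal maps are assumptions) shows the kind of triplet penalty that keeps a direct mapping consistent with the composition of two indirect ones:

```python
import numpy as np

def composition_penalty(W_xy: np.ndarray, W_yz: np.ndarray, W_xz: np.ndarray) -> float:
    """Soft triplet constraint: composing the x->y and y->z maps should match the direct x->z map.

    A full objective would add the (unsupervised) pairwise alignment terms and keep each
    map (approximately) orthogonal; this only illustrates the composition term.
    """
    return float(np.linalg.norm(W_yz @ W_xy - W_xz, ord="fro") ** 2)

d = 300                                        # typical word-embedding dimension
rng = np.random.default_rng(0)
# Random orthogonal matrices as stand-ins for learned alignment maps.
W_xy, W_yz = (np.linalg.qr(rng.normal(size=(d, d)))[0] for _ in range(2))
W_xz_consistent = W_yz @ W_xy                  # a direct map that composes perfectly
W_xz_inconsistent = np.linalg.qr(rng.normal(size=(d, d)))[0]

print(composition_penalty(W_xy, W_yz, W_xz_consistent))    # 0.0
print(composition_penalty(W_xy, W_yz, W_xz_inconsistent))  # large positive value
```

Minimizing such a penalty alongside the pairwise alignment losses is what trades a small amount of direct (bilingual) accuracy for better indirect (composed) translations, which is exactly the trade-off the reviewers ask the authors to quantify per language pair.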
In this paper, the authors introduce a novel Transformer (or Network) architecture, termed Focal Modulation Network. FocalNet deals with the problem of efficient long-range feature modeling. Different from window-wise self-attention (Swin) and focal attention, FocalNet adaptively aggregates surrounding tokens from different levels of granularity. This is novel and interesting. Extensive experiments on MS COCO, ImageNet, and ADE20k demonstrate the effectiveness of the proposed method. In summary, the idea is interesting, and the performance is promising; I would like to recommend acceptance. **Strengths:** 1. I enjoyed reading this paper, the writing, and the presentation. 2. The idea is interesting and novel. Considering different levels of granularity makes sense in image processing. 3. The experimental results are promising, demonstrating state-of-the-art performance. **Weaknesses:** 1. From L124-L132, it is easy to understand the Translation invariance and Decoupled feature granularity, but why is input-dependency considered an advantage? Also, the spatial- and channel-specific property is achieved by depth-wise operations and cannot be considered an advantage. 2. I like the idea of different levels of granularity, but this can simply be considered as a multi-scale depth-wise convolution in the implementation (Fig. 2c), which limits the novelty. Multi-scale/multi-branch always works and introduces no novelty. 3. Some important papers are missing (not compared), like ConvNeXt, which only uses depth-wise convolution as well, and MixFormer, which can be considered a strong baseline. Both are published in CVPR'22 and were released much earlier than the submission deadline. The authors should cite and compare with these baselines. [1] Liu, Zhuang, et al. "A convnet for the 2020s." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. [2] Chen, Qiang, et al. "MixFormer: Mixing Features across Windows and Dimensions." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. The authors adequately addressed the limitations and potential negative societal impact of their work. <doc-sep>This paper proposes Focal Modulation, which uses context at multiple spatial scales from a stack of convolutions combined with gated aggregation to produce a modulation. This modulates each query point through elementwise multiplication. Focal Modulation is evaluated as a drop-in replacement for Self-Attention. The method is tested in several experiments against strong baselines, and has a thorough ablation analysis. ## Update after author response Thank you for the responses and further experiments. To be clear, I was mostly concerned with the kernel size experiment as a way to glean the variance of other results. Since you have provided direct experiments to address that concern, I am more confident that this is a paper worthy of acceptance. The paper proposes a novel method for improving computer vision architectures based on visual transformers. Focal Modulation is a clever way to include expanded context at each layer while simultaneously removing expensive pairwise interaction terms present in self-attention. The method is described clearly, the diagrams are illustrative, and the experimental evaluations and ablations are thorough. This paper is a solid contribution to an important field of computer vision. However, I would like to see some better estimate of the variance of results. 
For example, in Figure 5 in the appendix, the mAP results vary considerably depending on the setting of the kernel size. There seems to be little relationship between the kernel size and the mAP, but the mAP varies considerably (41.2 to 41.6). The limitations and societal impact are well addressed. <doc-sep>The paper proposes a focal modulation module that is more effective and efficient for modeling token interactions. Their main contribution is proposing an efficient way to model input-dependent long-range contextual interactions. The authors conduct experiments on the tasks of image classification, detection, and segmentation. The experimental results show SoTA performance. + The performance achieves SOTA on almost all tasks. + The paper is well-written. + The authors conduct abundant ablation studies to validate the effectiveness of each design in the proposed method. yes <doc-sep>This paper proposes a focal modulation module to replace the attention module in Transformers. Specifically, this module contains Hierarchical Contextualization (several layers of depthwise convolutions), Gated Aggregation, and element-wise modulation to fuse information from the token itself and the context. Actually, the element-wise modulation can also be regarded as a gate mechanism. Thus, this module sounds like a convolution + gate mechanism. This paper does extensive experiments on various vision tasks to show the advantages of the proposed modules. Strengths: 1) The writing of this paper is clear and integrated. 2) The experiments in this paper are extensive, including several vision tasks and ablation studies. It is convincing that FocalNet is better than Swin and Focal Transformers. Neutrality: Focal Modulation can be regarded as convolutions plus gate mechanisms. The idea is OK, not incremental but not good enough. Thus, I place the idea aspect under neutrality. Weaknesses: In ablation table 9, about the fusion between the token itself and the context, it only shows the experiment that replaces Multiplication with Addition. What about totally removing the query branch and moving the parameters and computation to other components? Yes
All the reviewers acknowledge that the paper is well-written, novel, and shows strong performance gain. Besides, all the reviewers are satisfied with the authors' response to the raised concerns. AC double-checks the paper, reviews, and response, and finds that the paper is well-shaped and generally flawless. AC recommends acceptance.
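For concreteness, here is a simplified sketch of a focal-modulation-style block (not the official FocalNet code; the fixed kernel size per level, the gate layout, and the omitted modulator projection are simplifications I am assuming): depth-wise convolutions gather context at growing granularity, learned gates aggregate the levels plus a global pooled context, and the aggregated context modulates a per-token query by element-wise multiplication.

```python
import torch
import torch.nn as nn

class FocalModulationSketch(nn.Module):
    """Simplified focal modulation: hierarchical context + gated aggregation + modulation."""
    def __init__(self, dim: int, focal_levels: int = 3, kernel_size: int = 3):
        super().__init__()
        self.q = nn.Conv2d(dim, dim, 1)                         # per-token query projection
        self.ctx_in = nn.Conv2d(dim, dim, 1)
        self.gates = nn.Conv2d(dim, focal_levels + 1, 1)        # one gate per level + global
        self.levels = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(dim, dim, kernel_size, padding=kernel_size // 2, groups=dim),
                nn.GELU(),
            )
            for _ in range(focal_levels)                        # hierarchical contextualization
        ])
        self.out = nn.Conv2d(dim, dim, 1)

    def forward(self, x):                                       # x: (B, C, H, W)
        q, gates, ctx = self.q(x), self.gates(x), self.ctx_in(x)
        modulator = torch.zeros_like(q)
        for level, conv in enumerate(self.levels):
            ctx = conv(ctx)                                     # growing receptive field
            modulator = modulator + ctx * gates[:, level:level + 1]
        glob = ctx.mean(dim=(2, 3), keepdim=True)               # global context as last level
        modulator = modulator + glob * gates[:, -1:]
        return self.out(q * modulator)                          # element-wise modulation

y = FocalModulationSketch(dim=64)(torch.randn(2, 64, 14, 14))
print(y.shape)  # torch.Size([2, 64, 14, 14])
```

Because each level is just a depth-wise convolution plus a learned gate, this sketch also shows why one reviewer reads the module as "multi-scale depth-wise convolution plus a gate mechanism": the element-wise product with the query is what replaces the pairwise query-key interaction of self-attention.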
Summary: The work extends an existing algorithm to train a convolutional neural network by selecting a subset of fixed random weights by (1) using multiple random values per weight, (2) using a sampling procedure to select the particular random value for each weight. If these networks are finetuned after random-value selection is performed, they perform close to networks purely trained with SGD on CIFAR-10 and MNIST. Strong points: - Interesting results that extend results on the lottery ticket hypothesis and random weight learning. Weak points: - The paper builds heavily on an existing algorithm, but it does not cite it in the method section. - The paper makes many claims and fails to provide evidence for these claims. - The work is not clearly motivated. It is unclear why this problem is interesting or the results insightful. Recommendation (short): While the results are interesting, the paper is poorly motivated and does not conform to standards of scientific work. I recommend rejecting this work. Recommendation (long): The method section appears to be independent work, but it is the same as Ramanujan et al., 2019 extended with multinomial sampling. While Ramanujan et al., 2019 are cited, it is not cited in this section. Furthermore, the paper claims that randomly initialized networks and trained networks perform the same, but it does not lay out evidence or an argument for this. Otherwise, it is unclear why this method is interesting. I recommend rejecting this work. Comments for authors: I think these are some good initial results that you have, and you can work with that, but right now, the paper has many flaws that need to be ironed out before you make another attempt. I do not think you can get this work accepted in this round and instead should try to learn as much as possible from the discussion for a resubmission. I think fixing the claim, references, and so forth will be easy. The hard question is: why is your work interesting? Using multiple random values per weight can be seen as some form of quantization of weights. Why is your way of doing something similar to quantization more interesting than other forms of quantization? Please also consider if it is true that selecting fixed random weights is training or not. Clearly, you are optimizing weights. It does not matter if you do it with SGD, an evolutionary strategy, or your algorithm; in the end, you optimize weights to have particular values. I would say that it should be considered training. But if your method is considered just a different optimization process compared to SGD, why is it interesting? Finding subnetworks, like done in Ramanujan et al., 2019, is interesting because you have smaller trained networks, but you do not have subnetworks. It could be interesting if you do (1) a thorough analysis that yields some insights and make this an analysis paper, or (2) try to get better performance by doing both optimization of weights and selection of weights (but this is very similar to Wortsman et al. 2019). Some minor things: - Equation 3 has an additional W (h(*) already contains the W) - Figure 6 has an annotation error; I believe the upper line is supposed to be the PS method Ramanujan et al., 2019: What's Hidden in a Randomly Weighted Neural Network?, https://arxiv.org/abs/1911.13299 Wortsman et al., 2019: Discovering Neural Wirings, https://arxiv.org/abs/1906.00586 <doc-sep>### Summary The paper investigates a type of neural network in which one of K possible fixed weights is chosen in each neuronal connection. 
The weights themselves are fixed and random, but the scores that determine which of the weights is chosen are updated through back-propagation using a straight-through estimator. The accuracy and behavior of such networks is studied for small networks on small datasets (MNIST, CIFAR). ### Quality, clarity, originality and significance The paper is well-written and easy to follow. The main idea seems interesting at first sight and it is well-motivated, but after some consideration of related effects in neural networks, the results do not seem very surprising (see below). I think a deeper exploration of the connection to other phenomena would be necessary to make this paper relevant to the conference, e.g. to weight quantization (to few bits per weight) or to (variational) dropout. The paper seems to not go beyond ad-hoc conclusions of the form that these (peculiar) networks "perform competitively on challenging datasets" (which seems to be a bit of an overstatement to me). The authors also claim that the trained networks might be useful for initialization, but to really make this point strongly, a comparison to other practical methods of data-driven initialization on larger datasets with larger architectures might be needed to convince the reader. Why do the results not seem surprising to me: It is possible that I misunderstood the algorithm (which we could clarify in the rebuttal period, of course), but in my understanding of the described approach, the straight-through estimation of the scores will lead to preferring the selection of larger (or smaller) weights where standard gradient descent training would lead to larger (or smaller) weights. This is consistent with the distribution of the selected weights as shown in several figures, with the observation that uniform initialization works better than Normal initialization, and with the observation that "both GS and PS tend to prefer weights having large magnitudes as learning progresses". It would also account for the observation of the similarity in error rates when the network has a sufficient number of weights to choose from. * Pros: interesting idea, the paper is well-written * Cons: of limited interest to the conference audience I believe; not clear if there is practical relevance or potential to improve scientific understanding ### Detailed/minor comments * The main bullet points at the end of the introduction were not fully substantiated in the paper in my opinion: (1) I was not fully convinced of "a performance equivalence between random initialization and training", because the slot machines are effectively *trained*. (2)"demonstrates that current networks can model challenging non-linear mappings extremely well even with random weights" is not 100% clear because the weights are a *choice* among a set of *initially* random weights, but that choice is a result of training. (3)"connects to recent observations". This seems to happen mostly in the two sentences before 4.3 and there connection *and* statement are not entirely convincing to me "We find a random 6 layer network that performs as well as a 6 layer trained network without any form of pruning." Here it seems to me that the slot machine after training is in fact not random, because it was trained, and that training exploited weight-correlations that potentially span multiple layers. The argument could be extended to regular training in the sense that regular training just picks out the random weights among all the random floating point numbers. 
(This is a bit exaggerated, but I think it shows why the term "random 6 layer network" may be an overstatement after training.) * Why is K chosen from the set {2, 8, 64, 128} - it seems that some natural values in this sequence are missing or that a more log-uniform spread would be more "natural". Maybe there is a specific reason that is not obvious, then it could be mentioned. ### Update after author replies and discussion I have updated the review score after reading the authors' reply and revision of the paper.<doc-sep>########################################################################## Summary: This paper proposes a method to train a neural network by selecting a weight from a set of $K$ randomly generated weights for each edge in the network. Each edge has a different set of random weights. A quality score is assigned to each of the $K$ random weights, which determines the weight used in the forward calculation. Instead of optimizing weights directly, the proposed method optimizes the quality scores with the straight-through gradient estimator. Experimental results show that the neural network trained by the proposed method achieves high accuracy compared to random initialization even when $K=2$. ########################################################################## Reasons for score: Overall, I vote for rejecting. The authors say in Sec. 3 that the goal is to construct non-sparse neural networks with completely random weights that achieve high accuracy. However, the model obtained by the proposed method is no longer a network with completely random weights, because the authors optimize quality scores instead of original weights. It is empirically shown that a neural network can achieve a high accuracy by properly selecting weights from a set of random weights prepared in advance. However, such a result is not so surprising from the viewpoint that the quality scores are optimized. Also, this paper has few practical implications. I would like to see if the network can still achieve a high accuracy when every edge has a common set of $K$ random weights. If this is the case, the proposed method may lead to a network that is efficiently compressed. ########################################################################## Minor concerns: (p.1) a fixed a set of random weights -> a fixed set of random weights -- Regarding the author responses, I have updated my rating.<doc-sep>**Update after authors' response** I want to thank the authors for their response, and I am happy to see additional results on shared sets of weight values (which allows to easily relate the work to methods for training low-bit-width networks). To further increase impact and significance of the work it would be necessary to really flesh out the advantage of the proposed method over other, similar methods ("it is not too surprising that the method works, but why would I prefer it over other methods?"). Nonetheless, the paper presents novel empirical analysis that adds to the body of work on non-standard training of neural networks. To make a clear stance for the reviewer discussion I have therefore increased my score to 7, though I would rate the paper at the lower end of score-7 papers. --- **Summary** The paper proposes a novel scheme to obtain well-performing neural networks without classical adaptation of weights. 
Instead, each connection can have one out of K randomly drawn values, and “training” consists of a backpropagation-based procedure to find which value out of the K possible values to select for each weight. The method can be interpreted as finding a high-performing sub-network within a larger network of random weights. However, in contrast to previous methods that literally implement the latter, the proposed method is computationally more efficient. Experiments are performed on a number of neural network architectures on MNIST and CIFAR-10. --- **Contributions, Novelty, Impact** 1) Proposal of a novel scheme for finding well-performing networks without explicit training of weights. This is interesting and adds to a growing body of recent work on alternatives to classical training of neural nets (which is insightful for both, developing better training algorithms but also understanding the nature of neural network training). My concern is that the proposed method is conceptually very similar to previously known approaches (pruning a larger network which is also discussed in the paper, but also some methods for training low-bit-weight networks such as [1] and [2]). While the proposed method is an interesting alternative implementation, the advantages compared to the other approaches are fairly limited. Accordingly the potential impact of the work might be somewhat limited as well, I’m afraid. 2) A nice and extensive set of ablations and control experiments, as well as repetitions of experiments to establish statistical significance of the results. The paper, in particular the experimental part is well executed, and the ablations and controls allow for being optimistic about the generality of the findings, which has a positive influence on the potential impact of the work. 3) The paper shows that networks obtained with the proposed scheme can also act as a good initialization for further fine-tuning, leading to very well performing classifiers. This process is also analyzed in terms of overall computational budget (FLOPS) and in 2 of 3 cases shown compares favourably against standard neural network training. In terms of impact this is another nice result to add, but probably not strong enough to replace standard initialization anytime soon. [1] Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1, Courbariaux et al. 2016 [2] XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks, Rastegari et al. 2016 --- **Score and reasons for score** I’m quite torn with this paper - on one hand the method works well, is thoroughly analysed and the paper is very well written and polished (in many respects I would even say this is an exemplary paper). On the other hand the paper suffers from only adding a quite simple variation on existing work. Particularly the work on training low-bit-width networks ([1] and [2] above, as well as later extensions to more than single-bit weights) is conceptually very similar - the forward pass uses constrained weight values but gradient information is accumulated in a real-valued variable. As the appendix notes, the main idea could also be implemented simply via pruning in a larger network (though less computationally efficient). 
To further strengthen the paper it would be good if the paper could answer one (or both) of the following questions: (i) how do the results contribute to understanding weight initialization and the training process, what can the reader learn that wasn’t known already, (ii) what are the concrete advantages of the proposed method over previously proposed alternatives, what does it do better, what shortcomings does it address (is it faster, is training more stable, …)? I would love to give this paper a very high score because of the great execution and presentation, but the lack of novel insights, or clear methodological advantages makes this hard. I am currently voting for a weak accept (because the paper is very well written and experiments are thorough, if the scoring was based on novelty alone I’m not sure that the paper would clear the bar for a top-tier conference). I am of course happy to reconsider my final verdict in light of the other reviews and authors’ response. --- **Strengths** 1) Great presentation of the work, a very well written paper, and good main results. 2) Experiments are well executed - multiple repetitions, many controls and ablations that one wants to see to improve confidence in the generality of the findings. --- **Weaknesses** 1) Little novelty. The proposed algorithm is a nice idea, but it’s not too surprising that it operates essentially on par with a previously proposed pruning method since the algorithm can even be conceptually recast as pruning in a larger network. 2) The writing puts a lot of emphasis and focus on the distinctive features of the method (understandably so given how close it is compared to other methods). But I think it’s also fine to not start off in an almost defensive fashion and simply state that this is another possibility of implementing the same idea but with the following advantages (and disadvantages). --- **Correctness** Reasoning in the paper is sound, experiments are well executed and many control-experiments and ablations are shown. --- **Clarity** The paper is very well written, results are nicely presented and related literature is discussed in a useful fashion. --- **Improvements (that would make me raise my score) / major issues** 1) The main issue is a lack of novelty. Addressing either (i) how the paper adds new knowledge (in light of the current body of literature) or (ii) stating the precise advantages of the proposed method over alternatives would be crucial. I don’t see an obvious way for (i), but for (ii) a starting point could be to do more detailed comparison against other methods, in particular Ramanujan and see whether the proposed method compares favorably e.g. in terms of training stability of robustness w.r.t. hyper-parameters. 2) Another possibility to add novelty to the paper would be to focus on low-bit-width training, where instead of drawing K values for each weight separately, values are re-used per layer or even across the whole network (typically first and last layers need to be treated differently, i.e. they require more bit-width). Reliable and robust methods to obtain e.g. 2-, 4- or 8-bit networks is a timely and important topic, and the proposed method has potential to contribute to that body of work as well (though it is admittedly a bit of a deviation from the current story and main focus of the paper). I want to add this as a suggestion to the authors (it could work, but it could also severely reduce focus and clarity of the paper), not necessarily as an improvement that I’d expect to see. 
--- **Minor comments** A) The ICLR header seems to be missing in the PDF file. B) In the probabilistic version of the method it might be worth experimenting with some “annealing” schedule where the randomness of the method is gradually reduced over training (e.g. via reducing a Gumbel Softmax temperature). Making the magnitude of the scores very large has essentially the same effect but is less controllable since it has to be indirectly influenced via the learning rate. I would expect convergence/test-time performance of the method to benefit from such a schedule and perhaps even help close the gap to “greedy selection”. C) Is the supplementary material supposed to go into a separate document (did not double-check the submission instructions)? D) A bit of a nitpick, the phrase that “weights are never updated” might suggest some miraculous phenomenon - I’d rather say there’s a set of weight values, from which one value is probabilistically selected. So if one considers the weight of a particular connection as a random variable (across different forward passes) then the expected value over that random variable changes (smoothly) as training progresses - resembling a standard weight update process quite closely.
The idea behind this paper is to develop a training algorithm that chooses among a fixed set of weights for each true weight in a neural network. The results show reasonable -- though difficult to quantify as either good or surprising -- performance from the algorithm. A perhaps interesting point is that additional fine-tuning from these found networks can, in some cases, best the accuracy of the original network. The pro of this paper is that it is a neat, original idea. With the exception of the limited scale of the benchmarks (i.e., the selected architectures), the paper is largely well-executed. The primary shortcoming of the paper, as discussed by the reviewers, is the lack of clarity in its implications. Specifically, it is difficult to position the result as contributing to a practical aim or leading to additional future work. Based on the reviews and discussion, my recommendation is Reject. In particular, this paper would be significantly improved by bringing in a strong motivational context and, therefore, additional comparisons. For example, the context for the work of Ramanujan et al. (2019) is that, perhaps, it is possible to find subnetworks of large initialized networks that will permit more efficient training. In Appendix A, this paper proposes that the technique here could be cast as pruning within a much larger network. Following results from Zhu and Gupta [1] and also Ramanujan et al. (2019), finding a sparse network within a larger network can produce a more accurate network than training a network of equivalent size to the sparse one. Therefore, these results could, potentially, be cast as a more efficient way to perform the techniques of Ramanujan et al. (2019). Alternatively, the results that demonstrate that fine-tuning the identified networks improves performance over the standard network could be more robustly evaluated and perhaps cast as either an alternative training technique or leveraged as a technique like warm starting [2]. [1] To prune, or not to prune: Exploring the efficacy of pruning for model compression. Michael Zhu and Suyog Gupta. In International Conference on Learning Representations Workshop Track, 2018. [2] On Warm-Starting Neural Network Training. Jordan T. Ash, Ryan P. Adams. NeurIPS 2020
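To fix ideas, here is a sketch of a 'slot machine' style layer as described above (my reading of the reviews, not the authors' code; the candidate range, K, the class name, and the exact straight-through formulation are assumptions): each connection owns K fixed random candidate values, only the per-candidate scores are trained, and a straight-through estimator lets gradients reach the scores through the hard selection.

```python
import torch
import torch.nn as nn

class SlotLinear(nn.Module):
    """Linear layer whose weights are selected from K fixed random candidates per connection."""
    def __init__(self, in_features: int, out_features: int, k: int = 8):
        super().__init__()
        candidates = torch.empty(out_features, in_features, k).uniform_(-0.1, 0.1)
        self.register_buffer("candidates", candidates)            # fixed, never trained
        self.scores = nn.Parameter(torch.zeros(out_features, in_features, k))  # trained

    def forward(self, x):
        # Hard one-hot pick of the highest-scoring candidate per connection ("greedy selection")...
        hard = torch.zeros_like(self.scores).scatter_(
            -1, self.scores.argmax(dim=-1, keepdim=True), 1.0)
        # ...combined with a straight-through estimator so gradients flow into the scores.
        soft = torch.softmax(self.scores, dim=-1)
        selection = hard + soft - soft.detach()
        weight = (self.candidates * selection).sum(dim=-1)        # effective weight matrix
        return nn.functional.linear(x, weight)

layer = SlotLinear(784, 10, k=8)
layer(torch.randn(32, 784)).sum().backward()
print(layer.scores.grad is not None, layer.candidates.requires_grad)  # True False
```

This also makes the meta-review's framing visible: if the candidate values were shared across connections (rather than drawn per connection, as assumed here), the same machinery would turn into a low-bit-width quantization scheme, which is the comparison several reviewers ask for.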
This paper aims to solve the problem of optimizing a human-machine interface without supervision or prior knowledge about the user's desired tasks. This is challenging due to the large space of interface designs. This paper proposes a reinforcement-based interface update method by maximizing the mutual information between the user's commands and the induced state transition, where an underlying key idea is that "the more intuitive the interface, the less noisy the user's commands." The main contribution is the design of the mutual information objective/reward for enhancing interfaces. The experiment section also shows several studies to demonstrate the effectiveness of the updated interfaces and application examples. The main strength of this paper is the intuitive design of the mutual information reward, which considers both the user's influence on the next state and the reverse. The major weakness of this paper is that there is no comparison with baselines. It is unclear whether the given dataset/problem makes interface design easy. Further, it is still hard to clearly confirm the correlation between the mutual information and the true reward over the results, particularly in Fig. 2. N/A <doc-sep>This paper presents a method for building an adaptive user interface in an unsupervised manner. The key idea is to use the mutual information between the user input and the state transition. Based on this idea, the authors propose an algorithm to maximize the mutual information lower bound from a small amount of user operation data. The proposed approach is evaluated using five existing datasets, real-world user study data, and another expert user demonstration. ### Strength - The unsupervised user adaptation task setting is interesting, and the idea of using mutual information content seems reasonable. - Experimental evaluations have been conducted from multiple perspectives, demonstrating at a minimum the effectiveness of the proposed method. ### Weaknesses - While offline evaluation using existing data makes sense, it is not necessarily a sufficient evaluation of the actual use of adaptively changing interfaces with this approach. The cursor control task seems a bit easier than the "unknown" interface that this study assumes. Multiple user experiments on diverse interfaces with a similar or greater number of participants would increase the method's reliability. The author's demonstration is somewhat unreliable as experimental evidence. - As for the method's claim that it does not require prior knowledge about the desired task, it does not appear to be strictly substantiated, as there are no corresponding experiments. - Considering the purpose of the study, what should be shown is the advantage of the system compared to the user's solo adaptation to the system. However, the experiment mainly only compares the initial state and after adaptation, and there is no discussion of what happens after the participants use the baseline interface for an extended period. - There is an experimental discussion on interfaces where the proposed method cannot be successfully applied, which is a good point for the paper. - Although the study involved numerous user experiments, there was no specific mention of a process for obtaining consent from participants or for ethical review. <doc-sep>This paper presents a reinforcement learning algorithm that uses mutual information as a proxy reward for achieving understanding between a computer with a randomly perturbed UI and a human trying to accomplish a task using that UI, without knowing how it is perturbed. 
The only feedback (signal) to the algorithm is the entropy of the human's movements/inputs, where it is assumed that the human would be making more random/chaotic inputs in a sense proportional to how unexpectedly (unintuitively) the game’s state changed with respect to the action taken by the user. The main evaluation is with a space ship control game where the actual motion of the ship was perturbed/offset initially by some random theta degrees. The algorithm eventually learns to reduce theta either to zero or -180 degrees so that it exactly follows the human’s command. Only at these two stable points does the human stop acting so randomly in response to the state changes, since the state changes are, apparently, an acceptable reaction to the action (so further exploration by the human is not required). As the authors state: “the key idea in this paper is that, regardless of the task, when an interface is more intuitive, the user’s commands are less noisy.” They formalize this “noise” as “the mutual information between the user’s command signals and the induced state transitions in the environment.” Their work is in line with recent literature in user empowerment where the machine does not assume or infer the users’ goals but only uses cues that indicate the user’s satisfaction with the result as a reward upon which to learn in this human-in-the-loop system. The key strength of the paper is that it is interesting, and the idea that users will express frustration with a poorly performing interface by random frantic exploration of potential alternative commands that might work seems plausible and is supported by the authors' first study, which shows that their mutual information pseudo-reward is correlated with interfaces that adapt to meet the users' expectations (become more intuitive). The authors give reasonable proof of the fact that with adaptive interfaces, users do seem to exhibit the behavior of taking more noisy actions when the resulting effects in the environment (state changes) are not what would be intuitively expected. The authors do include a user study with a simple adaptive interface that shows that their mutual information pseudo-reward can drive convergence in this case. The authors also show that a knowledgeable user can drive adaptation using the pseudo-reward. As the authors state, there have been a number of attempts to find “natural” reward signals in the real world when interacting with humans, whether through NLP, BCI (EEG), facial expressions, or physiological signals. None of these are as easy to obtain as the “noise” in the input that these authors leverage, making this perhaps the most usable of all prior proposed methods in the field. The weakness of the paper is that the use case is not immediately compelling. The first contact scenario is a good framing for this, however unlikely, but in reality, this is a more theoretical method for machines to learn preferences from humans. The more compelling use case is probably in applying this as a feedback mechanism to search engines or to personalize the “fit” of something like exoskeletons or prosthetic limbs. The user studies are not exhaustive, but I would not expect that for a contribution that is fundamentally about the algorithm.
It is laudable that the authors disclosed that the singular subject in the feasibility study was the first author. While on the one hand it could be argued that having such an informed participant as an evaluator made the task “easier” for the machine (potentially the author knew to proportionally vary the randomness of the responses in a way that would facilitate convergence), on the other hand the result is not entirely to be dismissed, especially in considering a case where a person was trying to “fit” or “train” a machine intentionally, knowing how the machine learned through randomness. The authors show that this is a naturalistic response in general, so intensifying it intentionally would not be entirely out of line with user behaviour. Also, I believe the term “co-adaptive” in the title is a bit misleading, as you do mention that the algorithm has no ability to adapt the behavior of the human. I believe you are saying that the human naturally seems to adapt their behavior in response to the algorithm's “correctness” and that you are relying on the existence of this adaptation, otherwise your pseudo-reward would not work, but maybe clarifying this a bit would be good. This contribution is limited to the space of adaptive interfaces where the correct settings for the user’s task cannot be known in advance but where the parameters of the controlling algorithm have the ability to adapt themselves to a more optimal setting. The assumptions are reasonable but not exhaustively studied. The process of adaptation is slow compared to other methods, so a stronger motivating use case would have been better. The method could have been more exhaustively compared against other human-in-the-loop methods such as NLP or facial expression analysis, or simply giving the user an explicit “performance feedback” channel on the algorithm’s performance (e.g. 1 to 10 as a keyboard input), and evaluated with respect to speed of convergence and user satisfaction (e.g. it could take longer, yet users might prefer it as more natural).
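For concreteness, here is a minimal sketch (my own, not taken from the paper) of the core idea that an intuitive interface yields high mutual information between the user's commands and the induced state transitions. The simple histogram (plug-in) estimator and all variable names below are assumptions; the paper itself optimizes a variational lower bound rather than this toy estimator.

```python
# Hypothetical sketch: estimate I(command; state transition) from logged,
# discretized interaction data with a plug-in (histogram) estimator.
# This only illustrates why "less noisy commands given transitions" means
# higher mutual information; it is not the authors' estimator.
import numpy as np

def plugin_mutual_information(commands, transitions):
    """commands, transitions: 1-D integer arrays of discretized symbols."""
    commands = np.asarray(commands)
    transitions = np.asarray(transitions)
    joint = np.zeros((commands.max() + 1, transitions.max() + 1))
    for c, s in zip(commands, transitions):
        joint[c, s] += 1
    joint /= joint.sum()
    pc = joint.sum(axis=1, keepdims=True)   # marginal over commands
    ps = joint.sum(axis=0, keepdims=True)   # marginal over transitions
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (pc @ ps)[nz])).sum())

# Toy check: a perfectly intuitive interface (transition mirrors the command)
# yields high MI; a confusing interface (transition ignores the command) yields ~0.
rng = np.random.default_rng(0)
cmds = rng.integers(0, 4, size=10_000)
intuitive = cmds.copy()
confusing = rng.integers(0, 4, size=10_000)
print(plugin_mutual_information(cmds, intuitive))   # ~log(4) = 1.386 nats
print(plugin_mutual_information(cmds, confusing))   # close to 0
```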
The paper describes an approach to learning an adaptive user interface (i.e., mapping raw inputs to the agent's actions) in an unsupervised way via reinforcement learning. The goal is to learn interfaces that are intuitive for the user, with the supposition that the user's inputs become less noisy as the interface becomes more intuitive. To that end, the proposal is to use the mutual information between the raw input provided by the user and the resulting state transitions as a reward proxy. The approach is evaluated on a series of control and typing domains as well as a small-scale user study involving a cursor control task. The paper was reviewed by three researchers who read the author response and discussed the paper with the AC. The reviewers agree that the problem of adapting a user interface in an unsupervised way is interesting and the proposed use of mutual information for adaptation is sensible and interesting. The reviewers initially raised concerns about the absence of a compelling use case and the experimental evaluations, notably the lack of appropriate baselines (Reviewer eRaa) and inadequate experiments (Reviewers eRaa and phLr). The authors made a concerted effort to address most of the reviewers' concerns, which included experiments conducted on the cursor domain using the alternative method suggested by Reviewer eRaa. However, the authors did not address the experimentation issues raised by Reviewer phLr, who finds that the paper lacks experimental evidence for some of the claims being made. As it stands, the paper doesn't show that the interface that can be achieved with this approach is truly intuitive. Making such a claim requires comparative experiments with appropriate baseline interfaces and more detailed user analyses. As such a detailed set of user studies may be out of scope for a conference-length algorithms paper focused on the use of mutual information as a reward proxy for interface learning, the claims in the paper should be revisited.
This paper introduces a framework to treat Denoising Diffusion Probabilistic Models (DDPMs) as solving differential equations on manifolds. The goal is faster sampling without significant loss of quality. I thank the authors for this submission. I believe there is value in this work both from a theoretical and practical perspective. In general, I am willing to accept the paper. I have a few suggestions: 1. Please double-check the writing and notations. Some sentences are hard to understand and equations contain errors, e.g. Eq. 2, hat{alpha}_t, wrong index. 2. As DDPMs are relatively new, I was expecting a more elaborate introduction. 3. I was honestly quite disappointed with the presentation at times. Some equations are stated without any intuition. If you are short of space, I would rather move some of the experimental parts to a supplementary. 4. Sec. 4.3 and in general, I would also work with toy problems and easily visualizable data. This will provide additional insight for the reader. I am not very familiar with DDPMs but as far as I can tell, the numerical technique introduced makes sense and leads to improved results. I, therefore, recommend acceptance. <doc-sep>This article studies pseudo numerical methods that improve upon and accelerate existing numerical methods for Denoising Diffusion Probabilistic Models (DDPMs). The crux of this work is the observation that existing work on this topic does not take into consideration the structure of the high-density region of the data. The beginning of the article (Section 1 and Section 2) is well written and presents the motivation behind the introduction of these pseudo numerical methods in a very didactic way. However, Section 3 needs a lot of polishing. These are some comments and suggestions: 1. The contribution of this work is based on the assumption that $\sigma_t = 0$. The conditions required to satisfy this assumption deserve to be commented on. 2. The mention of the manifolds is very implicit and requires clarification: what is the definition of the manifold defined at the beginning of Section 3.2? 3. The paragraph at the beginning of Section 3.3, supposed to give the intuition behind the introduction of the transfer part defined in (11), is not clear and should be reformulated. For example, what do you mean by ‘We find that Equation (9) has the property that if ϵ is the precise noise in $x_{t}$, then the result of $x_{t-\delta}$ is also precise, no matter how big $\delta$ is’? 4. In Algorithm 2, PLMS and PRK were not defined. 5. It would be preferable to put the theoretical results of Section 3.6 in a clearly stated theorem. This would highlight the theoretical contributions of this work. The novelty of the methodology presented in this paper qualifies this work to be accepted to the conference, conditionally on improving the clarity of Section 3. <doc-sep>The paper proposes a new efficient method for denoising diffusion probabilistic models (DDPMs) (generative models that optimize for the closest solution on a manifold), based on the observation that this can be seen as solving a set of differential equations on a manifold. This allows efficient pseudo numerical methods to be applied, which have many advantageous properties over classical optimization methods, including fewer optimization steps and guaranteed manifold solutions due to separating the gradient part from the transfer part in the optimization. Results are shown on four datasets comparing to two reasonable but not sota baselines.
The paper introduces a class of pseudo numerical methods for DDPMs; it is based on previous work that already establishes the relationship between DDPMs and a certain class of differential equations on manifolds. The pseudo numerical methods separate the gradient and transfer steps of classical numerical algorithms, and for each part choose the best of both worlds. With this they are able to provide faster convergence and efficient update steps because gradients do not need to be recomputed for every step. While I would not guarantee that such methods do not exist already in classical optimization, I have not seen them in any related application. The paper is overall well written, and the contribution and main ideas are explained clearly. Starting at Section 3.3 (to 4) the derivations become slightly confusing, and it takes a lot of referencing back to find all the variable names again. A legend for the variables and a bit more redundant explanation would probably help here. The same goes for the derivations in the appendix, which I could not follow completely. Experiments are done on four datasets with different resolutions using pretrained models for the manifold description. The results show that the introduced pseudo numerical methods converge much faster and provide better results in fewer iterations than the previous DDIM method and classical numerical methods. The qualitative examples in the appendix look good, but some of them are too smooth to compete with general sota generators (I am not sure if the smoothness here is related to the pretrained models that were used?). Minor comments: - The authors claim that their implementation is in the supplementary. I was not able to find any supplementary (except the appendix) but I am honestly not sure if this is a failure of me using OpenReview... The implementation should definitely be published in the end though. - I think it is bad practice to move the related work section to the appendix. - The large figures in the appendix are very hard to understand because the subfigures are not separately titled. - There are some grammatical errors throughout the text which should be proofread again. The idea of separating the gradient and transfer parts is (to the best of my knowledge, I am not an expert though) novel, and I can see many applications besides the ones proposed in this paper. The shown results might not be exactly state-of-the-art in terms of generating images, but show clear advantages over traditional numerical methods. Therefore I recommend accept. <doc-sep>Highlighting the high computational complexity for sampling from Denoising Diffusion Probabilistic Models (DDPMs) (e.g. wrt GANs), the authors build on the connection between diffusion processes and ODEs to propose efficient (pseudo-)numerical methods so as to sample data from the data manifold. The main idea is to combine the discrete update proposed in DDIMs with a fourth-order gradient estimation given by the Runge-Kutta or linear multi-step methods. The motivation is that such a gradient estimator should yield trajectories that stay closer to the data manifold. They empirically assess their method(s) on CIFAR10 and CelebA in terms of sample quality (measured by FID), and show that they get a ~x20 speed-up wrt DDIMs or a significant improvement in FID with the same number of steps. ## Clarity. Overall, I believe that the clarity of the submission should be enhanced. First, the introduction can be improved to better stress the motivation of the submission.
If I understand correctly, this work builds on Probability Flows (Song et al., 2020), which leverage the existence of a deterministic process whose trajectories have the same densities as the original diffusion process. This deterministic process satisfies an ODE that depends on the original drift and diffusion terms but also on the score function. Consequently, classical numerical ODE solvers (e.g. RK) can be leveraged to sample data from the probabilistic model. The authors then state that results obtained via this approach are subpar and suggest that this is due to the solvers' tendency to sample data far from the data manifold. 1/ Why is this true? It would be necessary to give some intuition and to refer to a theoretical analysis. 2/ What is the precise problem? Is the issue that the model oversamples the data distribution's tail (i.e. fails to fit the distribution properly) or that the numerical methods fail (in what sense?) in these areas as the score is undefined (or hard to estimate)? SMLD (Song and Ermon, 2020)'s motivation to inject noise is built on the latter. The authors then provide "pseudo numerical methods for diffusion models (PNDMs)", which produce trajectories that "iterate data on the high-density region of the data", hence tackling the aforementioned issue. Also, perhaps this is a question of taste, but I believe that the Background Section (and most of the paper)'s clarity could be greatly improved by taking the perspective of Song et al. (2020b), that is, a continuous diffusion process (forward / perturbing data) and the associated reverse diffusion process (generating data). Additionally, Section 3.1 is challenging to follow. It would perhaps be better to include fewer equations but spend more time explaining why and how they matter. ## Strengths First, the proposed method is conceptually simple and showcases previously proposed methods (DDIMs) as a special case where the update is a (first-order) Euler step. Then, the submission shows strong empirical results on common datasets like CelebA, with faster convergence or significantly better FID (for the same number of steps), yielding SOTA. The authors report a ~x20 speed-up wrt DDIM, but this is in number of steps, and fourth-order methods trade off convergence speed for computational cost. Figure 3 suggests that the speed-up is around x15 for CIFAR, which is still significant (although it would be ideal to report the runtime directly in Table 2). Finally, Figure 4 is quite nice as it empirically illustrates the proposed method's ability to sample trajectories that lie closer to the data manifold, which was the original motivation. ## Weaknesses I think that the main weakness is the writing. I also believe that Section 4.3 would deserve a deeper empirical analysis, as it directly investigates the core motivation of this submission. ## Relation to prior work - Perhaps worth citing Sohl-Dickstein et al., 2015 in Section 1? - Citations for the Runge-Kutta and Linear Multi-Step methods appear to be missing. - It is not entirely clear to me what method is meant by Probability Flows (Song et al., 2020b): is it with the Variance Exploding SDE (SMLD) or the Variance Preserving SDE (DDPM) (cf. Table 1 from that paper)? ## Additional feedback. - "However, classical numerical methods (Sauer, 2017) have problems when they are applied to DDPMs." -> What class of numerical methods? The Euler and RK methods are ODE solvers, although extensions exist for SDEs. It is unclear how they can be applied to DDPMs.
Or is it implicitly implied that they are used for the corresponding deterministic process (probability flow)? - "to iterate our data on the high-density region" -> would suggest to reformulate - Eq 3: $\epsilon_\theta$ is not defined. Would advise doing so for the paper to be self-contained, especially as $\epsilon_\theta$ is used throughout the entire paper. - Table 2: Error bars / confidence intervals are missing. Bold is not defined. As methods have different computational cost per step, it would be very useful to additionally show this metric. - Figure 3: Time has no unit. - Figure 4: Axis names are missing. I personally find this submission interesting and significant, yet believe that clarity needs to be improved to enable readers to get the most out of the paper's insights.
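To make the "separate gradient and transfer parts" idea concrete for other readers, here is a rough sketch of a pseudo linear multi-step (PLMS) step as I understand it from the submission: a fourth-order, Adams-Bashforth-style combination of past noise predictions is plugged into the deterministic DDIM update. The coefficients, the transfer function, and the warm-up handling below reflect my reading, not the authors' released code, and may not match their final formulation.

```python
# Rough sketch (my reading, not the authors' implementation) of a PLMS step:
# combine the last four noise predictions with Adams-Bashforth-like
# coefficients (gradient part), then apply the deterministic DDIM update
# with sigma = 0 (transfer part).
import torch

def ddim_transfer(x_t, eps, alpha_t, alpha_prev):
    """Deterministic DDIM step (the 'transfer' part)."""
    x0_pred = (x_t - (1 - alpha_t).sqrt() * eps) / alpha_t.sqrt()
    return alpha_prev.sqrt() * x0_pred + (1 - alpha_prev).sqrt() * eps

@torch.no_grad()
def plms_step(model, x_t, t, t_prev, alphas_cumprod, eps_buffer):
    """eps_buffer stores noise predictions from previous sampling steps."""
    eps = model(x_t, t)
    if len(eps_buffer) < 3:
        eps_prime = eps  # warm-up; the paper uses pseudo Runge-Kutta steps here
    else:
        e0, e1, e2 = eps_buffer[-1], eps_buffer[-2], eps_buffer[-3]
        eps_prime = (55 * eps - 59 * e0 + 37 * e1 - 9 * e2) / 24
    eps_buffer.append(eps)
    return ddim_transfer(x_t, eps_prime, alphas_cumprod[t], alphas_cumprod[t_prev])
```

The appeal of this split is that only one network evaluation per step is needed once the buffer is warm, unlike classical fourth-order Runge-Kutta, which requires several evaluations per step.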
This paper presents a new DDPM model based on solving differential equations on a manifold. The resulting numerics appear to be favorable, with faster performance than past models. Most of the reviews thought the main result was of interest and were impressed with the performance. Reviewer c9bY points out some challenging issues and analytical questions that remain unanswered in the text; they also have some simpler textual revisions that seem less important. In general, this paper has the misfortune of receiving reviews whose confidence appears to be low. While partially this is a byproduct of the noisy machine learning review system, the difficulty of the text itself is substantial and made the paper less than approachable; the authors are encouraged to continue to revise their text based on feedback from as many readers as possible. That said, the authors were quite responsive to reviewer comments during the rebuttal phase, which significantly improved the text. Overall this is a borderline case, and the AC also had some difficulty following details of this technically dense paper. Given the positive *technical* assessments of the work and at least one reviewer defending the paper's clarity, the AC is willing to give this paper the benefit of the doubt.
+ The proposed method has been evaluated on several benchmarks. + The idea is simple and the proposed method achieves a performance improvement on several benchmarks. - About the motivation: the key motivation is unclear. As described in the introduction and method sections, the key motivations are “existing mix-based methods cannot be combined with each other” and “most existing mix-based methods cannot effectively combine more than 3 images”. It is unclear to me why we need to combine the existing mix-based methods. It is necessary to give a detailed theoretical discussion of the motivations. The authors try to utilize the experimental results about “Mixup-> CutMix” and “Mixup” to show the necessity of the motivation. To my knowledge, these results are weak because Mixup should perform better than “Mixup->CutMix”: Mixup->CutMix gives a noisy CutMix label for a Mixup image input. My major concerns are about the key motivations and the experiments of the paper. - As for the proposed StackMix method, it has the highest performance when $k$ is equal to 2. The results make me question the second motivation. Do we need to combine more than 2 images for data augmentation? Please explain this phenomenon. - About the paper writing, a lot of grammatical errors exist in the main manuscript and make the paper hard to follow. For example, page 1 “There is work to reduce the cost of the search” -> “There are some works to reduce the cost of the search”, page 2 “Follow up work…with correspondingly weighted labels”-> “Follow-up works… label the image with correspondingly weighted labels”, page 3 “setting -including” -> “setting, including” - In the Related Work section, the difference between the proposed StackMix and previous mix-based augmentation methods should be discussed in detail. - The size of the stacked image input is related to $k$. The authors claim that the proposed method does not change the general network architecture. To my knowledge, the size of the input can influence the structure of the network. - About the experiments, the authors list all of the experiment settings in Table 2. The authors have conducted experiments in the supervised learning setting for several datasets and networks, but only on the CIFAR dataset for other settings. Can the authors explain this? <doc-sep>The proposed method is very simple and shows performance improvements across the board in the experiments when combined with other mix-up based techniques. The results show that it is competitive with CutMix on its own and boosts performance further when combined with it (or Mixup). The compatibility with data augmentation is also evaluated. Results indicate an improvement with corrupted data and in the semi-supervised learning case as well. There is no theoretical explanation of why the proposed method helps to improve performance. There is a lack of an attempt to motivate the proposed approach in general. Inference speed is reduced by applying the new technique. It would be very useful to provide some form of justification for the proposed approach that is not just based on improved accuracy on the test set. I can see that it may be difficult to provide theory justifying the proposed method, but perhaps some intuitive justification could be provided. <doc-sep>1. The method is simple and easy to implement. 2. It can be used with other existing augmentation methods. 3. The authors validated the method on multiple datasets, showing performance gain. 4.
To provide a fair comparison, the authors accounted for a different number of hyperparameters, epochs, etc., showing that the effect of StackMix is nontrivial and potentially cannot be explained by computation or model size differences. Weakness and suggestions: 1. “In Stack-Mix, each input is presented as a concatenation of two images, and the label is the mean of the two one-hot labels.” Although the figure explains the axis of concatenation, it would be better if you mentioned the axis of concatenation in the text as well. 2. You should evaluate performance for unsupervised/SSL methods, which rely heavily on data augmentation techniques. 3. Section 3.6 is not very clear. Some of the experiments are not well explained, and the intuition is not clear. In section 3.6, “Therefore, we designed several experiments,” – the sentence seems incomplete (designed experiments to/for?). Questions: 1. In section 3.5, “For MixUp and CutMix, k represents the number of images combined,” how do you combine multiple images (k>2) for Mixup? 2. In section 3.6, “On SRN18-CIFAR10, we only swept over the top image in StackMix for the first convolutional layer”, could you explain more about the intuition for this step? 3. “In another case, the standard one-hot setup is given two forward passes for inference at test time.” Why are two forward passes given, and how?
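For readers unfamiliar with the method, here is a minimal sketch of the k = 2 StackMix augmentation as described in the reviews above: stack two images along one spatial axis and average their one-hot labels. The concatenation axis and the helper names are my own assumptions, not the authors' code.

```python
# Minimal sketch of StackMix with k = 2 (my own assumptions about axis/names):
# concatenate pairs of images along the height axis and average their labels.
import torch
import torch.nn.functional as F

def stackmix_batch(images, labels, num_classes, dim=-2):
    """images: (B, C, H, W); labels: (B,) integer class ids."""
    perm = torch.randperm(images.size(0))
    stacked = torch.cat([images, images[perm]], dim=dim)   # inputs become 2H tall
    one_hot = F.one_hot(labels, num_classes).float()
    mixed_labels = 0.5 * (one_hot + one_hot[perm])          # mean of the one-hot labels
    return stacked, mixed_labels

# Usage: train with soft-label cross-entropy on the stacked inputs.
x = torch.randn(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))
x_s, y_s = stackmix_batch(x, y, num_classes=10)   # x_s has shape (8, 3, 64, 32)
```

Note that, unlike Mixup or CutMix, the input resolution changes (doubled along one axis), which is consistent with the reviewers' concern that the input size can influence the network structure and inference cost.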
Meta Review: AC read the paper, reviews, and responses. AC appreciates the simple and effective StackMix method that surpasses all existing baselines. Though the average rating is below the acceptance bar, AC still recommends acceptance due to the comprehensive experimental results that may shed light on future research in the community. However, AC suggests that the authors do follow the negative comments, especially from Reviewer Z7zb, to improve the quality of the paper for publication.
This paper describes a new benchmark for comparing and contrasting deep learning knowledge tracing models. This benchmark is motivated by the observation that the research community is busy creating a variety of models based on different assumptions and approaches, some major and some minor, and that the results on standard data sets vary in ways that make judging new models difficult. The paper posits that data cleansing and other pipeline processes may explain the variability rather than factors intrinsic to the model being proposed. The chief contribution is a toolkit called pyKT, which include python routines for standardizing data cleansing and data set preparation, as well as attendant recommended procedures. The benchmark was utilized to compare 5 different flavors of DLKT models across 7 different publicly available datasets. The evaluation tasks included predicting the student’s response on the last question based on historical data and predicting multiple student responses. In all cases, the student response is a binary variable of either correct or incorrect. The paper concludes with a series of observations on the performance of the selected models on these data sets using the proposed benchmark toolkit: the chief finding is that the original DKT model still performs the best. Whatever one may say about the particulars of this benchmark, the team is tackling a very important challenge within the research community: how can we ensure that we are comparing apples-to-apples and that the results being reported are reproducible and meaningful. The literature is filled with papers where someone tweaks a current model in some small way and reports a marginal gain in performance that is elusive to replicate. The strength of this paper is the thoroughness of this first step towards creating a community benchmark. The team did a terrific job reviewing the plethora of models out there and picking core, representative examples along with selecting appropriate and available data sets. The paper is framed around two well-chosen research questions that motivate the concluding observations. The limitation of this work applies to all knowledge tracing approaches that reduce student learning assessment down to a binary value of correct and incorrect. We know that these sorts of observations usually measure recall or recognition, rather than deep comprehension by the students themselves. Contemporary approaches towards assessment will need to handle longer and more nuanced language and require much better NLP. But, all of the datasets that are being used are based on this binary response assumption. The paper makes a big deal about label leakage but it could have been explained much more clearly! <doc-sep>This paper proposes a comprehensive python-based benchmark platform, PYKT, to guarantee valid comparisons across DLKT methods via thorough evaluations. The PYKT library consists of a standardized set of integrated data preprocessing procedures on 7 popular datasets across different domains and 10 frequently compared DLKT model implementations for transparent experiments. Experimental results on the fine-grained and rigorous empirical KT studies yield a set of observations and suggestions for effective DLKT. The proposed toolkit is open source. Overall, I believe this paper has made a good contribution to the knowledge tracing community, although I am not an expert in this area, I find this paper easy to follow. 1. 
The proposed benchmark platform pyKT is novel and can guarantee valid comparisons across DLKT methods via thorough evaluations. 2. Experimental results on the fine-grained and rigorous empirical KT studies yield a set of observations and suggestions for effective DLKT. 3. This paper provides comprehensive experiments with insightful analysis. 4. I read some of the code in the toolkit and found it well-organized. 1. I think more background about knowledge tracing baselines would be helpful. <doc-sep>This work focuses on the knowledge tracing problem. The authors point out that the preprocessing procedures in existing works are often private and the evaluation protocols are different and far from real-world scenarios. To address these issues, this paper presents a comprehensive Python toolkit named pyKT. The toolkit provides standardized dataset preprocessing procedures and several popular DLKT model implementations. 1. This paper presents a problem statement section, which is reader-friendly. 2. This work is overall well-motivated, and discusses its application in real-world scenarios. 3. The authors present detailed analyses and insights into the experimental results. 1. Potential limitations are not discussed in this paper. 2. The authors should provide more instructions on how to contribute to pyKT. <doc-sep>The authors state that data preprocessing procedures in existing DLKT approaches are often private or custom and that they differ in terms of the evaluation protocol. To address these issues and make valid comparisons across DLKT methods happen, they introduced the pyKT library. It consists of standardized preprocessing procedures on 7 popular datasets across different domains and 10 state-of-the-art model implementations. The authors provided 5 observations and suggestions from their results. One of these observations suggests that a wrong evaluation setting may cause label leakage, which generally leads to performance inflation. The problem and the contribution are important and explained very clearly. The authors propose a platform which helps different methods to be compared and evaluated in a more standardized way. They worked on 7 popular datasets in KT and 10 state-of-the-art DLKT models. Their open-source library, which includes some DLKT implementations and evaluation protocols, can be valuable for future research. Some minor comments: The authors discussed evaluation protocols, comparison of different models, and so on, but I do not see a very concrete discussion regarding scalability, which would be nice to include. Table 1 basically repeats some of the statistics in the text, which could be avoided.
- The code has online documentation, but I think some work remains to be done on this documentation to make the software easy to use. For instance, there is no *quickstart* available in English and many of the methods are not documented. I am afraid this is an obstacle to making the code easily usable. Maybe the authors can also consider adding some notebooks to illustrate basic usage of their software. - The authors state that they are open to new contributors, so I suggest they polish their documentation and include some guidelines for newcomers on how to easily integrate their models and contribute. - The repository does not include any unit-testing tools enabling continuous integration. <doc-sep>This paper conducts a comprehensive comparison across a representative sample of deep learning-based knowledge tracing (DLKT) models, producing several insightful (if worrying!) findings, including that recently proposed methods hardly outperform one of the first DLKT methods published seven years ago. I think this on its own is a major contribution to this area of research, and so I am recommending acceptance. - a comprehensive comparison across a representative sample of deep learning-based knowledge tracing (DLKT) models - this comparison yields several worrying findings, such as the existence of data leakage artificially increasing performance metrics - publicly release a benchmark to standardize evaluation for future work in DLKT methods At least for English, the GitHub repository does not appear to have any instructions beyond installation (see "Documentation" below). Docking a point for this, but would be happy to raise my score if the authors agree to add these instructions.
This work develops a python-based benchmark platform (PYKT) that implements several Deep Learning based Knowledge Tracing models. The research is well motivated and the paper is well written. The experimental section is also thorough and provides procedures for handling several popular datasets across different domains. The reviewers raised some minor concerns, and the authors are requested to address them in their final submission.
This paper studies the multi-source domain adaptation (MSDA) problem. The authors argue that the existing MSDA solutions (1) do not explicitly consider the distribution conditioned on the labels of each domain, (2) rely on limited feature extraction based on one extractor, and (3) do not explore target data well due to the absence of labels. Correspondingly, Multi-EPL is proposed based on moment matching. Although the design of the proposed method seems reasonable, its novelty is marginal. Additionally, the evaluation in the experiments is somewhat unfair. I vote for rejection. The paper is well organized. The technical details are clearly presented. A few comments are summarized below. - In Eq. 1, the authors minimize the discrepancy between every two distinct domains. It is unclear to me why they do not minimize only over the pairs of each source domain and the target domain (N pairs). The goal is to align the distributions between source and target. Please clarify the motivation for aligning two source domains with each other. Besides, it would be good to have one baseline considering label-wise moment matching losses only between the N source-target pairs. - On page 5, the motivation for diversifying features from different extractors is unclear to me. Please clarify the benefit of classifying features according to extractor ID labels. Moreover, the ablation study presented in section 5.3 does not show a clear improvement from introducing the diversifying loss. I would encourage the authors to design another analytical experiment to show its effectiveness. - The performance improvement compared to the state of the art is limited. Specifically, for the Digit-Five dataset, a missing recent work [ref-1] reports an average performance of 91.8. To show consistent performance improvement over this strong baseline, I’d encourage the authors to cite and compare with it under the same setting. [ref-1] Hang Wang et al., Learning to Combine: Knowledge Aggregation for Multi-Source Domain Adaptation, ECCV 2020 - Besides [ref-1], several other recent MSDA papers are missing, including but not limited to [ref-2] Chuang Lin et al., Multi-source Domain Adaptation for Visual Sentiment Classification, https://arxiv.org/abs/2001.03886 [ref-3] Haotian Wang et al., Tmda: Task-specific multi-source domain adaptation via clustering embedded adversarial training. ICDM 2019. - Another minor point: a typo on page 7, “threshoold” -> “threshold”. Updates: Thanks for the authors' response. Some of my queries (1st and 3rd) were clarified. However, unfortunately, I still think more needs to be done to show the superiority of the results. I retain my original decision.<doc-sep>- Summary and contributions - In this work, the authors proposed an algorithm for multi-source domain adaptation. While the results seem promising, the technical contribution is incremental and limited. Meanwhile, more empirical results are needed to validate the effectiveness of the framework. - Strengths: - The paper is well written and easy to follow. - The problem investigated in this paper, i.e., multi-source domain adaptation, is of significance. - Weaknesses: - The technical contribution of this work is limited. Label-wise moment matching or adversarial training (e.g., [1]) has been a common practice in single-source domain adaptation. The authors simply apply this idea of multi-mode aware domain adaptation to multi-source domain adaptation.
Moreover, comparing the first line, i.e., MULTI-0, in Table 2 with M3SDA in Table 1(a), we find that this label-wise moment matching makes almost no contribution. - The empirical results, especially the ablation studies, do not hold up. Pseudo-labeling the unlabeled data in the target domain and using multiple feature extractors can also be easily used by M3SDA and DCTN. Since a definite performance boost would be expected, I would ask the authors to provide such results. Moreover, pseudo-labeling and ensemble learning are not novel; they are widely adopted techniques and can be easily incorporated into any algorithm, e.g., M3SDA, for performance improvement. - Even on the Office-Caltech and Amazon Reviews datasets, the performance improvement of the proposed algorithm is minor. [1] Pei, Z., Cao, Z., Long, M., & Wang, J. (2018). Multi-adversarial domain adaptation. AAAI 2018.<doc-sep>--Paper summary-- The authors propose a novel method for multi-source domain adaptation (MSDA). For effective adaptation, the proposed method adopts three techniques: (1) label-wise moment matching, (2) pseudo-labeling target data, and (3) ensembling multiple feature extractors. Experimental results show that the proposed method outperforms several state-of-the-art methods in both image and text domains. --Review summary-- Although the design of the proposed method seems reasonable, its novelty is marginal. Additionally, the evaluation in the experiments is somewhat unfair. I vote for rejection. --Details-- Strength - This paper is well-organized and is easy to follow. I believe that the proposed method can be easily implemented without any obstacles. - Good performance in both image and text domains is appealing. Such results should be highly appreciated, especially in the machine learning community. Weakness and concerns - Marginal novelty. The three techniques that the proposed method adopts are all similar to those already proposed in the literature. I could not find any novel and specific design or strategy for combining them that is specialized for MSDA. - Class-wise distributional alignment is a common technique in recent domain adaptation methods, e.g. [R1] and [R2]. [R1] "A DIRT-T Approach to Unsupervised Domain Adaptation," ICLR 2018. [R2] "Unsupervised Domain Adaptation via Regularized Conditional Alignment," ICCV 2019. - Pseudo-labeling is also a common technique in recent domain adaptation methods, e.g. [R1] and [R3]. [R3] "Asymmetric Tri-training for Unsupervised Domain Adaptation," ICML 2017. - Using multiple feature representations is not so common but is presented in [R4] and [R5]. [R4] "Domain Adaptation with Ensemble of Feature Groups," IJCAI 2011. [R5] "Domain Separation Networks," NeurIPS 2016. - The design of the feature diversifying loss is not reasonable. It can be minimized by just making the feature representations easy to discriminate by their extractors, which does not necessarily increase the diversity of the representations. For example, given two extractors that share the same parameters, adding a large offset to outputs from one extractor leads to high performance of the extractor classifier but does not increase diversity of the feature representations. - The experimental setting is somewhat unfair. Since the proposed method utilizes n feature extractors, the model complexity in the proposed method should be n times larger than that in existing methods. <doc-sep>In this paper, the authors propose MULTI-EPL for multi-source domain adaptation.
The key idea is twofold: (1) aligning label-wise moments, and (2) ensembling multiple feature extractors. Experimental studies on 3 datasets are done to verify the proposed MULTI-EPL. Overall, the paper is well-written. The technical approach is simple and sound. My major concern is about the technical significance of the method. Here are the detailed comments: (1) One motivation of the paper is that current methods fail to consider the shifts among sources. However, there are some multiple-source transfer methods explicitly modelling the inter-domain similarities, e.g., [ref1]. The paper also misses some important multiple-source references. Please refer to the survey [ref2] for different types of multiple-source transfer methods. It would be better to have a comprehensive discussion of the related works. (2) The proposed label-wise moment matching is not new in transfer learning. Early subspace-based work, e.g., JDA [ref3], and the latest semantic deep learning based transfer methods, e.g., [ref4], share a similar idea. (3) The threshold \tau is used to obtain good target labels. On the one hand, it is unclear how this parameter should be set for different transfer tasks. On the other hand, a high confidence score does not imply a correct target label prediction. Error reinforcement may happen even with a well-tuned \tau. (4) The use of ensembled feature extractors essentially trades higher complexity for prediction accuracy. Scalability could be an issue for the proposed method, especially considering that multiple sources may have extremely large data sizes. (5) Sensitivity analyses on the balancing parameters \alpha and \beta should be done. (6) Why is data augmentation done on the Office-Caltech10 dataset? There are many datasets containing multiple domains with sufficient data, e.g., Office-Home. Please use these datasets instead of constructing an `artificial’ real-world dataset. (7) The baseline methods can be further improved (please consider [ref2] for more baselines). Regarding the current results, MULTI-EPL shows larger improvements over M3SDA on the 2nd, 3rd, and 4th tasks in the Digit-Five dataset, while it only achieves marginal improvements on other tasks, e.g., tasks in the Office-Caltech10 dataset. More analyses on the performance differences across tasks should be provided. (8) Based on the ablation study, MULTI-EPL-R achieves results comparable to MULTI-EPL, which indicates that the extractor classifier and feature diversifying loss have less importance in the overall objective. [ref1] Source-target similarity modelings for multi-source transfer gaussian process regression [ref2] Multi-source Domain Adaptation in the Deep Learning Era: A Systematic Survey [ref3] Transfer Feature Learning with Joint Distribution Adaptation [ref4] Deep Transfer Learning with Joint Adaptation Networks Update: Thanks for the authors' response. However, I am not convinced on several points, e.g., (3) - (7). Considering the other reviewers' comments, I think the paper needs to be further improved. Thus, I will keep my score.
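To ground the discussion of label-wise (class-conditional) moment matching above, here is an illustrative first-moment-only sketch of the loss over all domain pairs. This is my own simplification of the idea behind Eq. 1, not the authors' implementation: MULTI-EPL additionally matches higher moments and uses pseudo-labels for the target domain.

```python
# Illustrative sketch (not the authors' code) of label-wise first-moment
# matching across all domain pairs. Target-domain "labels" would be
# pseudo-labels above a confidence threshold in MULTI-EPL.
import torch

def labelwise_moment_loss(features, labels, num_classes):
    """features: list of (N_i, D) tensors, one per domain; labels: list of (N_i,) int tensors."""
    loss, pairs = features[0].new_zeros(()), 0
    for c in range(num_classes):
        means = []
        for f, y in zip(features, labels):
            mask = y == c
            if mask.any():
                means.append(f[mask].mean(dim=0))   # class-conditional first moment
        for i in range(len(means)):
            for j in range(i + 1, len(means)):      # every pair of distinct domains
                loss = loss + (means[i] - means[j]).pow(2).sum()
                pairs += 1
    return loss / max(pairs, 1)
```

Restricting the inner double loop to source–target pairs only would give the N-pair baseline that the first reviewer asks for.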
In this paper, the authors proposed a solution to the problem of multi-source domain adaptation. All the reviewers have two concerns: 1) the technical contribution/novelty is limited, and 2) the experimental results are not convincing. Therefore, this paper does not meet the standard of being published in ICLR. The authors are encouraged to improve this work by addressing the issues raised by the reviewers.
This paper looks at LSTMs with the intention of understanding their functional connectivity. I am not sure exactly what the relationship between the brain and LSTMs is being assumed or proposed herein — however I understand the need to understand complex neural networks regardless of their relationship to biological systems. I would have liked to have a discussion with respect to what the hierarchical organisation is due to. Is this merely a repercussion of the connectivity, for example? What do the authors think? In terms of work that looks at ablation (i.e., damage), it might be useful to bear in mind limitations of such work if various (seemingly, perhaps) extraneous factors are not taken into account, see: https://doi.org/10.1007/s42113-020-00081-z I think this paper can be polished to the level of a solidly good paper if the authors can sketch out a bit more their rationale and syllogisms with respect to my above questions. Minor: * Figures are very hard to read, is it possible to redesign them slightly to make the text bigger? * In LaTeX to open double quotes you need to use two backticks. Also the \\cite and \\citep commands should be used appropriately in terms of places where \\citep is needed as well as use of optional arguments to avoid double parentheses. <doc-sep>This paper applies tools from neuroscience to understand how language models integrate across time. The basic approach is to present a phrase, preceded by two different context phrases: one that is natural (i.e. the phrase that actually preceded it in the corpus) and one that is randomly selected. The authors then measure how long it takes for the unit activations to become similar for the two different contexts, which provides a measure for how long the context impacts the representation. They find that (1) timescales increase at later layers of the language model (2) that only a small fraction of units exhibit long timescales (3) that long/medium-timescale units appear to come in two forms which they try and characterize using graph-style analyses. -- Pros: How language models integrate across time is clearly important, and this paper describes interesting first steps in characterizing the analysis of time using relevant tools from the neuroscience literature. The method presented is simple and broadly applicable. The graph-style results seem intriguing if a little hard to make sense of. I also think that the sparsity of the long-timescale units is cool and interesting. -- Limitations and questions: 1. It’s not clear to me if the notion of time is a meaningful one in a language model. For example, the duration of contextual effects on a unit that codes syntactic number will presumably be highly variable and depend upon the details of the particular sentence being encoded. Thus a natural question is how variable are these timescales from moment-to-moment? What’s being plotted is the average across a bunch of sentences, segmented at a particular moment (a conjunction). How robust are these results if one examines a different point in a sentence? Are the timescales of some units more variable than others? -- Update: the authors have repeated their analysis for a different sentence point (after the 10th word) and report similar results. This analysis is helpful, though of course the 10th word is not a very principled break point, and there presumably is a lot of variation in timescales that are being averaged across. I continue to wonder how meaningful the notion of an absolute timescale is. -- 2. 
None of the steps in the graph analyses seemed particularly natural or well-motivated to me. Why were the graph edges thresholded at z>5 and why was k-core analysis performed? I find it hard to make sense of what this analysis tells us about how language information is processed. Is there some reason why medium timescale “controller” units and long-timescale “integrator” units should help with language processing? If these results are purely exploratory and lack a clear interpretation, then perhaps the authors could help the reader by explaining the thought process behind the exploration. Perhaps starting with the MDS plot would be useful rather than the k-core analysis, because the MDS plot clearly shows some interesting structure. -- The authors have motivated some of their analyses by discussing brain research reporting that longer-timescale regions are more densely connected. Of course, the relationship between connectivity between large-scale brain regions and the units in a LSTM remains highly speculative. But having some motivation is helpful. -- 3. It would be interesting to know how dependent these findings are on the model’s architecture. Would similar results be found for a Transformer or a simpler GRU-style RNN? -- The authors have attempted to address this point, but with limited time were not able to train a network to a high level of performance. -- -- Minor points: In Figure 4, it would be helpful if the absolute timescale was labeled in all plots rather than the rank of the unit or the “normalized timescale”. The absolute timescale seems much more meaningful to me (and the units can of course still be ranked, just the axis labels changed or augmented). The legend for Figure 4c is incorrect. <doc-sep>This paper explores the application of innovative methods to track the flow of linguistic information in LSTM language models. In particular, the overarching question is how contextual information might be encoded in the network at the level of single units, and how context disruption might alter the LSTM dynamics and thus impact its predictive ability. The paper is clear and it tackles an interesting question. The approach is well motivated, and the authors give a brief survey of the most recent applications of this kind of methodology in linguistics and cognitive neuroscience studies. The methodology is generally appropriate, though some details and parameters (e.g., numerical thresholds) seem to be chosen arbitrarily. Also, the analysis could be improved by applying statistical testing in order to better quantify the strength of the observed effects. Overall, I think this is a nice paper, though it might be especially relevant to the linguistics community rather than to the ICLR community. Moreover, I think that further analyses are required in order to better clarify some important aspects. In particular, I think that ablation studies should be performed in order to better identify the functional role of the “controller” and “integrator” units, whose actual functional role remains a bit speculative (and mostly based on structural / connectivity information). It would also strengthen the paper to have some more controlled simulations, where the contextual information is defined according to specific linguistic constraints, in order to better characterize what the target units are actually encoding. Indeed, as also noted by the authors almost “all the long timescale units are of unknown function”. 
Finally, I think that it would be important to establish whether these findings are generally applicable to LSTM models, regardless of the specific architecture under investigation (e.g., What happens if we force the LSTM to rely on fewer units? Does the hierarchical organization of the context improve by adding more layers?). Other comments: - Why did the authors choose to test the model on a different corpus (Anna Karenina novel) rather than considering a test set from the same corpus from which the training set was derived? The Tolstoy book might have a quite different linguistic structure from that of the corpora used to train the LSTMs. - It might be informative to also include a third condition in-between “Intact” and “Random” context, where the same context words are maintained with scrambled order. This would allow to better understand the role of individual words in shaping context representation and activating the LSTM units. - In Fig. 1D, it is interesting to note that the Unit 823 (green line) actually exhibits a sharp increase in difference after the shared segment starts. Do the authors have a possible explanation for this kind of phenomenon? Was it observed systematically in other units? - In relation to the results shown in Fig. 3A, I did not understand how the thresholds and parameters for the k-core analysis were chosen. - Pg. 3: there is a typo regarding the size of the output layer (5,0000) - In Fig. A1, error bars would help in better understanding the actual difference between the curves. - In order to improve reproducibility, it would be very helpful to share the source code used for these analyses. <doc-sep>_**Update after author response**_: I think this is a very promising paper, and I am really excited about seeing techniques from neuroscience employed to answer questions about neural network models. The authors have further conducted several additional experiments after reviewer comments, which I appreciate. However, my most fundamental concern -- the mismatch between the method and the way that it is validated -- unfortunately still stands, which is why I would encourage the authors to further pursue this line of work, but recommend to reject it for ICLR. **Summary** This paper proposes to apply time-scale methods from neuroscience to investigate the timescale organisation in neural language models. More specifically, the authors test the timescale of individual units in a word- and character-level LSTM by comparing the units' activation values on the same sentence, but with different contexts. Using this method, the authors first show that the higher layers on average have longer timescales. Then, for all units, they fit a logistic function to the "recovery" curves and use the half-times of these curves as an indication of the timescale of these units. They test the syntax unit and two long-distance units found by Lakretz et al and show that the number units have similar timescales, while the syntax unit has a longer timescale. Lastly, the authors analyse the connectivity between the longer timescale units and find that the units with longer processing timescales make a larger number of strong projections. Within these units, the authors identify two sets of units in the word-level LSTM: "controller units", that play a role in how the connectivity of the network is updated, and "integrator units", that instead integrate information.
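Since the summary above compresses the whole procedure into a few sentences, here is a small sketch of how the per-unit timescale could be computed: average each unit's activation difference between intact and random context over sentences, normalise it, fit a logistic recovery curve, and read off its half-time. The exact normalisation, fitting routine, and parameter names below are my own assumptions rather than the paper's implementation.

```python
# Sketch (assumptions mine) of a per-unit timescale estimate: compare a unit's
# activations on the shared segment under intact vs. random context, fit a
# logistic recovery curve, and take its half-time as the timescale.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, a, k, t_half):
    return a / (1.0 + np.exp(k * (t - t_half)))

def unit_timescale(act_intact, act_random):
    """act_*: (n_sentences, n_timesteps) activations of one unit on the shared segment."""
    diff = np.abs(act_intact - act_random).mean(axis=0)   # mean |difference| per timestep
    diff = diff / diff[0]                                  # normalise to the first shared word
    t = np.arange(len(diff), dtype=float)
    (a, k, t_half), _ = curve_fit(logistic, t, diff,
                                  p0=[1.0, 1.0, len(diff) / 2], maxfev=10_000)
    return t_half   # larger half-time = longer context timescale
```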
**Strong points** - Neuroscience has long been asking questions about the brain that are very similar to the questions we now ask about neural networks; cross-pollination between these fields is extremely important, and this paper contributes to this - Aside from the main technique, the paper introduces some interesting and useful methods, such as projectivity analysis and k-core analysis. I think these methods can be useful for other researchers as well - Time scale analysis of LSTMs is a very relevant and interesting topic, that deserves more attention than it is currently getting *Concerns* - My main concern is that there seems to be a mismatch between the "language time scales" on which the authors operate: their experiment is designed to investigate the impact of extra-sentential context, but the Lakretz et al results they keep coming back to concern syntactic phenomena that are only relevant *within* a sentence, which is a different scale. In other words, the units found by the authors of this paper are long-distance when it comes to integrating context, but the syntax and number units found by Lakretz et al are not really related to that: they model relationships *within* sentences. Theoretically speaking, they should be reset at the beginning of every new sentence and they should thus be completely independent of the content. That the authors find this to be untrue is interesting, but inconsistent with what Lakretz et al describe these units as doing. Since this is not addressed at all in the paper, it makes the results in general a bit difficult to interpret. _**Update after author response**: In their response the authors clarified that they have only analysed single sentences, where two distinct subsentences are combined with a conjunction. This, unfortunately, does not make a difference for the argument: whether two sentences are split by a full stop or instead concatenated with "and" does not make any difference for the argument above, since the subject-verb agreement relationships modelled by the units the authors look at do not cross these boundaries either. Furthermore, in their response the authors state that they find that the context representations of units were 'reset' at sentence boundaries, as I asked before. I appreciate that the authors did these additional experiments, but I find the result somewhat worrisome: since the units they are looking at are syntactic units that encode number across long-distance subject-verb relationships, they should be reset both when a new sentence starts and when a new conjunct with a new relationship starts. In terms of SV relationships, there should be no difference between "The boy kicked the ball and the girl caught it" and "The boy kicked the ball. The girl caught it." That the authors do find a difference points to a potential flaw in methodology._ - Relatedly, the authors report the result that the syntax unit is a long-distance unit, while the number units are not. This is not consistent with what they say in the related work section, nor with the results reported by Lakretz et al, who hypothesise that the syntax units represent the depth of the syntactic dependency. This is something that changes with every new incoming word, whereas the number units are the ones that have to keep their activation constant across time.
- While, as I said before, I think it is great that the authors try to bring methods from neuroscience into the field, I do think that in this case the main method they propose is only very marginally different from earlier work (in particular Khandelwal et al.). Perhaps it would make more sense to put a bit more stress on the rest of the methods as well (btw, also Lakretz et al do connectivity analysis). - The results are a bit underexplained, and understanding them requires many back and forths to the appendix. I would have appreciated a bit more motivated interpretation of several aspects. For instance: why is there such a large difference in activation differences in different units in the "pre-shared segment" part, and is this related to the half-time (it seems so from the plots)? What is the difference between character and word-level models in terms of expectations (we'd expect there to be an additional level of time-hierarchy, perhaps?) How do assessing activation differences and correlations differ in terms of conclusions? These things should, in my opinion, all be worked out a bit better. - Lastly, there are a few unsupported claims, the most important of which is that their method recovers the previously discovered units of Lakretz et al, while (as far as I understand), they actually only *use* their method to analyse those neurons, but did not find them independently. (for other suggestions and comments, see below). To summarise, while I think the idea is very nice and definitely worth working out further, I do think that some work is needed to make this a publishable paper. *Suggestions/comments for authors* _Typographic_: - If you use quotes in latex, you should use different ones for left (`) and right ('), for them to appear correctly (check for instance line three in the introduction) - To prevent additional spaces after abbreviations like e.g. and i.e., put a backslash: "e.g.\ " - Lerner et al --> put all references within parentheses - Introduction switches from present tense to past tense in the last paragraph - "we measure the time-taken for the effect of this prior context to ”decay” (see Methods)" --> I don't really understand what this means; do you measure how long it takes for these changes to not be measurable anymore? - Try to avoid double parentheses with abbreviations, e.g.: (WLSTM Gulordava et al. (2018)) should be: (WLSTM, Gulordava et al; 2018). You can do this with \citep[text before][text after]{citation}. - "has an 650-dimensional" --> "has a 650-dimensional" - "without fine-tuning to the novel" --> I first thought this sentence was unfinished until I read back and realised that "the novel" is your corpus. This is a bit confusing; perhaps you could rephrase. - "how the cell state activation differ" --> "how the cell state activations differ" - "we will see that the activation difference drop quickly' --> drops quickly / see the activation difference drop quickly - There are several references that were published at ACL* conferences that are listed as arxiv papers in the reference list (Lakretz et al, Gulordava et al, Khandelwal et al) _Content_ - I would say that the conclusion that "Overall, prior works suggests that a small subset of units track long-range dependencies" is rather overstated: Lakretz et al found that the units representing long distance number information were sparse, but this does not imply that long range information in general is represented sparsely.
Their method also focusses quite exclusively on finding sparsely distributed properties, as more distributed properties cannot be found with ablation. Furthermore, this is just one study, focusing on one syntactic aspect. I would suggest rephrasing this a bit. - Lakretz et al actually identified several syntax units, but only one of them was interpretable. - I find it a bit confusing that in 3.2, second paragraph, you first talk about comparing cell state activations, then say that you compare hidden state activations, and then talk again about the cell state activations - Figure 1 C & D: I don't think these figures add much to the paper, for the following reasons: i) they show only individual units and no average, making it difficult to interpret the values; ii) while, as pointed out in 5.1, the *rate* of decay is the most important, the cut-off point is not indicated in the figure, which puts a stress on irrelevant aspects: the actual difference between the two lines. - I would appreciate having Figure A.1 in the main text; it is important for the story.
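Regarding the k-core and connectivity analyses that both reviews bring up (in particular the choice of thresholds and parameters for Fig. 3A), a generic illustration of what such an analysis typically involves — the threshold and the value of k below are arbitrary placeholders, not the values used in the paper:

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
W = rng.normal(size=(50, 50))            # stand-in for a hidden-to-hidden weight matrix
A = (np.abs(W) > 1.5).astype(int)        # keep only "strong" projections (arbitrary cut-off)
np.fill_diagonal(A, 0)                   # k_core requires no self-loops

G = nx.from_numpy_array(A)               # undirected graph of strong connections
core = nx.k_core(G, k=5)                 # subgraph in which every node has degree >= 5
print(core.number_of_nodes(), "units survive the 5-core")
```

The point of the sketch is simply that the result depends on two free choices (the weight cut-off and k), which is why an explicit statement of how they were chosen would help.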
This paper applies methods inspired by neuroscience to analyze the inner workings of LSTM language models. In particular, a simple and clever approach is proposed, in which a sentence is presented in its observed context vs. a random one. The time for a unit activation to become similar in the two contexts is used as a probe of the timescale of contextual effects. The main results are that timescales increase with layer and that there are two classes of long-timescale units with different graph-theoretical properties. The functionality of syntax-sensitive units previously identified in the literature is confirmed. Finally, the analysis is replicated for a character-level model. The paper received detailed and insightful reviews, and there was a lively (but always respectful) discussion between authors and reviewers. Overall, the reviewers liked the topic of the paper and the overall methodology; however, they had several issues with it. One of the issues pertained to the "holistic" approach to time in the paper, which is measured in number of tokens, rather than in terms of syntactic distance. More generally, there was a feeling that the paper was somewhat short on actual insights on the exact functional role of units in a linguistic context. The reviewer who assigned the most severe score was mostly concerned about one specific instance of this, namely the fact that the authors focus on syntax-tracking and number-agreement units whose scope should not really extend across sentences. Moreover, the reviewer was surprised that the syntax-tracking units maintain information across longer distances than the number-agreement units, which should, by definition, keep track of long-distance relations. I am divided. I welcome work that focuses on novel qualitative and quantitative analyses of an existing model. I wish there were clearer take-home messages on how LSTMs process language, but I recognize that our knowledge of deep-learning models is very preliminary, and I am thus not surprised that the conclusions are not entirely clear. The reviewers raised important concerns, but I would not confidently claim that we know enough about the relevant units to be genuinely surprised by some of the results. For example, can we really say that number-agreement units are only limited to clause-internal agreement tracking? Couldn't it be, say, that we will discover in the future they also play a role in tracking discourse-determined pronominal number (going out on a random limb, here, of course)? Overall, I would like to see this at least as a poster at the conference, but I am assigning low confidence to my recommendation as I respect the reviewers' point of view.
The paper explores an alternative loss function for fitting the critic in Reinforcement Learning. Instead of using the standard mean squared loss between critic predictions and value estimates, the authors propose to use a loss function that also incorporates a variance term. The authors dub the approach AVEC. The authors combine their approach with popular RL algorithms such as SAC and PPO and evaluate it on the standard benchmarks for continuous control. Although the paper demonstrates interesting empirical results, I think that the current experimental evaluation has a number of flaws that prevent me from recommending this paper for acceptance. The paper provides basic motivation but it is lacking a thorough theoretical investigation of the phenomenon. Also, the proposed loss is biased under stochastic mini-batch optimization, due to the expectation inside the squared term, which is not addressed in the paper either. Finally, I have major concerns regarding the experimental evaluation. The set of OpenAI mujoco tasks is different from commonly used tasks in the literature. In particular, Hopper and Walker2d, which are used in the vast majority of the literature, are ignored in Table 1 and Figure 2. This fact raises major concerns regarding the generality of the approach. In conclusion, the paper presents interesting results on some tasks for continuous control. However, the paper requires a more thorough experimental evaluation to confirm the statements. Also, a deeper theoretical analysis would greatly benefit this work. I strongly encourage the authors to continue working on this approach and revise the paper to improve the theoretical and empirical analysis. This paper presents a very interesting idea but in the current form it is not ready for acceptance.<doc-sep>### Strengths The paper proposes a simple and elegant idea for changing the value function objectives in deep RL and demonstrates reasonable empirical evidence of its potential usefulness. The authors also provide a clearly articulated intuitive motivation and provide experiments to support the proposal. The idea complements several other algorithms and is therefore quite widely applicable (and easy to try). The analysis of the experiments is also quite interesting and clearly presented. ### Weaknesses The paper is mostly well written and has interesting theoretical insights as well as empirical analysis. Here are some weaknesses. * The theoretical justification for the variance reduction, while technically correct, seems like it should be minuscule in theory. For the $T$ independent RV case being analyzed, the condition required for the improvement is that $\Delta \triangleq 2 \mathbb{V}(X_i) - \frac{1}{T} \sum_{j=1}^T \mathbb{V}(X_j) > 0$, which seems reasonable unless the sample in question is an outlier with a very small variance to begin with. However, the overall reduction itself has another $\frac{1}{T}$ scaling, i.e. the variance reduction over the squared error case is equal to $\frac{\Delta}{T}$, which seems to be vanishingly small as the number of samples $T$ is large even if $\Delta \gg 0$. Note that for the situation where this core idea is being applied, the parameter $T$ is approximately the number of samples in the expectation over $(s, a)$, which is large in practice (a quick numerical check of this point is sketched after this list). * The improvements are a good sanity check, but somewhat marginal in many cases (especially given the error bars).
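The following is pure arithmetic on the quantities named in the variance-reduction point above, not a re-derivation of the paper's analysis: even when the stated $\Delta$ is comfortably positive, the claimed reduction $\Delta / T$ becomes negligible as $T$ (the number of state-action samples in the expectation) grows.

```python
import numpy as np

rng = np.random.default_rng(0)
for T in [10, 1_000, 100_000]:
    variances = rng.uniform(0.5, 1.5, size=T)        # V(X_j), arbitrary illustrative values
    delta = 2 * variances[0] - variances.mean()      # Delta = 2 V(X_i) - (1/T) sum_j V(X_j)
    print(f"T={T:>7}  Delta={delta:.3f}  reduction Delta/T={delta / T:.2e}")
```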
### Additional comments/feedback * In Section 4.2, paragraph on State-value function estimation, line 3, should the targets be $\widehat{V}^\pi$ rather than $V^\pi$? * In Figure 1, some additional detail on the claims seems necessary (e.g. what parameterization is being considered?) * In the discussion below the specification for $\mathcal{L}^1_{AVEC}, \mathcal{L}^2_{AVEC}$, the authors say "the reader may have noticed that these equations slightly differ from Eq. 3", but I am not able to see what difference is being alluded to. * Figure 4 looks quite surprising in terms of the large qualitative difference between the baseline and AVEC-baseline graphs. Just to be sure, do you measure the fit with respect to $f_\phi$ or the bias-corrected version, $g_\phi$? (obviously, the latter makes more sense?). * The ablation study in Section 5.4 seems intriguing, but what the conclusions imply seems unclear. It appears the authors were expecting to see some non-zero value of $\alpha$ improve over $\alpha=0$ (AVEC), but this isn't the case? Some additional clarification here would be useful. Also, it is a bit confusing to separate the plots into two depending on whether the weighting is less than one, as I'm guessing the exact same plot is used for the non-alpha versions in each pair of these graphs? * In Figure 5, the distance to the true value function seems to be relatively flat (or even mildly increasing) through the entire horizon in both graphs. Is this simply due to the resolution, as I'd expect there to be a drop at least in the initial phase over time? <doc-sep>This paper presents AVEC, a new critic loss for model-free actor-critic Reinforcement Learning algorithms. The AVEC loss can be used with any actor-critic algorithm, with PPO, TRPO and SAC being evaluated in the paper. The loss builds on the mean-squared-error, and adds a term that minimizes $E_s [f_{\phi}(s) - \hat{V}^{\pi_{\theta_k}}(s)]$. The addition of that extra term is motivated by recent research on the stability of actor-critic algorithms, and the benefits obtained by the AVEC loss are empirically demonstrated in numerous environments, with AVEC+PPO, AVEC+SAC and AVEC+TRPO. Quality: the paper presents an interesting idea that is simple but well-motivated, and leads to encouraging empirical results. Both the theoretical and empirical motivations are strong. Clarity: the paper flows well and is quite clear. However, an intuition for what the added term in the AVEC loss does is missing. Section 4.2 motivates the added term in a mathematical way, but a few sentences explaining what the added term does, in simple terms, may help the readers understand why AVEC is a better loss than simple MSE. Originality: the contribution of this paper seems original. It builds on recent work, but the recent work identifies problems while this paper offers an original solution to these problems. Significance: the fact that AVEC provides good empirical results, and can be used as the critic loss of any actor-critic Reinforcement Learning algorithm, points to the high significance of this work. Many actor-critic implementations can easily be improved by using the AVEC loss. Another positive point is that the paper discusses how to implement the AVEC loss in algorithms that fit a neural network on batches of samples. This really helps with implementing the proposed loss, which contains an expectation inside an expectation and is therefore not trivial to implement properly. In general, I like this paper and recommend acceptance.
A few questions/issues: - An explicit mention of the gradient of the loss, or at least a discussion of where to stop back-propagating gradients, would have been interesting. $f_{\phi}$ appears twice in the AVEC loss, and it is unclear whether the loss contributes to gradients through $f_{\phi}$ twice, or if the expectation over states is first computed (without computing any gradients) and then used as a constant in the rest of the evaluation of the loss (a schematic of these two readings is sketched below). - As mentioned in "clarity", an intuition of what the added term of the AVEC loss does, especially since it is "inserted" in the mean-squared-error (inside the square), would help the less mathematics-savvy readers. It is not crucial to understand the paper, but the generality of the approach proposed in the paper may lead it to be used often by students, and so an intuition of why AVEC works and what it does would greatly help. Author response: the authors clarified my questions, so I maintain my recommendation for acceptance.
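To make the stop-gradient question above concrete, here is a PyTorch schematic of the two possible readings, assuming for illustration a residual-variance-style critic loss — my own sketch of the ambiguity, not the authors' implementation:

```python
import torch

def loss_both_terms_differentiable(values, targets):
    residual = values - targets
    return ((residual - residual.mean()) ** 2).mean()           # gradient flows through both occurrences

def loss_mean_as_constant(values, targets):
    residual = values - targets
    return ((residual - residual.mean().detach()) ** 2).mean()  # inner expectation treated as a constant

values = torch.randn(256, requires_grad=True)
targets = torch.randn(256)
for loss_fn in (loss_both_terms_differentiable, loss_mean_as_constant):
    if values.grad is not None:
        values.grad.zero_()
    loss_fn(values, targets).backward()
    print(loss_fn.__name__, values.grad.abs().mean().item())
# For this toy full-batch form the two gradients happen to coincide; whether the same holds for the
# paper's exact objective, and under mini-batching, is precisely what the question above asks.
```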
This paper is accepted; however, it could be much stronger if the concerns below were addressed. The theoretical analysis of the proposed methods is weak. * As far as I can tell, the proposition has more to do with the compatible feature assumption than with their method. Furthermore, the compatible feature assumption is very strong and not satisfied in any of their experiments. * Sec 4.2 does not provide strong support for their method. R2 points out issues with their statements about variance and the next subsection argues from an overly simplistic diagram. The experimental results are promising; however, R3 brought up important issues in the private discussion: * Their implementation of SAC systematically produces results worse than reported in the original paper (they use a version of SAC with automatically tuned temperature https://arxiv.org/pdf/1812.05905.pdf); 1a) Their SAC gets average returns of 2.5k at 500k steps while the original implementation gets 3k at 500k steps; 1b) Their SAC reaches 10k on HalfCheetah at 1M steps, while the original paper reports 11k at 1M steps; 1c) The same applies to Humanoid: there is no improvement with respect to the original SAC; * Their approach degrades performance on Hopper. * They use non-standard hyperparameters for SAC: 0.98 instead of 0.99 for the discount and 0.01 instead of 0.005 for the soft target updates. That might be the main reason why their SAC works worse than the original implementation. * The authors use the hyperparameters suggested for HalfCheetahBulletEnv for all continuous control tasks. For HalfCheetah, however, the authors of the stable-baselines repository (which this paper uses) suggest using the hyperparameters from the original SAC paper (https://github.com/araffin/rl-baselines-zoo/blob/master/hyperparams/sac.yml#L48). Nonetheless, the results for the unmodified SAC reported in this work for HalfCheetah/Hopper/Walker/Ant are subpar compared to the original results, suggesting that the hyperparameters for HalfCheetahBulletEnv are suboptimal for these tasks. Given the simplicity of the change and the promising experimental results (with some caveats), I believe the community will find this paper interesting and that it will lead to follow-up work that can patch the theoretical gaps.
This paper presents a method to transfer policies between different MDPs based on the minimization of the Gromov-Wasserstein distance. This distance provides a pseudo-reward that can be used to learn via RL the optimal policy in the target MDP given an optimal policy in the original MDP. The method is optimal if the MDPs can be mapped into each other through an isometry, but also works empirically in other cases. Strengths: - Good mathematical grounding - Further exploration of an interesting alternative to map optimal policies to new embodiments (possible practical applications) - Well written Weaknesses: - The main issue with this paper is the experimental evaluation. The presented results are just images of three cases. The images of the first case (Fig. 3) are hard to see. There is no numerical performance (success, reward, some degree of progress). This makes the experimental evaluation insufficient to understand the applicability of the presented approach. Add more experiments, numerical performance, different metrics… In summary, while the paper presents an interesting turn on previously presented ideas, and the mathematical foundation is well worked out, the experimental evaluation is insufficient to support conclusions about the method. <doc-sep>In this paper, the authors focus on a more general cross-domain imitation learning problem where only expert demonstrations from one domain are available. To solve such a problem, the authors use the Gromov-Wasserstein distance to align and compare states between tasks from different domains and propose Gromov-Wasserstein Imitation Learning (GWIL). They also show theoretically the possibilities and limitations of GWIL. Strengths: 1. This paper introduces and addresses an important and general cross-domain imitation learning problem. 2. Applying the Gromov-Wasserstein distance to align and compare two MDP domains provides insights for studying cross-domain imitation learning. 3. The proposed GWIL is novel. The authors also justify the limitations theoretically. Weaknesses: 1. There are only 3 tasks shown in the experiment section. More experimental results are preferred to show the effectiveness of the proposed solution. 2. Experimental results are not well visualized. It would be better to give a link showing the results in animation. In general, the paper addresses a general cross-domain imitation learning problem where only expert demonstrations are available and proposes a novel GWIL. Though the experimental results are not visualized well, the work is highly likely to offer new insights to researchers in the imitation learning and domain adaptation domains. <doc-sep>A method is proposed for cross-domain imitation learning, without resorting to any form of correspondence. This is done using a Gromov-Wasserstein distance between policies (in practice, Euclidean distances on collected state-action pairs, within a given domain), which finds isometric transformations that best preserve distance measures between the two domains. Given an imitation domain and an expert domain with example trajectories, a pseudo-reward is computed based on the degree to which the distances from a state to its neighbors in the imitation domain are preserved in the expert domain. Given these pseudo-rewards, as computed for collected episodes, SAC is used as an RL algorithm to optimize the policy. The paper contributes both a theoretical analysis and experiments with: U-maze, pendulum-to-cart-and-pole, and half-cheetah-to-fallen-walker.
Strengths: - novel idea, to the best of my knowledge, for a difficult problem; will inspire future work on "learning by analogy" - combination of theory and empirical experiments; I'm surprised by the extent to which the method works in practice Weaknesses: - with no learning curves presented, it is unclear if the cross-domain imitation learning actually provides a benefit, for non-trivial systems, in terms of learning time or performance, as compared to learning from scratch. It would be beneficial to see these learning curves, and a wall-clock compute time comparison. - the limitations could be better articulated. The scalability is unclear (although it is unreasonable to expect the first iteration of an idea like this to scale right away). The isometry constraint is likely to be limiting in many settings, as is the choice of the Euclidean distance metric in the (state,action) space. - lack of an intuitive presentation of the Gromov-Wasserstein distance. I had to go elsewhere to obtain the intuition. The same goes for the actual method used to compute the GW distance on discretely sampled trajectories. The notation used for the GW sets of state-action pairs is confusing, i.e., $\tau$, because the GW distance is invariant with respect to temporal ordering (to my understanding), whereas the notation GW($\tau$, $\tau'$) seems to imply that the ordering needs to be preserved. Perhaps introduce a different notation for the data when the temporal information no longer needs to be preserved? The connections between the trajectories / (s,a) data / occupancy measures need to be better articulated for this reader. Is there a simple figure that could depict the essence of eqns 6 and 7? Figure 1 appears to simply show rotations. If the goal is to show isometry for translations, it would be better to scatter the spirals more irregularly throughout the domain of the figure. Similarly, why not include a reflection, as stated in the caption? In Figure 3, the agent's position is largely invisible. Most of the readers may simply think there is an editing mistake and that the same figure was included 8 times. In Figure 4, do the top and bottom row come from GW-corresponding states/actions? Re: adding time to (s,a) to preserve uniqueness — wouldn't this cause problems, given that the GW distance would now include time? The paper introduces a novel idea for imitation learning. It likely has many limitations, but the idea of finding suitable imitation-based correspondences is one that is being pursued on multiple fronts, and this is a new approach, with a mix of theory and some initial proof-of-concept examples. The paper could do better at explaining core ideas, and still needs learning curves in order to understand the benefit of the cross-domain transfer. <doc-sep>This paper frames cross-domain imitation learning as an optimal transport problem using the Gromov-Wasserstein distance. This problem is highly relevant to imitation learning settings where there is often substantial domain mismatch between action and state spaces, e.g. a humanoid robot learning to walk from a human demonstrator. The paper introduces a reward function that can be optimised and proves that this is equivalent to minimising the Gromov-Wasserstein distance between state-action occupancies of an agent and expert. Substantial discussion/proofs are included to show that minimising the Gromov-Wasserstein distance is equivalent to recovering an optimal policy up to an isometry.
This is both a blessing and a curse, as it allows optimal policies to be recovered under extreme changes or differences between domains, but does mean that recovered policies could be entirely unsuitable due to the isometry. The paper is well and concisely written, although it does get excessively mathy at times, when a figure could be more helpful. Experimental results corroborate the proofs and propositions, and highlight the value of the proposed approach. Strengths: This looks to be a strong contribution, attempting to solve an important problem. The paper is well-written and relatively easy to follow, and the results and proofs are interesting. Questions: It's unclear how easy an objective the proxy reward is to maximise. I would appreciate more clarity around this and around convergence (e.g. can you show some reward curves across multiple seeds/runs?). The Gromov-Wasserstein distance is quite an expensive distance to compute; can you comment on the computational feasibility of optimising eq 7 and potential scaling issues? (A toy numerical illustration of the GW distance itself is sketched after this review.) Repeated mention of seed dependencies and effects is also concerning; I would appreciate more commentary on this. While I agree there are certainly settings where the Gromov-Wasserstein distance makes sense from an imitation learning perspective, recovery up to an isometry can be prohibitive, e.g. a human showing a drone how to take off could result in a policy that lands drones, which is the opposite of what was demonstrated. I would value some discussion on these limitations - would it make more sense to optimise a different distance metric, or to use a different, e.g. non-Euclidean, kernel in settings like these? I suspect this is a non-trivial choice that needs a substantial level of domain-specific knowledge - does this then run counter to the original objectives of this work? Minor: Pg 1. The intro is well written, but it would be great if Fig 2 could be shown earlier to give some more intuition into the Gromov-Wasserstein distance and the solution framing. Pg. 2 typo "This takes us beyond limitation..." In the existing imitation learning literature, much is made of the limitations of learning from non-expert demonstrations. I would be interested to hear how the proposed approach would cope with these? Eq 1 is in dire need of a figure to explain this. As I understand it, although all proofs are provided in finite action and state spaces, the proposed approach is said to scale to continuous spaces as ultimately it is only reliant on a suitable kernel function that can be expressed for continuous spaces. Is this correct? "We will see that in practice, running our method on different seeds enables to find an optimal policy in the agent’s domain". How do we know which seed produced the "right" policy? Fig 3 needs improving - it is extremely hard to see the agent. Fig 4/5. I'd love to see videos of these policies - did it actually learn to balance cartpole, or just to swing up? I enjoyed reading this paper, and think it adds greatly to the conversation around cross-domain imitation learning. The proposed approach has a number of strengths and limitations which I would appreciate hearing more about, particularly when it comes to convergence speeds, repeatability and computational requirements, but also whether the strengths/weaknesses of optimal recovery of a policy up to an isometry have just shifted the need for specification of a mapping between expert and agent into a different domain.
==== Post rebuttal comments ==== Thank you for engaging in the process; I still believe that this is a good paper.
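Since more than one review above asks for intuition about the Gromov-Wasserstein distance and about its computational cost, here is a toy illustration using the POT library (not the authors' code): GW compares the *intra*-space distance matrices of two point clouds, so a cloud and a rigidly rotated copy of it have a near-zero GW discrepancy even though they live in "different" coordinates — which is exactly the "recovery only up to an isometry" point discussed in the reviews.

```python
import numpy as np
import ot

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                      # "expert" samples
theta = np.pi / 3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
Y = X @ R.T                                        # "agent" samples: an isometric copy

C1 = ot.dist(X, X)                                 # pairwise (squared Euclidean) distances within each space
C2 = ot.dist(Y, Y)
p = ot.unif(len(X))
q = ot.unif(len(Y))
gw_value = ot.gromov.gromov_wasserstein2(C1, C2, p, q, loss_fun='square_loss')
print(f"GW discrepancy under an isometry: {gw_value:.2e}")   # close to 0 up to solver tolerance
```

The exact (non-entropic) solver above scales poorly with the number of samples, which is the computational-feasibility concern raised in the review.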
All reviewers suggested acceptance of the paper based on the fact that it addresses an important problem and presents and validates interesting ideas for approaching it. There are some concerns regarding the limited experiments - I'd like to encourage the authors to make an effort to address these concerns, and also a few others raised in the reviews, in the final version of their paper. The authors already made several updates to their paper in that regard during the discussion phase, so I believe that the paper would be an interesting contribution to the conference and I am recommending acceptance of the paper.
Interesting idea applying the representation invariance capability of deep sets to mosaics of patches extracted from a Whole Slide Image. The method was compared with a state-of-the-art search engine with competitive results. The data split is also properly described in the manuscript. Several elements of the method are not described in detail, and describing them would increase the reproducibility of the presented methodology: - The initial selection of the image via clustering is not properly described - The formulation of deep sets is missing in the manuscript <doc-sep>* Modelling the slide as one vector improves search speed over existing work based on bags of patches to represent whole-slide images * The proposed approach outperforms previous work (e.g., Yottixel) in searching similar cases in most primary sites of TCGA. * One reference is missing (?) in section 2 * I don't understand what the advantage is of making a mosaic with 40 images and making batches with 16 of those if, in the end, all patches are treated independently by EfficientNet as 640 different samples * The Yottixel method is heavily used in this paper, from patch sampling and clustering to using its search functionalities. However, this makes the paper lack many technical details (for example, patch sampling and clustering techniques) and also makes it difficult to clearly appreciate what the novel contribution of this work is, since it heavily relies on previous work. If the novelty is in the use of a single vector to represent the slide instead of a bag of patches, then this paper would have benefitted from a more detailed comparison between the two strategies, focusing on failures of one or the other method, for example. Why are there 46 brain patients and CNN-DS finds 91? Are the 45 additional cases some false positives? A confusion matrix would have helped, as well as some discussion on how to further improve the presented method. * The Deep Sets method is used, based on previous work, but is not explained in this paper (a generic sketch of the Deep Sets formulation is given after these reviews) * The use of a permutation-invariant approach based on Deep Sets is not fully justified, and it is not clear why other approaches like pooling would not work here * No visual examples are shown, and there is no discussion on cases of failure and possible reasons why this happens <doc-sep>The authors focused on solving an important task in computational pathology, that is, fast WSI search. The performed experiments show the method's potential. Moreover, the comparison of their approach with another available method shows a significant improvement. The method description should be extended. It is not clear why the mosaic is created from 40 patches. How would the method work if we used 64/32/16/8 patches? Would it be faster? There is a lack of details about the applied data augmentation. The "related work" section does not include sufficient information. A few very important approaches, such as [1] or [2], are missing. [1] Tellez D, Litjens G, van der Laak J, Ciompi F. Neural Image Compression for Gigapixel Histopathology Image Analysis. IEEE Trans Pattern Anal Mach Intell. 2021 Feb;43(2):567-578. doi: 10.1109/TPAMI.2019.2936841. Epub 2021 Jan 8. PMID: 31442971. [2] Campanella, Gabriele, Vitor Werneck Krauss Silva, and Thomas J. Fuchs. "Terabyte-scale deep multiple instance learning for classification and localization in pathology." arXiv preprint arXiv:1805.06983 (2018). <doc-sep>+ The proposed framework is technically sound and easy to implement.
+ It is an interesting topic, which is very challenging and notoriously difficult to address. + This paper is clearly written and easy to follow. - The proposed method is only compared with one method. - The proposed method is only evaluated on one dataset. - Some methods which are related to the proposed method are cited but not compared, e.g., Hemati’s work [1]. [1]. Sobhan Hemati, Mohammad Hadi Mehdizavareh, Shojaeddin Chenouri, and Hamid R Tizhoosh. A non-alternating graph hashing algorithm for large-scale image search. arXiv preprint arXiv:2012.13138, 2020.
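Since several comments above note that the Deep Sets formulation is never spelled out, here is a minimal generic sketch of the idea (Zaheer et al., 2017) as it would apply to a bag of patch embeddings: a per-patch network phi, a permutation-invariant pooling, and a set-level network rho. This is illustrative only — dimensions and layer sizes are assumptions, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class DeepSetsPooling(nn.Module):
    def __init__(self, in_dim=1280, hidden=512, out_dim=128):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        self.rho = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, out_dim))

    def forward(self, patch_features):                   # (n_patches, in_dim), order irrelevant
        pooled = self.phi(patch_features).mean(dim=0)    # mean/sum pooling gives permutation invariance
        return self.rho(pooled)                          # one vector per slide

mosaic = torch.randn(40, 1280)                           # e.g. 40 patch embeddings from a CNN backbone
slide_vector = DeepSetsPooling()(mosaic)
print(slide_vector.shape)                                # torch.Size([128])
# Shuffling the rows of `mosaic` leaves the slide vector unchanged (up to numerical noise).
```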
Initially, the majority of reviewers suggested a weak rejection. After the rebuttal, one reviewer changed their opinion to weak accept, giving an even split between weak reject and weak accept. I do think the authors did an adequate rebuttal, specifically the new experiment on lung cancer to compare against the state-of-the-art. As such I propose acceptance as a poster presentation. I do agree with the comment by Reviewer 1 that some additional discussion on the final experiment should be added to the camera-ready paper.
Knowledge of the underlying graph is not required by the spaceIV estimator. The results section is fairly comprehensive and investigates violations of the assumptions and compares against an existing approach and two oracle algorithms. The graphical conditions for identifiability seem to be quite restrictive, and simulations suggest that the spaceIV estimator can be quite off when assumptions are violated. Consider defining terms that might not be obvious, such as $\mathrm{Id}$ and $\mathrm{Im}$. I recommend providing a discussion about when we might expect the identifiability conditions of this model to hold in the real world. <doc-sep>1. The idea of the identifiability assumptions is new. 2. The introduction of the algorithm for the proposed estimator is clear. 1. The proposed assumptions are not intuitive. 2. There are some errors in the proofs of the theorems. 3. The proposed estimator in the algorithm is not consistent with the estimator used in the numerical experiments. 4. The proposed method performs badly when the sample size is not large, and there are many outliers in the estimates even when the sample size becomes large. 1. I think in the introduction section, "this is the case if we ... being independent of X and I" should be changed to "this is the case if we ... being independent of I," because if $H$ and $\varepsilon^Y$ are independent of X, then we do not need to use the instrumental variable method. 2. Can you explain more about "More precisely, we can choose $I=e_K$ with $e_k$, $k\in\{1,m\}$, being ... and $K\sim U(\{1,\ldots,m\})$"? Based on the definition of $I$, $I$ belongs to $\mathbb{R}^m$. I feel they are contradictory. 3. In Proposition 2, I think it is better to explain the meaning of the notation "dagger" in the main paper, because without referring to the appendix the readers may not understand the meaning of this notation. 4. There are some errors in the proof of Theorem 3; the formulas (6) and (7) are wrong. 5. Can the proposed assumptions be verified in practice? 6. In the algorithm, it proposes to use the limited maximum likelihood estimator (LIML). But in the numerical experiments, it says that "due to computational reasons, in the experiments we use the two-stage least squares estimator instead of LIML. The former estimator minimises the enumerator of (14)". On one hand, they are not consistent. On the other hand, it also shows that LIML is not a good choice in this algorithm. In addition, can you introduce more about the LIML method? What is the meaning of "The former estimator minimises the enumerator of (14)"? (For context, a generic two-stage least squares sketch is given after these reviews.) 7. Based on the simulation results in Figure 4, the spaceIV estimator performs badly when the sample size is not large. The range of the estimated values is very wide and there are some outliers, which means this estimator is not stable. 8. There are some typos in the paper. For example, in Figure 6, "A1&A3" needs to be changed to "A1&A2&A3". Please check all the typos carefully. <doc-sep>+ The paper presents some potentially novel and useful ideas + while I am not aware of many cases where multiple instruments are available, the authors motivate the work in the setting of multiple experiments which may be represented as multiple instruments. While some of the ideas seem to have merit, the paper seems to have been written too hastily to be published as is.
I found the notation very confusing: For example, in Section 2 instruments and covariates are indexed $I_j, X_j$, in Section 2.1 they are indexed $I^j, X^j$, and as far as I could tell, in Section 4 they are just denoted using integers (e.g., $S$ is a set of nodes but takes values in $\{1, \dots, d\}$)? These inconsistencies make it very hard to read the paper and understand the technical part, and many notations are not explained very well (for example, what does $\mathrm{supp}(\beta)$ in the proof of Theorem 3 mean? Isn't $\beta$ an element of $\mathcal{B}$?). In addition, the results are not explained in a very intuitive manner, e.g., why is absolute continuity with respect to the Lebesgue measure important for identifiability? How is condition B2 in Section 4 a graphical condition? I suggest that the authors rewrite the paper with a preliminary section where they can explain all of the notations and definitions used in the text. <doc-sep>The paper is well written and, for the most part, easy to follow. Assumptions about the models and the corresponding graphs are well motivated and made explicit, with helpful examples and descriptions. Both graphical criteria and an estimation algorithm are provided, which makes the proposed methodology easy to apply in practice. The paper suffers from some notational issues, mainly that some concepts are only partially defined or not defined at all, making the paper at times harder to read than it otherwise would have been. (I will list these in more detail in the detailed comments). Section 5 considers the estimation of the causal effect. There seems to be a slight logical jump here, since up until this point no distributional assumptions have been made (aside from the absolute continuity), but suddenly a test statistic is presented that is supposedly F-distributed. I think the authors should clarify exactly what assumptions have to be made about the model so that the test is valid. The time complexity of Algorithm 1 is not considered. The algorithm involves evaluations over all subsets of a specific size, which at a glance would quickly make the proposed estimator infeasible as the size of the graph increases. In the simulations, the authors consider a graph with 20 X-variables, which seems small to me. There are some notational issues in the paper. In section 2, the authors present the SCM, but it is not fully specified. How are the instruments defined in this model? What are $h$ and $g$? In section 2.1, the graphical representation is defined somewhat informally. Later, the associated graph G is used, but it has never been defined and operations such as ancestors (AN) are not defined (is a node an ancestor of itself, for example?) I do not understand the motivation to call the components of instruments "intervention nodes", since these have nothing to do with interventions defined by the do-operator in the SCM context. I guess this is related to the last paragraph of Section 1, but the motivation is not clear to me. Algorithm 1 performs possibly a large number of hypothesis tests, increasing the likelihood of Type I error. I wonder if the authors have considered including a correction for this, based on the number of tests made. In Appendix B, the authors provide a proof of Proposition 7. The claim is that $P(AW \in \mathrm{Im}(B)) = 0$, but at the last step, only an inequality $P(AW \in \mathrm{Im}(B)) \geq 0$ is obtained. Is this a typo? Should the inequality be $\leq$ instead? Also, the final line of the proof mentions "Lemma 7", while this is a proof of Proposition 7.
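For context on the estimator mentioned in point 6 of the second review (two-stage least squares used in place of LIML), here is a generic textbook 2SLS sketch in numpy — standard IV estimation, not the spaceIV algorithm itself.

```python
import numpy as np

def two_stage_least_squares(I, X, Y):
    """I: (n, m) instruments, X: (n, d) treatments, Y: (n,) outcome."""
    # Stage 1: project the treatments onto the instruments.
    Gamma, *_ = np.linalg.lstsq(I, X, rcond=None)
    X_hat = I @ Gamma
    # Stage 2: regress the outcome on the fitted treatments.
    beta, *_ = np.linalg.lstsq(X_hat, Y, rcond=None)
    return beta

# Tiny simulated example with a hidden confounder H.
rng = np.random.default_rng(0)
n = 5000
I = rng.normal(size=(n, 2))
H = rng.normal(size=n)
X = I @ np.array([[1.0, 0.0], [0.5, 1.0]]) + np.outer(H, [1.0, 1.0]) + rng.normal(size=(n, 2))
Y = X @ np.array([2.0, -1.0]) + 3.0 * H + rng.normal(size=n)
print(two_stage_least_squares(I, X, Y))   # close to the causal coefficients [2, -1]
```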
Meta Review: I thought the paper provides a novel family of assumptions concerning IV estimation with a large and structured treatment "variable". The results are of much relevance to the UAI community. However, some of the technical parts of the paper need to be cleaned of (minor, but distracting) mistakes. The graphical characterization of the assumptions does a solid job, but it is still somewhat involved.
This paper provides a benchmark to evaluate approximators for Wasserstein-1 distances as loss functions in the generative adversarial network setting. - **{S1}** While previous works use discrete distributions for benchmarking solvers, this work suggests continuous distributions, which is a novel aspect for benchmarking $W_1$. - **{W1}** The benchmark contains only one image dataset with a single mode (faces). The addition of more image datasets, especially multi-modal ones (e.g. CIFAR-10), would improve the versatility of the benchmark and extend it to conditional models. <doc-sep>The authors propose a generic methodology to construct benchmark pairs with ground-truth OT plan, OT cost, and OT gradient. We can use this tool to evaluate the performance of the neural dual OT solvers approximating the Wasserstein-1 distance or the gradient of the Wasserstein-1 distance. Specifically, the authors employ the 1-Lipschitz MinFunnel functions to compute transport rays and define the ray monotone map. With them, we can define a target distribution $\mathbb{Q}$ and compute the OT cost and OT gradient based on the original distribution $\mathbb{P}$. The authors provide an elaborate introduction to the Wasserstein-1 distance and its neural dual OT solvers, followed by compact mathematical proofs about their benchmark pairs. The experiments are also reasonable. It is also a good point of view to consider the gradient of the Wasserstein-1 distance. Some minor concerns: Is it hard to tune hyperparameters for this method? For example, when you compute the high-dimensional benchmark pairs, you choose $b_n \sim \mathcal{N}(0,0.1)$ and p = 8. How do you choose them? How long does the hyperparameter search take? The dimension of images, in reality, is higher than $2^7$. Can this tool handle higher dimensions? If we carefully choose the MinFunnel function $u$, instead of picking it randomly, will the performance be better? What will be the effect of increasing $N$ and $D$? The paper mentions "in WGANs, the solvers move the generated distribution (bad images, $\mathbb{Q}$ in our construction) to the real distribution (good images, $\mathbb{P}$)". However, $\mathbb{P}$ is the synthetic distribution and $\mathbb{Q}$ is the computed ground-truth 'real image' distribution in the case of the images benchmark. Why do the solvers move $\mathbb{Q}$ to $\mathbb{P}$, instead of the opposite? The authors mention that solvers MM and MM:R take longer to train compared with GP, SO, and LP. Is the time gap significant? <doc-sep>Motivated by the lack of benchmarks for W1 dual methods (other than perceptual measures such as FID or IS), this paper proposes to create a (semi-)synthetic set of benchmark datasets with known optimal transport plans, maps, and distances. To do this, the paper first develops theory about maps that are optimal by construction. Then, the paper proposes concrete methods for constructing the necessary functions and computing the necessary plans, maps and gradients. Finally, synthetic dataset pairs are generated from truncated Gaussian data and CelebA data at various dimensionalities and used to evaluate and discuss many existing W1 methods. - Gives a good overview of W1 methods. - Proves theoretical results about how to construct maps that are optimal w.r.t. W1. - Proposes a novel way to construct ground-truth (semi-)synthetic benchmarks for evaluating Wasserstein-1 dual solvers. - Provides code and datasets for benchmark datasets and algorithms. - Evaluates the gradient of the W1 w.r.t.
the parameters, which is actually most important for most generative methods. - Only one real-world dataset (CelebA) is considered. And the synthetic datasets are quite simple (i.e., truncated Gaussians). It seems that including more real-world datasets (even MNIST or CIFAR10), or interesting real-world tabular data for smaller dimensions (e.g., even something like Iris), would be useful. - (This limitation is mentioned in the text but does seem to be the main limitation) It seems the benchmark only considers maps where the samples are grouped more closely together (or the reverse). Maps that expand parts of the space, or where some parts expand and some contract, would be better. It is unclear whether the benchmark maps properly represent real-world OT maps. - (Minor but nonetheless important for the final paper) All result tables are in the appendix. And the figures are in odd places with nonstandard captions. At least some summary table of the results and your recommendations for suggested methods based on context would be important to include. What methods would you recommend and why? The answer may be a combination of ease-of-use, convergence behavior, and overall performance. <doc-sep>This paper proposes a benchmark to evaluate methods for computing the Wasserstein-1 distance. The authors construct 1-Lipschitz functions and use them to build ray monotone transport plans, which yield pairs of continuous benchmark distributions in high-dimensional spaces. Some WGAN dual-form solvers are evaluated using these benchmark pairs. 1. This paper proposes a benchmark to evaluate methods for computing the Wasserstein-1 distance. The problem is interesting to the community. 2. This paper is well-written and technically sound. The method uses 1-Lipschitz functions to construct pairs of continuous distributions, which is well designed. 3. This paper thoroughly evaluates popular WGAN dual-form solvers in high-dimensional spaces using these benchmark pairs. 1. The title of this paper is ambiguous and may lead to inappropriate reviewers. 2. The theoretical analysis and the intuition of the proposed method are weak. It is unclear why the proposed method works better than previous methods. 3. Evaluating the Wasserstein-1 distance does not directly validate the superiority of the methods on specific tasks, which may need more explanation. <doc-sep>This paper proposes a benchmark for computing the Wasserstein-1 distance. The authors first propose to use 1-Lipschitz MinFunnet functions to build ray monotone transport plans and obtain known OT maps. These ground-truth maps are then used to benchmark dual OT solvers used in particular in the Wasserstein GAN framework. - This paper proposes a method to build **known** OT maps using 1-Lipschitz MinFunnet functions. This choice is clearly justified as these functions are universal approximators of 1-Lipschitz functions (Prop.2). Having known OT maps allows a faithful comparison of the OT solvers - They carefully build the transport rays of these functions. - The paper is well written and easy to follow. - The authors tackle an interesting problem, and having more comparisons like this one is crucial - I regret that the results of the benchmarks are only available in the Appendices. I would recommend that the authors include some of them in the main paper since those are the main results of the paper. - The restriction to 1-Lipschitz *MinFunnet* functions seems to be a main limitation of this work. - It seems that in the experiments only one random start is considered.
Is there any reason why the authors did not perform multiple runs? This makes it difficult to assess the method's stability and robustness with regard to the random start and the parameters $a_n$ and $b_n$ in the *funnel*. <doc-sep>This paper proposes a benchmark for methods computing the Wasserstein-1 distance. Section 1 summarizes background information on computing W1, often with the dual in eq (4) and (5), and how the W1 is used in GAN training. Section 2 summarizes methods estimating the dual potentials and transport maps. Section 3 describes the benchmark distributions, and Section 4 shows the results of evaluating the methods on the benchmarks, which are quantified in Section D of the appendix. + Approximating W1 computations is widely done and a difficult setting to benchmark because the ground-truth transport maps and distances are often not known. I am not aware of established W1 benchmarks, and papers often have to rely on downstream tasks (such as inception scores) to justify an algorithmic improvement to the W1 approximation (for readers unfamiliar with these dual solvers, a generic gradient-penalty sketch is given after these reviews). + This paper presents non-trivial settings where the ground-truth transport map is known and uses them to evaluate existing methods. + The experimental results are thorough and the paper strongly shows that minimax methods solve the benchmark tasks in most settings, at least for obtaining a gradient that approximates the true gradient. + While the paper proposes a new benchmark for approximating the W1, it unfortunately does not present results in established GAN settings as the ground-truth maps are not known. Thus research that is ultimately focused on improving the W1 computations in settings such as GANs may be able to use these benchmarks for preliminary experiments, but these benchmark tasks may not reflect the true difficulties of these established and powerful methods. + It is not clear how "solved" W1 OT is, how much work remains in the field, and how many new directions this benchmark will enable. In other words, better solutions to this benchmark will not directly enable new methods (or new GAN results).
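For readers unfamiliar with the "dual solvers" discussed above, here is a generic sketch of one of the solver families mentioned in the reviews — the gradient-penalty critic of WGAN-GP (Gulrajani et al., 2017) — in PyTorch. This is the standard recipe, not the benchmark's own code.

```python
import torch

def critic_loss_gp(critic, real, fake, lam=10.0):
    # Negated Kantorovich-Rubinstein objective: maximize E[f(real)] - E[f(fake)].
    loss = critic(fake).mean() - critic(real).mean()
    # Gradient penalty keeps the critic approximately 1-Lipschitz.
    eps = torch.rand(real.size(0), *([1] * (real.dim() - 1)), device=real.device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grad, = torch.autograd.grad(critic(interp).sum(), interp, create_graph=True)
    penalty = ((grad.flatten(1).norm(dim=1) - 1) ** 2).mean()
    return loss + lam * penalty

# Usage: inside the training loop, call critic_loss_gp(critic, real_batch, fake_batch).backward()
# and step the critic optimizer; E[f(real)] - E[f(fake)] then serves as the estimate of W1
# between the two batches (up to how well the Lipschitz constraint is enforced).
```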
This paper proposes a new benchmark to evaluate the solution of optimal transport problems. The reviewers concur that the benchmark is well-executed and novel. Some are concerned that a better benchmark for OT problems will not drive progress, as the successes of Wasserstein GANs occur despite their failure to solve OT. However, it seems like a useful intermediate check to deepen understanding of why Wasserstein GANs (and models to come!) work by (at least) eliminating non-explanations.
The paper describes a new loss function for training that can be used as an alternative to maximum likelihood (cross entropy), or as a metric that is used to fine-tune a model that is initially trained using ML. Experiments are reported on the WMT 2014 English-German and English-French test sets. I think this is an idea worth exploring but overall I would not recommend acceptance. I have the following reservations: * I found much of the motivation/justification for the approach unconvincing - too heuristic and informal. What does it mean to "overgeneralize" or "plunge into local optima"? Can we say anything semi-formal about this alternative objective? * The improvements over ML are marginal, and there are a lot of moving parts/experimental settings in these models, i.e., a lot of tweaking. The results in Tables 2 and 3 show a 0.36/0.34 improvement over ML using DSD. (btw, what is meant by "DSD-deep" or "ML-deep"? I'm not sure these terms are explained?) * The comparison to related work is really lacking. The "Attention is all you need" paper (Vaswani et al.) reports 28.4/41.0 BLEU for these test sets, respectively 3.4/5.96 BLEU points better than the results in this paper. That's a huge gap. It's not clear that the improvements (again, less than 0.5 BLEU points) will remain with a state-of-the-art system. And I think the paper is misleading in how it cites previous results on these data sets - there is no indication in the paper that these better results are in the literature. Some small things: * unplausible -> implausible * "Huszár (2015) showed that D(P || Q) is not identical to its inverse form D(Q || P)" this is well known, predating 2015 for sure. <doc-sep>This paper presents a new loss objective for NMT. The main idea is to optimize an interpolation of KL(P|Q) and KL(Q|P), which is the Kullback-Leibler divergence computed at the word level for model distribution Q and true distribution P. The motivation is that KL(P|Q) finds a Q that covers all modes of the data whereas KL(Q|P) finds a Q that concentrates on a single mode (a toy numerical illustration of this asymmetry is given after these reviews). So optimizing on the interpolation gets the best of both worlds. In my opinion, this is a relatively simple and known idea in ML (but perhaps not in MT? I'm not sure.) On the other hand, the NMT experiments are well-implemented and convincingly show that it improves BLEU on a WMT dataset. In general, the experiments look solid. I applaud the multiple baseline implementations, in particular even including the SMT baseline. The lack of transformer/CNN models is not a demerit in my opinion, since the focus is on loss objectives and the LSTM models are just as reasonable. The paper is clearly written, with a few exceptions. It is not clear why you have to first train with ML before switching to the proposed DSD objective. As such, Section 4.5 should be prefaced with a motivation. Also, Figure 3 is hard to read with the two kinds of plots -- maybe split into two figures? An open question is: does your model capture the issues of mode covering as mentioned in the motivation? It would be helpful to include analyses of the word-level distributions to quantify the differences (e.g. word entropy) between ML and various KL/DSD solutions. Also I would recommend showing train/test set perplexity scores of the various proposed and baseline methods. As a minor point for argumentation: it is not clear that your proposal addresses the sequence-level loss vs word-level loss issue.
It is conceivable, but it seems indirect and there is no quantifiable connection between the word-level loss (such as DSD) and a sequence-level loss. Or is there? <doc-sep>This paper describes an alternative training objective to cross-entropy loss for sequence-to-sequence models. The key observation is that cross-entropy is minimizing KL(P|Q) for a data distribution P and a model distribution Q; they add another loss that minimizes the inverse KL(Q|P) to create their dual-skew divergence. The idea is tested in the context of neural MT, using a model similar to that proposed by Bahdanau et al. (2015) with results on English-to-French and English-to-German WMT 2014. In the context of beam search, improvements are small (<=0.5 BLEU) but statistically significant. This is an interesting idea, and one I certainly wouldn't have thought of on my own, but I think it is currently lacking sufficient experimental support to warrant publication. The paper feels strangely dated, with most experiments on two-layer models, and only two citations from 2017. The experiments compare against an in-house maximum likelihood baseline (varying greedy-vs-beam search and model depth), and against a number of alternative training methods (minimum risk, scheduled sampling, RL) with numbers lifted from various papers. These latter results are not useful, as the authors (helpfully) point out that the baseline results in this paper are universally higher than the baselines from these other papers. Furthermore, it feels like methods designed to address exposure bias and/or BLEU-perplexity mismatch are not the right comparison points for this work, as it does not attempt to address either of these issues. I would instead be much more interested to see a comparison to label smoothing (Szegedy et al., 2015), which perhaps addresses some of the same issues, and which produces roughly the same magnitude of improvements. Also, the literature review should likely be updated to include Edunov et al., 2017. In general, the improvements are small (though technically statistically significant), the baseline models are somewhat shallow and the deltas seem to be decreasing as model depth grows, so it is hard to get too excited. Smaller concerns: For Table 1, it would be helpful to explain why Baseline is not equal to $\beta=1$. With some effort, I figured out that this was due to the $\alpha$ term modifying the cross-entropy objective when $\beta=1$. It would also be useful to tell us what "switching point" was used for Table 1 and Figure 2. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. 2015. Rethinking the inception architecture for computer vision. CoRR abs/1512.00567. http://arxiv.org/abs/1512.00567. Sergey Edunov, Myle Ott, Michael Auli, David Grangier, and Marc'Aurelio Ranzato. 2018. Classical structured prediction losses for sequence to sequence learning. In Proceedings of NAACL-HLT 2018.
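Returning to the mode-covering vs. mode-seeking intuition invoked in the second review above, a toy numerical illustration (my own example, not from the paper): against a bimodal target P, a Q that concentrates on one mode has a comparatively small reverse KL(Q||P) but a large forward KL(P||Q), because the forward direction punishes Q for assigning near-zero mass to the other mode.

```python
import numpy as np

def kl(a, b):
    return float(np.sum(a * np.log(a / b)))

eps = 1e-8
P = np.array([0.49, 0.01, 0.01, 0.49]) + eps           # bimodal "data" distribution
Q_one_mode = np.array([0.96, 0.02, 0.01, 0.01]) + eps  # concentrates on the first mode
Q_spread   = np.array([0.30, 0.20, 0.20, 0.30]) + eps  # covers both modes, but blurrily
P = P / P.sum()
for name, Q in [("one-mode Q", Q_one_mode), ("spread Q", Q_spread)]:
    Q = Q / Q.sum()
    print(f"{name}:  KL(P||Q) = {kl(P, Q):.2f}   KL(Q||P) = {kl(Q, P):.2f}")
# Forward KL favours the mode-covering Q; reverse KL favours the single-mode Q.
```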
This paper proposes a new loss function that can be used in place of the standard maximum likelihood objective in training NMT models. This leads to a small improvement in training MT systems. There were some concerns about the paper though: one was that the method itself seemed somewhat heuristic without a clear mathematical explanation. The second was that the baselines seemed relatively dated, although one reviewer noted that this seemed like a bit of a lesser concern. Finally, the improvements afforded were relatively small. Given the high number of good papers submitted to ICLR this year, it seems that this one falls short of the acceptance threshold.
This paper observes that defending MNIST against adversarial attacks remains an unsolved problem. It therefore proposes a model that is robust by design, specifically for the MNIST classification task. Unlike conventional classifiers, the proposal learns a class-dependent data distribution using VAEs, and conducts variational inference by optimizing over the latent space to estimate the classification logits. Some extensive experiments verify the model robustness with respect to different distance measures, with most state-of-the-art attacking schemes, and compared against several baselines. The added experiments with rotation and translation further consolidate the value of the work. Overall I think this is a nice paper. Although it lacks some good intuition, the proposed model indeed shows superior robustness to previous defending approaches. Also, the model has some other benefits that are shown in Figures 3 and 4. The results show that the model has indeed learned the data distribution rather than roughly determining the decision boundary of the input space as most existing models do. However, I have the following comments that might help to improve the paper: 1. It would be more interesting to add more intuition on why the proposed model is already robust by design. 2. Although the paper is designed for MNIST specifically, the proposed scheme should apply to other classification tasks. Have you tried the models on other datasets like CIFAR10/100? It would be interesting to see whether the proposal would work for more complicated tasks. When the training data for each label is unbalanced, namely, some class has very few samples, would you expect the model to fail? 3. Equation (8) is complicated and still model-dependent. Without further relaxation and simplification, it’s not easy to see if this value is small or large, or to understand what kind of message this section is trying to convey. 4. Although the main contribution of the paper is to propose a model that is robust without further defending, the proposed model could still benefit from adversarial training. Have you tried to retrain your model using the adversarial examples you have obtained and see if it helps? <doc-sep>In this paper, the authors argued that the current approaches are not robust to adversarial attacks, even for MNIST. They proposed a generative approach for classification, which uses variational autoencoders (VAEs) to estimate the class-specific feature distribution. Robustness guarantees are derived for their model. Through numeric studies, they demonstrated the performance of their proposal (ABS). They also demonstrated that many of the adversarial examples for their ABS model are actually meaningful to humans, which is different from existing state-of-the-art approaches. Overall this is a well-written paper. The presentation of their methodology is clear, so are the numerical studies. Some comments: 1) It was not very clear to me whether the authors were estimating p(x) for each y. The transition from p(x|y) to p(x) at the end of page 3 was abrupt and confused me. The authors should make it more clear. 2) It would be beneficial if the authors could comment on how tight or loose the lower bound in (2) is, as it is critical in estimating the class-specific density.<doc-sep>Paper summary: The paper presents a robust Analysis by Synthesis classification model that uses the input distribution within each class to achieve high accuracy and robustness against adversarial perturbations.
The architecture involves training VAEs for each class to learn p(x|y) and performing exact inference during evaluation. The authors show that ABS and binary ABS outperform other models in terms of robustness for L2, Linf, and L0 attacks, respectively. The paper in general is well written and clear, and the approach of using generative methods such as VAEs for better robustness is good. Pros: Using VAEs for modeling class-conditional data distributions is an exhaustive approach. The authors show in Fig 4 that ABS generates adversarial examples that are semantically meaningful for humans, which is not achieved by Madry et al. and other models. Cons: 1) The main concern with this work is that it is heavily tailored towards MNIST, and the authors do mention this. Scaling this to other datasets does not seem easy. 2) Using VAEs to model the conditional class distributions is a nice idea, but how does this scale for datasets with a large number of classes like ImageNet? This would result in having thousands of VAEs. 3) It would be nice to see how this model behaves on skewed datasets.
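As a rough illustration of the analysis-by-synthesis classification these reviews describe (per-class generative models, with inference by optimisation over the latent space), the sketch below classifies by optimising one latent code per class. The decoder interface, the simple Gaussian prior term, and all hyperparameters are assumptions rather than the paper's implementation.

```python
import torch

def abs_logits(x, decoders, latent_dim=8, n_steps=100, lr=0.05):
    # For each class, optimise a latent code z so that the class-conditional
    # decoder reconstructs x well; use the negative final loss as the class logit.
    logits = []
    for dec in decoders:                         # one decoder per class: z -> x_hat
        z = torch.zeros(1, latent_dim, requires_grad=True)
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(n_steps):
            opt.zero_grad()
            recon = dec(z)
            loss = ((recon - x) ** 2).sum() + 0.5 * (z ** 2).sum()  # reconstruction + Gaussian prior
            loss.backward()
            opt.step()
        logits.append(-loss.detach())
    return torch.stack(logits)

# toy usage with linear stand-in "decoders" (purely illustrative)
decoders = [torch.nn.Linear(8, 784) for _ in range(10)]
x = torch.rand(1, 784)
predicted_class = abs_logits(x, decoders).argmax().item()
```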
The paper presents a technique for training robust classification models that uses the input distribution within each class to achieve high accuracy and robustness against adversarial perturbations. Strengths: - The resulting model offers good robustness guarantees for a wide range of norm-bounded perturbations - The authors put a lot of care into the robustness evaluation Weaknesses: - Some of the "shortcomings" attributed to the previous work seem confusing, as the reported vulnerability corresponds to threat models that the previous work did not make claims about Overall, this looks like a valuable and interesting contribution.
This work presents a study on 1) combining pseudo-labeling with entropy-based label filtering for representation learning and 2) novel-novel and base-novel manifold mixup with entropy-based filtering for adapting the base representation to novel classes, to improve few-shot image recognition. Under appropriate hyperparameter settings, the proposed approach achieves competitive performance on standard few-shot image recognition benchmarks. Ablation studies are conducted to investigate the gains brought by individual techniques. Strengths: 1) The paper is well-written and easy to follow. Approaches, experiment settings and implementation details are clearly described, in a way that helps reproducibility of the proposed work. Experiment results are well organized. 2) Experiments and ablation studies seem thorough. Standard benchmarks are used. Latest works that follow the same experimental settings are included as baselines. Necessary baselines are included. Ablation studies included all components of the system. 3) Experiment results seem to show highly competitive performance (without transductive learning). Weaknesses: 1) While it is clear that the proposed approach worked, it is not very clear how it worked. Here are my recommendations - For entropy-based filtering, try to fit the qualitative examples in the main paper - Show some hard examples selected by mixup. Maybe something like (0.9x image A + 0.1x image B) - Which classes often benefit from pseudo-labeling, mixup and entropy filtering? 2) While not necessary, it would be interesting to learn about the sensitivity of the proposed approach to hyperparameters. The use of different hyperparameters for 1- and 5-shot might be a deviation from classical setups? Although it's not entirely surprising that different hyperparameters might be necessary as the dataset size changes. It would help if measures are taken to control overfitting to specific benchmarks. ============================= Author response adequately addresses my concerns. Limitations are addressed adequately. <doc-sep>The paper presents a hard manifold mixup process to augment the few-shot samples during fine-tuning for improving accuracy performance. The mixup is carried out in different settings (novel-novel, novel-base), and the hard samples selected from the mixup based on margin are utilized during fine-tuning. The paper's idea of using manifold mixup is complemented by the use of hard sampling during fine-tuning, which is simple and interesting. I do not see much improvement from this method in comparison to previous methods. In fact, in many settings the improvement is negligible, such as in Table 1. Moreover, it is sometimes not practical to have access to the base class samples during the fine-tuning stage, yet the method relies on this assumption. Therefore, source-free fine-tuning is not practical with this method. The pseudolabel creation for base class samples with a classifier initially built on novel target samples is cumbersome, as the label sets do not match in most cases. Therefore, the filtering step built on top of this assumption with an entropy criterion does not make sense when there is a reasonably large domain gap between base and novel classes. Also, many standard benchmark datasets are missing from the evaluation, such as tieredImageNet, CUB, and domain-shift experiments such as mini-ImageNet to CUB. <doc-sep>This paper presents FeLMi, a type of mixup method for few-shot learning, with margin-based uncertainty criteria.
The work aims to augment new data in a mixup form to tackle the overfitting problem in a few-shot classification setting. With this, the presented method consists of six phases. Overall, after pretraining on the base classes and obtaining the pseudo labels of the base examples, both novel-novel and base-novel mixup samples are generated for data augmentation. The pretraining method employs self-supervised-based Invariant and Equivariant Representation learning (IER) [19]. Finally, the evaluation is presented using three few-shot learning benchmarks. In general, the idea of the paper is interesting. Specifically, I like how the presented method builds on several recent works on base examples and the mixup method for few-shot image classification to present a novel data-augmentation method. Additionally, the paper is well-written and easy to follow. However, I think the method contains the following weaknesses: - The overall model is complex (having several stages with different criteria). - The method uses many tricks, such as self-supervised learning for pretraining and active learning for the hard mixup. Therefore, the method's evaluation and justification become more complicated compared to standard methods such as ProtoNet. - The proposed method is evaluated on three small datasets, but I think some large datasets are required. - On the 5-way classification problem, FeLMi does not gain significant accuracy on the CIFAR-FS dataset. I think this is OK, but some extra evaluation might help us understand the proposed method's classification gain/loss. Though the presented method is proposed for few-shot learning, it is only evaluated with image classification. Additionally, evaluation with large datasets (such as tieredImageNet) is missing. While the presentation of the paper is good, I think the author(s) can improve the related work by clearly discussing the difference between related approaches (such as base examples for few-shot learning) and the proposed method. I think the conclusion can include some limitations and future work too. <doc-sep>In this paper, Few-shot Learning with hard Mixup (FeLMi) is developed to mitigate the issue of data scarcity. The proposed method is composed of 6 steps. (1) Pretrain the model on the base dataset using the cross-entropy loss with an auxiliary self-supervised loss. (2) Train a linear logistic regression model on top of the learned feature extractor using the novel data to generate pseudolabels for the base dataset. (3) Filter the pseudolabels based on entropy. (4) Do novel-novel and base-novel mixup to generate more data. (5) Choose hard mixup samples based on the margins in classification probability. (6) Finetune the model using the filtered base data, novel data, and hard mixup data. Experiments indicate that the proposed method leads to improved performance in few-shot learning. Strengths: The paper is well organized and easy to follow. The proposed method is straightforward and easy to understand. Sufficient ablation studies in the experiments. Weaknesses: Among the 6 steps in FeLMi, only Step 5 is original (choosing hard mixup samples). However, Table 4 indicates that Step 5 only leads to marginal improvement in few-shot classification accuracy. The model is pretrained by cross-entropy loss and self-supervised loss. It is known that pretraining with an auxiliary self-supervised loss leads to improved performance in transfer learning for downstream tasks.
Although FeLMi achieves the best performance in Tables 1, 2, and 3, it relies on a stronger pretraining method. It does not demonstrate the effectiveness of hard mixup. Compared with the simplest fine-tuning methods in few-shot learning (e.g. RFS-simple, ECCV 2020), the proposed method is more computationally expensive. Specifically, the proposed method must assign pseudo labels for the base data. It becomes an issue when the base dataset is huge. It’s better to provide an empirical or theoretical analysis of the time and space complexity.
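For readers trying to picture the margin-based hard-mixup selection described in these reviews (mix feature pairs, then keep the samples whose top-2 class probabilities are closest), a minimal sketch follows. The Beta-distributed mixing coefficient and the stand-in classifier are assumptions for illustration rather than the paper's code.

```python
import torch

def select_hard_mixup(feats_a, feats_b, classifier, n_keep, alpha=2.0):
    # Mix two pools of features, score each mixed sample by its top-2
    # probability margin, and keep the smallest-margin (hardest) samples.
    lam = torch.distributions.Beta(alpha, alpha).sample((feats_a.size(0), 1))
    mixed = lam * feats_a + (1.0 - lam) * feats_b
    probs = torch.softmax(classifier(mixed), dim=-1)
    top2 = probs.topk(2, dim=-1).values
    margin = top2[:, 0] - top2[:, 1]          # small margin = hard, ambiguous sample
    idx = margin.argsort()[:n_keep]
    return mixed[idx], lam[idx]

# toy usage: 64-d features, a 5-way linear head, keep the 16 hardest mixes
classifier = torch.nn.Linear(64, 5)
hard_feats, hard_lam = select_hard_mixup(torch.randn(100, 64), torch.randn(100, 64), classifier, 16)
```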
The submission introduces an approach to few-shot learning called Few-Shot Learning with Hard Mixup (FeLMi) which, as its name suggests, applies hard manifold mixup as an augmentation strategy for adapting a pre-trained model to a small training set of downstream examples. The model is first trained on the base classes using a combination of supervised learning and Invariant and Equivariant Representation learning (IER), then a linear classifier is trained on top of the frozen backbone using the novel classes' support set and pseudolabels are generated for the entire base class dataset. Base class examples are filtered to exclude ones with low pseudolabel entropy (using a thresholding hyperparameter). Feature-level mixup is applied on base-novel and novel-novel example pairs, and the resulting examples are subsampled to the N hardest ones based on the difference in top-2 probabilities. The model is then fine-tuned on the pseudolabeled base examples, novel examples, and hard-mixup examples. Results are presented on two CIFAR100-based few-shot classification benchmarks (CIFAR-FS, FC-100) and mini-ImageNet in the 5-way, 1-shot and 5-way, 5-shot settings. FeLMi is shown to outperform competing approaches. Ablation analyses are also presented to assess the contribution of various components on performance improvements. Reviewers highlight the submission's writing quality and clarity (7gPu, 2Qz6, WHpk). Opinions are split on how straightforward the proposed approach is, with Reviewers 3CtS and WHpk noting its simplicity, and Reviewer 2Qz6 expressing concerns over its many moving parts. Opinions are also split on the significance of the performance improvements; Reviewer 7gPu finds FeLMi's performance competitive with competing approaches, and Reviewers 3CtS and WHpk are concerned that the improvements are modest. The authors respond by emphasizing that FeLMi is simple and effective, but Reviewer 3CtS remains eager to see a clearer performance gap. Reviewer 3CtS is also concerned that the approach is not source-free, to which the authors respond that the unlabeled data could also come from another source than the upstream training dataset. Following the discussions, opinions remain divided among reviewers, although the majority is either leaning towards or strongly recommending acceptance. Reviewer 3CtS still recommends rejection, but is open to an acceptance recommendation. I therefore recommend acceptance.
This paper proposes a loss to relax the assumption of using a fixed k for top-k classification learning. The authors use the existing differentiable sorting and ranking operators. Experimental results also achieve state-of-the-art performance on ImageNet. Strengths: (1) The motivation of this paper, drawing k from a probability distribution for training, is clear. (2) The idea of this paper is pretty novel and exciting, and it makes the classification model robust. (3) The extensive experiments conducted on five datasets are sufficient to show the advantages of the proposed idea. Weaknesses: (1) The details of the differentiable sorting networks are not presented. How are the predicted scores of the final classification layer ranked to obtain the probability distribution? (2) In Figure 1, the first row (rank 1) is multiplied by 1 and the second row (rank 2) is multiplied by 0.5. Please explain the reason. This paper derives a family of top-k cross-entropy losses, which is a novel practice. The experimental analysis on ImageNet, including the impact of the distribution and the ranking set size m, is concrete and sufficient. <doc-sep>This paper addresses top-k classification learning. Based on the recent progress on differentiable sorting and ranking, the author proposes a loss function for top-k classification where the k is not fixed but follows a given probability distribution. To improve the efficiency, a splitter selection network is proposed so that fewer layers are required for the sorting network. The proposed loss function can be combined with different sorting methods. In experiments, the loss function is shown to be effective in training a model from scratch on CIFAR-10. It can also be used in fine-tuning on the ImageNet dataset and yields a performance gain. Strengths: 1. The idea of using different probability distributions for k is interesting. The results also demonstrate the effectiveness of this idea. 2. The experiments incorporating different sorting methods are comprehensive. Weaknesses: 1. In my opinion, P_K is more like a set of weights, rather than a probability distribution. If that is the case, I recommend improving the descriptions to reduce confusion. 2. It would be nice to present an experiment with conditional probability distributions for k of different classes based on their semantic meaning (like person, animal). I think it would also be a significant contribution of this paper. This paper proposes a flexible loss function for top-k classification, providing useful insights for image classification. So it is worth reading for researchers in this area. <doc-sep>The paper proposes a method to employ the benefits of differentiable sorting methods towards top-k classification learning. The paper presents several experiments with sampling weights for different ranks and presents the results. The loss is used for fine-tuning in most experiments (apart from the CIFAR100 case). The method seems to give minor improvements on the ResNeXt-101 32*48d baseline. Strengths: - The effort to optimize top-k classification learning through differentiable sorting appears novel to me. The discussion on differentiable sorting is comprehensive. The paper specifically discusses each of the options and how it is optimized for the studied scenario. - Experiments are thorough. - The work is also interesting because the performance gains come only because of fine-tuning. Weaknesses: - The second term in Eqn. 2 would be constant with k=5 (if only the top five rows of the P matrix are constructed).
Expanding Eqn. 2 for the given example in Fig. 1, the loss would be: -log(0.5 * 0.3 + 0.5 * (0.3 + 0.6)), assuming Panda is the ground-truth class. Consider the case of P_K = [0.5, 0, 0, 0, 0.5]; the equation would be -log(0.5 * top1 + 0.5 * (top1 + top2 + top3 + top4 + top5)). If only five columns are reconstructed and if they are column stochastic, then the sum of top1 to top5 would always be 1. Then the second term will always give a constant value. I request the authors to clarify this aspect. - At first, it appears that the distribution would be sampled. However, a fixed distribution is used for each set of experiments. For example, it is either [0.5, 0, 0, 0, 0.5] or [0.2, 0.2, 0.2, 0.2, 0.2] for the entire experiment. Hence, presenting it as "sampled" is confusing. The best results come when you have the top1 and the sum of the top five values. Hence, the initial discussion and intuition can be improved a bit. - The improvements on Noisy Student EfficientNet-L2 are negligible. 88.35 to 88.36 is certainly not statistically significant. Were the experiments for Table 1 also run 10 times (like Table 2)? - Please mention the number of rows that were reconstructed for each experiment. The number of columns (m) is mentioned in the experiments but not the number of rows. - Berrada et al. was used to train the model from scratch. It would be worth comparing their loss for fine-tuning purposes as well. I think that would be a fairer comparison. Although the paper brings several novel perspectives, there remain several ambiguities as well. Some additional experiments and clarifications could also strengthen the draft. Overall, in the current form, the paper is a borderline one and the final decision will depend a lot on the discussion during the rebuttal phase. <doc-sep>The paper proposes a differentiable loss for top-K classification based on differentiable sorting networks, i.e. sorting neural networks in which basic min/max operations are replaced by smoothed versions (i.e. softmax/softmin). The main principle is to use the sorting network to estimate the probability of the rank of each class and then filter only the top-k. An extension consists in considering that k can take several possible values at random (e.g. 50% chance of being 1 and 50% chance of being 5). The resulting loss is experimented on three datasets (CIFAR-100, ImageNet-1K and ImageNet-21K-P) and with three existing sorting networks. Performance is mainly compared to cross-entropy, showing small improvements. Strengths - Set-valued classification is an important topic to cope with class ambiguity. Few works (only one as far as I know [1]) have proposed a top-k loss for neural networks and there is room for improvement - The proposed approach is different from [1] as it relies on sorting networks to determine the set of the most likely classes rather than a purely top-k objective Weaknesses - A first weakness is that the contribution is quite incremental and not well justified from a theoretical point of view. Using sorting networks for top-k is an acceptable strategy from a practical point of view but a bit overkill and not very new from a theoretical point of view. The proposal to use several values of K is also not really justified. The principle of top-K is to predict sets of fixed size contrary to other set-valued classification approaches that attempt to solve other objectives (e.g. adaptive set sizes but equal to K on average). We refer the authors to [2] for a clear overview of the different objectives. Here, the objective is not really clear.
If K is supposed to be a random variable (e.g. 50% chance of being 1 and 50% chance of being 5), that means that for the same image x, the classifier is supposed to return randomly either one class or five classes without any consideration of the image content itself. - Another main weakness is that no significant improvement of the proposed loss over cross-entropy is shown. The reported top-K accuracy gains are not systematic and so low that they may not be statistically significant. As a first step towards a better understanding of the results, the authors should first compute some significance tests (e.g. p-values on several runs and a clear cross-validation procedure for model selection among epochs). But even so, it won’t resolve the fact that the performance gain is observed only for some specific configurations (e.g. a specific sorting network and specific values of K probabilities) and remains very low even in such advantageous conditions. [1] Berrada, L., Zisserman, A., & Kumar, M. P. (2018). Smooth loss functions for deep top-k classification. arXiv preprint arXiv:1802.07595. [2] Chzhen, E., Denis, C., Hebiri, M., & Lorieul, T. (2021). Set-valued classification--overview via a unified framework. arXiv preprint arXiv:2102.12318. An interesting attempt to improve top-K classification, but with consistent limitations: (i) an incremental contribution and no clear justification for considering k as a random variable; (ii) no significant improvement of the proposed loss over cross-entropy
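To ground the worked example discussed above (e.g. P_K = [0.5, 0, 0, 0, 0.5] giving -log(0.5 * top1 + 0.5 * (top1 + ... + top5))), here is a minimal sketch of such a loss. The stand-in permutation matrix and its normalisation are assumptions; a real implementation would take this matrix from a differentiable sorting network.

```python
import torch

def topk_cross_entropy(perm_probs, target, p_k):
    # perm_probs[r, c]: relaxed probability that class c is ranked at position r
    # (columns sum to 1 in this stand-in); p_k[k-1]: weight on "target is in the top k".
    rank_probs = perm_probs[:, target]                 # P(target at rank 1), P(target at rank 2), ...
    cum = torch.cumsum(rank_probs, dim=0)              # P(target within top-1), top-2, ...
    p_in_topk = (p_k * cum[: p_k.numel()]).sum()       # mix over the distribution of k
    return -torch.log(p_in_topk.clamp_min(1e-8))

# worked example mirroring the reviews: P_K = [0.5, 0, 0, 0, 0.5]
p_k = torch.tensor([0.5, 0.0, 0.0, 0.0, 0.5])
perm_probs = torch.softmax(torch.randn(10, 10), dim=0)    # stand-in for a sorting network's output
loss = topk_cross_entropy(perm_probs, target=3, p_k=p_k)
```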
The main consensus among the reviewers was that although the approach is interesting, this submission suffers from two main weaknesses: - The methodology is not very novel, and the proposed parts of the method are not well justified (in particular regarding the interplay of the differentiable sorting approach and the random choice of k) - The results, compared to a standard cross-entropy loss, are not very convincing: there does not seem to be a statistically significant advantage.
This paper considers a continuous version of the classical Hopfield network (HN) model. In contrast to well-studied discrete models where the patterns (vectors) that are stored are discrete, this paper studies continuous vectors and a new continuous energy function. Convergence results to a fixed point are proven for the new rule, and it is shown that for the case of random patterns, the Hopfield network can memorize exponentially many patterns (with high probability). Finally, several implementations are given showing how incorporating the new Hopfield net in classification tasks can improve classification accuracy in regimes where data is scarce and where neural networks do not fare well. The paper is rather long and I did not verify all results. The description appears sound. The proofs appear non-trivial and rather technical. While the results here are nontrivial, I was left wondering about the added value of this new model. One of the biggest advantages of HNs was their simplicity and elegance. More recent results of Hopfield and others with higher-degree energy functions managed to maintain this clarity and brevity. The new model however is significantly more involved. It was not clear to me what is gained by this greater complexity and whether the gains justify the larger complexity. In actual implementations, very limited precision is often necessary. How does this discretization influence the continuous model? How robust is it to rounding errors? Don't we get "old" discrete models in disguise? The (impressive) empirical results raise similar questions. Can't we use old discrete HNs instead of the new model and achieve similar results? It would be perhaps more informative to compare different HNs to the new model presented in this paper. It seems a bit strange that previous uses of (discrete) HNs did not achieve such an improvement in earlier studies. It would be beneficial to add more on related work in this area. The authors might consider breaking their long paper into two different parts, one presenting the theoretical advantages of their new model and the other focusing on practical benefits. Finally, the nature of convergence to a fixed point wasn't clear to me. It seems likely that if patterns are not random, convergence can take a long time, as is the case for discrete HNs. Some recent work about the complexity of finding fixed points of continuous functions may be relevant here: A converse to Banach's fixed point theorem and its CLS-completeness. More specific comments: 1) The paper starts with a rather lengthy discussion of previous work. I would recommend outlining the contributions of this paper earlier on. 2) "converge in one update step with exponentially low error and have storage capacity proportional to..." It was not clear to me that random patterns are considered here. 3) "proven for c = 1.37 and c = 3.15 in Theorem 3" For what c exactly is the result proven? 4) "Furthermore, with a single update, the fixed point recovered with high probability" I presume this is true for random patterns? 5) Is beta > 0?<doc-sep>The paper introduces a new Hopfield network which has continuous states and proposes update rules for optimizing it. It also draws connections between the new model and the attention mechanism used in transformers. A small-scale empirical study is presented. Overall I like the technical contribution of the work but feel the paper could be revised to improve clarity about the optimization in the newly proposed variant of Hopfield networks.
Below are some specific comments: Pros: - Connecting Hopfield networks to the attention mechanism and drawing out the variants in Section 3 (as Hopfield layers) is useful - The exposition in Sections 1 and 2, where the authors describe the Hopfield network with continuous states, is written well (although I do feel the motivation behind the update equations could be explained a bit better) Cons: - As I mentioned earlier, I don't fully understand the intuition behind convergence in one update. Can the authors clarify this? Also, the paper mentions that the update rule in Eqn. (5) converges after one update for well-separated patterns. What happens to the updates / optimization when the patterns are not well separated? This should be discussed after equation (5). Maybe present different scenarios to make it clear. - The empirical study is limited in my opinion and can be improved. Is the trend in Fig. 2 observed more or less across all datasets? Can the authors comment on this? I like the visualization in the figure but it is a bit hard to interpret (perhaps a clearer label for it could help with that). Other comments: - The idea of separated patterns leads me to ask this question: is there any connection of this work to max-margin classifiers / kernel methods? - Did the authors consider what would happen if non-linear transformations (e.g. activation functions in DNNs) are applied on top of the inputs? How does the existing network change in that case? - Can the authors comment on the utility / challenges in applying their proposed method on datasets / tasks beyond the small-scale UCI datasets used in their experiments? e.g. using them in large-scale language modeling tasks where transformers are popular right now. <doc-sep>This work extends the binary Hopfield network (Demircigil et al., 2017) to continuous patterns and states. Connections are drawn between the resulting model and the attention layers of transformers, the pooling operation of LSTMs, similarity search, and fully connected layers. Experimental results are briefly described for analyzing the attention of BERT models, multiple instance learning, and small UCI classification tasks. The proposed model seems very interesting, and the proposed applications seem reasonable at a very high level. However, there is just not enough detail in this paper for me to understand how the models are implemented or why the model works better than other approaches. For example, Section 3 declares three types of Hopfield layers, but without any formal definitions of them, or of how they are integrated into the proposed models. The experiment section compares performance with existing models, but lacks any analysis of why the proposed models work better. Similarly, there is a lack of motivation/intuition in the introduction section. ## After author feedback ## Thanks for the paper update, and now I have a better understanding of the proposed approach. I have updated my review to the following: Previously Widrich+ (2020) showed that integrating transformer-like attention (or equivalently modern Hopfield networks based on softmax) into deep learning architectures outperforms existing methods (kNN and logistic regression) for massive MIL such as immune repertoire classification. More specifically, a pooling layer can be formed by attending over a repertoire of instances with a fixed (but learnable) query vector.
This work provides theoretical analysis of such a layer for its energy function, convergence of updates, and storage capacity, and points to directions of how such a layer can be understood and controlled. It extends the previous experiments: 1) apply HopfieldPooling (attention with a fixed learnable query Q) to more MIL datasets (animal image and breast cancer) and achieve state-of-the-art results. 2) apply Hopfield (attention) to 75 small UCI benchmarks, replacing feedforward nets. Here, SELU units (Klambauer+ 2017) are used to map the input to storage Y and query R. The result is quite positive, beating previous approaches including SVMs, random forests, and SNNs (Klambauer+ 2017). 3) apply HopfieldLayer (attention with fixed training data Y as storage) to 4 drug design tasks, acting as an instance-based learning approach. The results seem quite interesting, indicating that general-purpose layers such as feedforward, pooling, and nearest neighbors can be improved (in terms of robustness, learnability, or controllability) by adding attention-like operations. I think the paper can talk less about existing results, and focus more on the new results and their analysis: - remove the [Immune Repertoire Classification] result since it is from previous work. - move the Drug Design experiment details to the main text, and add some comment about under what conditions Hopfield outperforms/underperforms RF. - for the UCI benchmark experiment, the transformer layer (Vaswani+ 2017) seems to be a natural baseline and should be compared to. Suggestions for the presentation: - The claim that Hopfield layers can potentially substitute LSTMs or GRUs should be stated only in the future work section, since it is all hypothetical with no experimental result at this point. - The word "implemented" in Section 4 seems misleading, as there is nothing changed in the BERT model structure? "Transformer and BERT models can be implemented by the layer Hopfield." - The descriptions could be more specific. For example, in the description of (2) Layer HopfieldPooling and (3) Layer HopfieldLayer in Section 3, R and W_K could be referenced again for the "state (query) patterns" and "the stored (key) patterns", respectively. - It is probably more informative to replace Figure 1 with a table to directly compare the energy functions and update rules of the different Hopfield nets, i.e., classical, exponential and attention. - Avoid using "x" in equation 1, since the symbol has already been used for the stored patterns. - "HopfieldLayer" seems to be a very strange name.
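For readers who want the attention connection discussed in these reviews spelled out, the continuous retrieval update has the form xi <- Y softmax(beta * Y^T xi), which is exactly softmax attention with the stored patterns acting as keys and values. The sketch below is illustrative; the shapes and the beta value are assumptions.

```python
import torch

def hopfield_retrieve(query, stored, beta=1.0, n_steps=1):
    # Continuous Hopfield update, which coincides with (scaled) softmax attention:
    # xi <- Y softmax(beta * Y^T xi). One step often suffices for well-separated
    # patterns; additional steps can be applied if desired.
    xi = query
    for _ in range(n_steps):
        xi = stored @ torch.softmax(beta * (stored.T @ xi), dim=0)
    return xi

# toy usage: recover a stored pattern from a noisy query
Y = torch.randn(64, 10)                     # 10 stored patterns as columns
q = Y[:, 0] + 0.1 * torch.randn(64)
retrieved = hopfield_retrieve(q, Y, beta=8.0)
```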
The novel contributions of the paper are: + it introduces a new Hopfield network with continuous states, which can hence be learned end-to-end with differentiation and backpropagation + it derives efficient update rules + it reveals a connection between the update rules and transformers + it illustrates how the network can be used as a layer in deep neural networks that can perform different functions The presentation was clear enough for the reviewers to understand and appreciate the novelty, although there were a few points of confusion. I would recommend that the authors address several suggestions that came up in the discussions, including: - additional analysis to highlight when and how the network is able to outperform other competing models - intuitions about the proofs of the theorems (it is okay to leave the detailed derivations in the appendix)
This work tackles the problem of learning linear sorting functions with bounded noise under Gaussian marginals. The proposed algorithms enjoy strong theoretical sampling guarantees and a polynomial runtime, for both the normalized Kendall’s Tau distance and the top-r disagreement loss. Strengths: - Presentation: the problem is well introduced and the main results are clearly presented - Impact: the results established seem to be of general interest in addition to solving the label ranking problem - The paper is technically sound. Weaknesses: - The absence of experimental results limits the impact of the work. - Clarity: although the first two sections are very clear, the second half of the paper feels harder to follow. It does not feel clear to me whether the stated algorithms are solutions to the problem with KT distance or with top-r disagreement, or both. The theoretical limitations are adequately addressed. The authors state that the potential negative societal impacts of their work are N/A due to its theoretical nature. It might still be valuable to mention what could go wrong if the suggested algorithms were actually deployed. <doc-sep>This paper is the first to study the problem of learning linear label rankings in the presence of noise. In the label ranking problem, we are given access to samples of the form $(x,y)$ where $x \in \mathbb{R}^d$ and $y$ is a permutation of the sequence $\{1, 2, 3, \ldots, k\}$. For example, this can correspond to a ranking of movies by preferences of a particular user in a movie recommendation system. In the **linear** label ranking problem, there is an additional constraint that the ranking should be such that it can be formed by the indices corresponding to a descending sort of the entries of $Wx$ for some matrix $W \in \mathbb{R}^{k \times d}$. Further, in the **noisy** linear label ranking problem, we are given access not to *pure* samples from a linear label ranking distribution but instead samples whose labels are corrupted by some noise. This paper also assumes that the marginal distribution of $x$ needs to be Gaussian. They provide two algorithms, one improper and one proper, for learning with error bounds in the normalized Kendall tau (KT) distance. They also provide an algorithm with error bounds in the top-$r$ disagreement metric. In particular, their improper learning algorithm in the KT distance uses algorithms for learning linear-threshold-functions (LTFs) in the Massart noise model as sub-routines. Originality: I am not an expert in this area so I am not entirely sure about other related work. The proposed algorithms and getting them to work (as in proving guarantees for them) are quite non-trivial and so the paper is quite original in my opinion. Quality: The submission is technically sound. All claims are well-supported with proofs. Clarity: The submission is clearly written and well-organized. Significance: The paper is the first to study a very natural problem and so I think it is quite significant. Ranking functions have many applications and developing robust algorithms for learning ranking functions can have good practical impact. On the theoretical front, these problems are also clearly of interest to the NeurIPS community. As mentioned on page 2 of the paper, the case of $k = 2$ captures the problem of learning halfspaces with Massart noise - the best paper award winner of NeurIPS 2019 was on this topic. This is primarily a theoretical paper and so the authors have mentioned that it doesn't have any negative social impact. 
<doc-sep>This paper considers the learning of linear sorting functions under Gaussian marginals in the presence of bounded noise. In the special case k=2, the problem reduces to the well-studied learning of halfspaces with Massart noise. The author generalized the problem setting and provided efficient algorithms with respect to Kendall’s Tau distance and top-r disagreement loss. The work makes a significant contribution by proposing the first efficient algorithm for learning LSFs with bounded noise. The basic algorithmic ingredient is an efficient learner ([ZSA20]) for the class of halfspaces (for the special case of k=2). However, the algorithm is generalized to any k (improperly), and is further used to obtain a proper learner using the ellipsoid method. When the error is measured by the top-r disagreement loss, the proper learner also achieves improved sample complexity compared to a naive invocation of the improper learner. The paper is very well-written with technical highlights appropriately placed and the analysis is sound. The work does not have negative social impacts. <doc-sep>The setup is the following: there is an unknown k x d matrix W and a player observes a feature vector x in R^d and a 'ranking' sigma(x) (i.e., a permutation) over [k] generated as follows. The feature vector x is sampled from a d-dimensional standard Gaussian. Then, the permutation sigma(x) over [k] is generated by sorting the indices of Wx in decreasing order. The goal is to learn a matrix ~W which approximates the label ranking, in particular, we want that with high probability over a fresh x drawn from a d-dimensional standard Gaussian, sorting ~Wx gives a permutation which is very close to that of Wx. The paper studies two notions of closeness: Kendall tau distance, and top-r distance. The Kendall tau (KT) distance between two permutations is the fraction of pairs i,j in [k] on which their relative orders disagree. In the learning setup, this corresponds to saying that with high probability over x, with high probability over a random (i,j) from [k]x[k], the relative order of (Wx)_i and (Wx)_j agrees with (~Wx)_i and (~Wx)_j. While the Kendall tau distance is well studied, it is perhaps less motivated in ranking setups, where one is more interested in higher-ranked elements. In settings where higher-ranked elements are more important, the paper studies the top-r distance. This is a 0-1 distance based on whether the top r ranked elements are exactly the same in exactly the same order. While exact versions of the above are relatively simple (an algorithm using linear programming can find ~W), there is some noise in what the player observes. In particular, the player observes a draw from a distribution with the promise that each pair disagrees with the ground-truth ordering with probability at most eta (where eta < 1/2). Results: 1. A polynomial-time algorithm for learning ~W in KT distance from O(d log(k) / (eps (1 - 2eta)^6 ) ) samples up to distance eps. 2. A polynomial-time algorithm for learning ~W in top-r distance from O(d k r / (eps (1-2eta)^6 ) ) samples up to distance eps. Important remarks: The noise model is arbitrary as long as it has marginals on pairs which disagree with probability at most eta. This, along with the fact that sorting functions are linear, makes the problem similar to learning halfspaces with Massart noise. Because of this connection, the assumption that x is Gaussian is somewhat necessary because there are super-polynomial lower bounds in the statistical query model. 
The algorithm proceeds in three steps. First, a reduction from a ranking to O(k^2) binary comparisons. Second, an improper learner which aggregates the O(k^2) binary comparisons. Third, an algorithm which uses the intermediate steps of the improper learner to output a hypothesis ~W. While the first and second steps are known and have appeared in the literature before, the novel aspect of this work is finding the matrix ~W; to do this, the paper proves two interesting geometric lemmas relating the angles between proposed rows of ~W and W with the corresponding KT and top-r distance. Strengths: The paper studies a natural problem in learning rankings. The problems seem like natural extensions of learning halfspaces with Massart noise, and a good model for learning rankings with noise. From a technical perspective, the approach is natural and the geometric lemmas interesting. The paper is also well-written. Weaknesses: I don't really see any strong weaknesses in the paper. The work is purely theoretical at this point, and seems to have no potential negative societal impact.
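For concreteness, the label of a linear sorting function and the normalised Kendall tau distance between two rankings can be computed as follows. This is only the naive O(k^2) pairwise computation used to illustrate the definitions, not the paper's learning algorithm.

```python
import numpy as np

def lsf_label(W, x):
    # Label of a linear sorting function: class indices ordered by decreasing (Wx)_i.
    return np.argsort(-(W @ x))

def kendall_tau_distance(sigma, tau):
    # Normalised Kendall tau distance: fraction of class pairs whose relative
    # order differs between the two rankings.
    k = len(sigma)
    rank_s, rank_t = np.argsort(sigma), np.argsort(tau)
    discordant = sum(
        (rank_s[i] - rank_s[j]) * (rank_t[i] - rank_t[j]) < 0
        for i in range(k) for j in range(i + 1, k)
    )
    return discordant / (k * (k - 1) / 2)

# toy usage under Gaussian x, comparing a ground-truth W with a perturbed copy
rng = np.random.default_rng(0)
W, x = rng.normal(size=(5, 3)), rng.normal(size=3)
print(kendall_tau_distance(lsf_label(W, x), lsf_label(W + 0.3 * rng.normal(size=(5, 3)), x)))
```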
The reviewers are unanimous in their strong positive opinion on this paper. The authors have given the first efficient algorithms with theoretical guarantees for learning noisy linear sorting functions, a relevant and useful problem setup for the NeurIPS community. The reviewers consider the paper clear and well-presented, and thus this is a natural accept.
Summary of paper This paper presents an approach for quantising neural networks such that the resulting quantised model is robust to adversarial and random perturbations. The core idea of the paper is to enforce the Lipschitz constant of each linear layer of the network to be approximately close to 1. Since the Lipschitz constant of the neural network is bounded by the product of the Lipschitz constants of its linear layers (assuming 1-Lipschitz activation functions), the Lipschitz constant of the trained neural network is bounded by 1. This results in a model which is robust to adversarial and random noise, as all directions in the model space are non-expansive. Algorithmically, controlling the Lipschitz constant is achieved by using the orthogonal regulariser presented in Cisse et al., which has the same motivation as this work but targets standard (non-quantised) neural network training. The authors present a thorough experimental study showing why standard quantisation schemes are prone to adversarial noise and demonstrate clearly how this approach improves the robustness of quantised networks, sometimes even improving over the accuracy of the original model. Review: The paper is well written with clear motivation and very easy to follow. The core idea of using an orthogonal regulariser for improving the robustness of neural network models has been presented in Cisse et al., and the authors re-use it for improving the robustness of quantised models. The main contribution of this work is in identifying that standard quantised models are very vulnerable to adversarial noise, which is illustrated through experiments, and then empirically showing with rigorous experiments that the regulariser presented in Cisse et al. improves the robustness of quantised models. The paper adds value to the research community through a thorough experimental study, as well as to industry, since quantised models are widely used and the presented method is simple and easy to use. Some suggestions and ideas: 1. It would be great if the authors could add a simple analytical explanation of why quantised networks are not robust. 2. The manifold of orthogonal matrices does not include all 1-Lipschitz matrices, and the orthogonal set is not convex. I think a better strategy for this problem is to regularise the spectral norm to be 1. Regularising the spectral norm is computationally cheaper than the orthogonal regulariser when combined with SGD using power iterations. Moreover, the regulariser part of the objective becomes nice and convex. 3. Another strategy to control the Lipschitz constant of the network is to directly penalise the norm of the Jacobian, as explained in Improved Training of Wasserstein GANs (Gulrajani et al.). <doc-sep>Summary: The paper proposes a regularization scheme to protect quantized neural networks from adversarial attacks. The authors observe that quantized models become less robust to adversarial attacks if the quantization includes the inner layers of the network. They propose a Lipschitz-constant filtering of the inner layers' input-output to fix the issue. Strengths: The key empirical observation that fully quantized models are more exposed to adversarial attacks is remarkable in itself, and the explanation given by the authors is reasonable. The paper shows how a simple regularization scheme may become highly effective when it is supported by a good understanding of the underlying process. 
Weaknesses: Except for observing the empirical weakness of fully quantized models, the technical contribution of the paper seems to be limited to combining the Lipschitz-based regularization and quantization. Has the Lipschitz technique already been proposed and analysed elsewhere? If not, the quality of the paper would be improved by investigating a bit more the effects of the regularization from an empirical and theoretical perspective. If yes, are there substantial differences between applying the scheme to quantized models and using it on full-precision networks? It looks like the description of the Lipschitz method in Section 4 is restricted to linear layers and it is not clear if training is feasible/efficient in the general case. Questions: - has the Lipschitz technique been proposed and analysed elsewhere? Is the robustness of full-precision models under adversarial attacks also improved by Lipschitz regularization? - how popular is the practice of quantizing inner layers? Has the performance of fully quantized models ever been compared to full-precision or partially quantized models in an extensive way (beyond adversarial attack robustness)? - are the adversarial attacks computed using the full-precision or the quantized models? would this make any difference? - the description of the Lipschitz regularization given in Section 4 assumes the layers to be linear. Does the same approach apply to non-linear layers? Would the training be feasible in this case? <doc-sep>imho, this manuscript is clearly written, addresses a confusing point in the current literature, clarifies some issues, and provides a novel and useful approach to mitigate those issues. reading the other comments online, the authors seem to have addressed those concerns as well.
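As a rough illustration of the kind of orthogonality/Lipschitz regularisation discussed in these reviews, the sketch below penalises the deviation of each layer's Gram matrix from the identity, which pushes each linear layer towards being non-expansive. The weighting of this term in the overall training loss and the choice of layers to regularise are left to the practitioner; this is not the paper's exact implementation.

```python
import torch

def orthogonality_penalty(weight):
    # Penalise the deviation of W W^T from the identity so that the layer's
    # rows are roughly orthonormal and the layer is close to 1-Lipschitz.
    rows = weight.view(weight.size(0), -1)
    gram = rows @ rows.T
    eye = torch.eye(gram.size(0), device=weight.device)
    return ((gram - eye) ** 2).sum()

# usage: add a weighted penalty over all linear/conv layers to the training loss
model = torch.nn.Sequential(torch.nn.Linear(32, 64), torch.nn.ReLU(), torch.nn.Linear(64, 10))
reg = sum(orthogonality_penalty(m.weight) for m in model.modules()
          if isinstance(m, (torch.nn.Linear, torch.nn.Conv2d)))
```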
The reviewers agree the paper brings a novel perspective by controlling the conditioning of the model when performing quantization. The experiments are convincing. We encourage the authors to incorporate the additional references suggested in the reviews. We recommend acceptance.
This paper proposes an algorithm for computing an approximation of the posterior and marginal likelihood by analysing the sequence of programs using neural networks, as well as a meta-algorithm for learning the network parameters over a training set of probabilistic programs. Experiments demonstrate the feasibility of the meta-algorithm for learning inference algorithms that generalise well to new but similar programs; these learnt algorithms were sometimes found to outperform alternatives in terms of time-efficiency. There were quite a few terms that I was not familiar with - for example, what is the "state" of an algorithm? I did not find a formal definition of this term in the paper. In the definition of the INFER function, what does the keyword "in" mean? It did not seem clear how well the white-box inference algorithm performs when the number of commands in the program is very large. I am not at all familiar with probabilistic programming; this does look like a serious piece of work, though I do not know how novel it is or whether the claims in the paper are correct. <doc-sep>The paper presents an algorithm for composing inference algorithms out of simpler neural net building blocks, one per unique statement type in the probabilistic programming language presented. The language is simple without recursion or loops, reducing issues due to feedback problems from the approximation. The networks are trained using HMC or importance sampling samples from many programs in a similar space. There is an empirical study of several classes of small Gaussian probabilistic programs. - The paper claims that the learned inference algorithm works well for tasks which are "similar" to the training problems, but the notion of similarity is not fully defined, nor is there an example of how the system fails when applied to a dissimilar program. Are there diagnostics for checking the output when the model is applied to a program which is too dissimilar? - The experimental study tests performance within a family of programs, each using small neural networks to infer each program statement. Does training on all families of programs allow the system to make accurate inferences on any of those families? Does it allow it to generalise across families? Without some notion of how the system generalises I'm not sure when I would choose to use this rather than running HMC on my program, given a single HMC run will be faster than training the neural networks on multiple HMC runs for different programs "close" to the program of interest. - The experiment in section 5.3 shows that the system is around 2x faster than importance sampling from the prior, but this doesn't take into account the time necessary to train the neural nets nor the time taken for the importance sampling runs used to generate the training data. - How are losses propagated through the program? If each neural net is 3 layers, then programs with 10 statements have at most 30 layers, which is usually past the point where some amount of regularization or normalization is necessary to stabilize training or prevent vanishing gradients. Could the authors comment on the stability of training? - What's the failure mode when the test loss diverges? Is it detectable without having HMC or other high quality samples? - How robust is this approach to differing choices of neural net architecture? 
The paper uses a 10-dimensional state space when parsing the program, but it's not clear how this value should be modified as the number of latent variables or the program complexity changes. - Overall the paper is well written and explained, and the experimental study is detailed for the areas it covers. The paper presents a learned inference algorithm, but it's not clear how it generalises either across program types (which is necessary to amortize the training cost wrt HMC), or across neural network architectures (e.g., changing the internal state space in response to increased program complexity). Additionally, it's noted that occasionally the test error diverges, but there's no discussion of how to detect this in practice if the system was used for inference. <doc-sep>This paper introduces a meta-learning algorithm for learning inference algorithms applicable to any probabilistic program. This is accomplished by associating a neural network with every grammar rule of a probabilistic programming language and outputting posterior draws. This generalisation is possible because each neural network component is fed marginal likelihood information for each PPL instruction. This work is very interesting and novel. It's a unique attempt to learn a general inference algorithm. I think meta learning is something that is a good fit for many Bayesian approaches and I want to see more work like this. I'm curious about the expressivity of the language. The grammar suggests a modelling language that consists of a sequence of commands. I don't see how this language would be able to express recursive programs. It seems one would need something like label and jump commands to accomplish that. This is admitted in the appendix but not really acknowledged in the main text. I think the main paper should reflect the present limitations and not overpromise the existing contribution. I have some concerns about the experiments. Some examples are fairly simple and the results for the more substantial ones are not shown (like hierd and rb). The test losses look fairly bad for the experiments that are shown, so I'm not fully sure generalisation has been demonstrated. If the issue is a few bad generated programs maybe show median loss? The results in Figure 4 seem to be for during training, but feasibility presumably requires similar results on unseen programs at test time? I also worry about correctness. As the neural networks are used as is and not as a proposal, how much can we trust the posteriors that come out of this method? Is there anything that can be said about the learnt posterior? It seems right now that the learned distribution can recover the mean of the true parameters and maybe the variance? Some of the writing is slightly sloppy. For example the phrase "so-called static single assignment assumption" is used. I don't know what that means, but I do know there is a static single assignment intermediate representation that exists within many compilers. I think that's what the paper meant to refer to. Related work should cover how this approach differs from Stites et al. (https://arxiv.org/abs/2103.00668). I recommend this paper for rejection. While I think the approach has the potential to work, right now it's very hard to get a sense of what was learned by the meta learning algorithm, how well any of this generalises when model structure or even observed data changes significantly, or even if the language is too restricted to make this a significant enough contribution. 
<doc-sep>The paper proposes a new restrictive class of probabilistic programs with a fixed number of random variables and without loops. The authors then propose an inference technique that learns the parameters of a neural network for sampling from the program's posterior distribution by composing it from individual neural networks for each atomic command in the language. This technique is shown to perform well during inference once the neural network has been trained on training programs. At the very least, the inference speed is shown to be very high. The main drawback in this work is a lack of novelty. The use of neural networks to train a proposer for a model is not new. While the authors attempt to cast their work as something different from IC, this doesn't quite come through. It is hard to see how a neural network for one program can be used for a different program, given that the neural network takes a one-hot encoded representation of the variables in the model. A clear technical statement of what kind of cross-model generalization is possible is needed. The paper shows results across model structures where the dependency graph and the position of a function changes but the number of variables is the same. It is not clear why IC can't deal with this minor variation in the same probabilistic program. These models are so simple, how hard would it be to train IC on these models and then do inference? I would like to have seen IC results in this paper to believe that this work is different. The language chosen is not a universal PPL. I can't follow how this can be used as an intermediate language by a compiler for a universal PPL as claimed on page 3. Please show an example of how a program with unbounded random variables can be compiled into this language. The inference in this paper looks like mean-field variational inference, which makes me wonder whether HMC is really such a good comparison. Please show some comparisons to VI. In Stan this would be trivial to run since you are already running HMC. The models are very simplistic with no discrete variables and no multimodal posteriors. It is not a meaningful claim to make (footnote on page 9) that the inference algorithm provides good coverage of the posterior by covering all the modes. The posteriors shown for multimodal models should at least look multimodal. Variational distance or symmetric KL divergence results would be needed to make claims about correctness of the posterior. Regarding ESS-per-second results, these can be misleading. The algorithm might have a cap on ESS, for example. It would be better to run your algorithm for the same duration as HMC and show higher ESS numbers. The claim that the paper provides generalization of compiled inference across models is not supported by the description or the simple examples. These appear to be covered by existing work on inference compilation. The focus on a very restricted class of PPLs makes this work very limited. I don't believe I learned anything from this paper.
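To illustrate the compositional idea described in these reviews (one small network per statement type updating a shared state, with the final state mapped to posterior parameters), here is a rough sketch. The statement encoding, the network sizes, and the output head are all assumptions, not the paper's architecture.

```python
import torch

STATE_DIM = 10   # matches the 10-dimensional state mentioned in the reviews

class StatementNet(torch.nn.Module):
    # One small network per statement type; it updates the running state from
    # an encoding of the statement's arguments. Sizes are illustrative only.
    def __init__(self, arg_dim):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(STATE_DIM + arg_dim, 32), torch.nn.ReLU(),
            torch.nn.Linear(32, STATE_DIM))

    def forward(self, state, args):
        return self.net(torch.cat([state, args], dim=-1))

def run_inference(program, nets, head):
    # Fold the per-statement networks over the program, then map the final
    # state to posterior parameters (e.g. a mean and log-std per latent).
    state = torch.zeros(STATE_DIM)
    for stmt_type, args in program:          # program given as (statement-type, encoded-args) pairs
        state = nets[stmt_type](state, args)
    return head(state)

# toy usage with hypothetical statement types and a 2-parameter posterior head
nets = {"sample": StatementNet(4), "observe": StatementNet(4)}
head = torch.nn.Linear(STATE_DIM, 2)
program = [("sample", torch.randn(4)), ("observe", torch.randn(4))]
posterior_params = run_inference(program, nets, head)
```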
The paper presents a meta-algorithm for learning a posterior-inference algorithm for restricted probabilistic programs. While the reviews agree that this is a very interesting research direction, they also reveal that there are several questions still open. One reviewer points out that learning to infer should take both the time for learning+inference and the generalization to other programs into account: what happens if the program is too different from the training set? Does the benefit then vanish? Moreover, as pointed out by another reviewer, recursion as well as while loops are not yet supported. Also, the relation to IC needs some further clarification. These issues show that the paper is not yet ready for publication at ICLR. However, we would like to encourage the authors to improve the work and submit it to one of the next AI venues.
This paper presents an important and interesting approach to fully decentralized MARL. Fully decentralized Q-learning is highly applicable to realistic and real-world applications. The method is evaluated extensively, showing great potential. ## Strengths - Theoretical analysis - Extensive evaluations - Interesting perspective on the MARL problem with potential real-world applicability ## Weaknesses - Missing some related work on fully decentralized MARL - Lack of SOTA baselines Yes <doc-sep>This paper proposes a new MARL algorithm under the DTDE paradigm. Specifically, the proposed algorithm (I2Q) is introduced based on the ideal transition probability (where each agent assumes that the others adopt the optimal actions for each decision) and a previous idea named QSS-Learning. A theoretical guarantee on the convergence of the proposed algorithm is provided under certain conditions. For experimental studies, the significant superiority of I2Q is demonstrated in matrix games, MPE, MA MuJoCo and SMAC. Pros: + This paper is clearly written. + The experimental part is relatively diverse and adequate. Cons: - The novelty of the proposed algorithm is limited. Minor issues and typos: - l.299 "SAMC" → "SMAC" The rationality and limitations of the main assumptions adopted in this paper are discussed in Sec.3.4. <doc-sep>This paper presents I2Q, an algorithmic approach for decentralized MARL. The authors present the non-stationarity problem in this setting and propose to use "ideal transition probabilities" to solve it. Particularly, these are transition probabilities for which all agents are ensured to converge to an optimal solution when trained in a decentralized manner. The authors then propose to use the next state (in deterministic environments) as a representation of an action, and show that it induces an ideal transition probability, which ensures convergence to an optimal solution. They run experiments against many baselines in various domains, showing the benefit of their approach. The paper proposes an elegant solution to the non-stationarity problem of decentralized MARL. I'm not able to say if it is the first method to solve this problem, and I hope one of the other reviewers will address this. The paper is clearly written, and the presentation is great. Also, I found everything to be easy to read and follow. Finally, the experiments section seems to have chosen a wide variety of tasks, and I'm glad the authors also chose to show results on the high-dimensional problem of SCII. ---------------------------------------------------------------------------------------------------------- The paper doesn't have strong flaws, but there are some issues that make it a borderline paper for NeurIPS. First, the theory is not very deep. There are many questions that remain open that the authors don't address theoretically, and I think are important for a better understanding of the problem. One of these is a convergence proof of I2Q, which the authors don't really prove, but only discuss informally. Second, I feel that the deterministic assumption in the paper is a strong one, unless carefully addressed. In favor of the authors, they do discuss this in the paper, showing a result on the value gap, and also experiments on a wide variety of tasks. Still, I believe this is not adequately addressed. A stronger result for stochastic environments should be provided. I assume there exist some "ideal transition probabilities" for this setting.
If such probabilities are impossible to find theoretically, then this is an important point to address in the paper. Overall, I find Theorem 3 to be a trivial result. I wish to see an approach that tackles stochasticity explicitly and provides a tighter bound for approximation errors. Third, the fact that I2Q must learn a forward model is troubling, as model-based methods usually fail against state-of-the-art model-free methods on high-dimensional tasks (unless latent spaces are used, such as in MuZero). The authors don't address the problem of estimating $f$ in their work. Moreover, I feel that this is not addressed fully in the experiments either. Finally, while the experiments show results on different types of environments, I find that I2Q was not compared against enough baselines. There are a lot of new baselines in MARL, and particularly I would expect the authors to compare I2Q to at least three more baselines which are considered SOTA, and not only IQL - even if they are not decentralized. ---------------------------------------------------------------------------- Strengths: 1. A new solution for decentralized MARL 2. Proofs of formal statements seem correct 3. Paper is clearly written and presentation is great 4. Experiments show a variety of interesting tasks Weaknesses: 1. Theory is weak 2. Stochastic environments should be addressed 3. Forward model should be addressed theoretically and in experiments 4. Experiments are lacking comparison to other algorithms The authors discuss limitations of their work. Some of these limitations coincide with points I've already raised. As mentioned above, I believe some of these points should be addressed more thoroughly in the paper.
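To make the QSS idea referenced in these reviews concrete, below is a toy tabular sketch of my reading of it: values are defined over (state, next-state) pairs, and actions are recovered through a forward model (assumed exact here; learned in I2Q). This is my own illustration under those assumptions, not the I2Q algorithm itself.

```python
# Toy tabular QSS-style sketch: Q is indexed by (state, next_state), and the
# greedy action is recovered through the deterministic forward model f.
import numpy as np

n_states, n_actions, gamma, lr = 5, 2, 0.9, 0.1
rng = np.random.default_rng(0)
f = rng.integers(0, n_states, size=(n_states, n_actions))   # s' = f(s, a), assumed known here
r = rng.normal(size=(n_states, n_states))                   # reward for transition s -> s'
Q = np.zeros((n_states, n_states))                          # Q(s, s') instead of Q(s, a)

for _ in range(2000):
    s = rng.integers(n_states)
    a = rng.integers(n_actions)                              # exploratory action
    s_next = f[s, a]
    reachable = f[s_next]                                    # next states reachable from s_next
    target = r[s, s_next] + gamma * Q[s_next, reachable].max()
    Q[s, s_next] += lr * (target - Q[s, s_next])

# greedy action recovery: pick the action whose predicted next state looks best
policy = [int(np.argmax([Q[s, f[s, a]] for a in range(n_actions)])) for s in range(n_states)]
print(policy)
```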
The paper presents a novel method for dealing with nonstationarity in decentralized multi-agent reinforcement learning (MARL). While there are some concerns about the level of novelty, the approach is interesting and presented well. There are also concerns about the discussion and comparison with the state-of-the-art in decentralized MARL methods. We suggest the authors include comparisons to other decentralized MARL methods (such as the ones below) or state why such comparisons are not reasonable. Omidshafiei, Shayegan, et al. "Deep decentralized multi-task multi-agent reinforcement learning under partial observability." International Conference on Machine Learning. PMLR, 2017. Palmer, Gregory, et al. "Lenient Multi-Agent Deep Reinforcement Learning." Proceedings of the International Conference on Autonomous Agents and MultiAgent Systems. 2018. Lyu, Xueguang, and Christopher Amato. "Likelihood Quantile Networks for Coordinating Multi-Agent Reinforcement Learning." Proceedings of the 19th International Conference on Autonomous Agents and MultiAgent Systems. 2020.
The authors propose a Variance-Invariance-Covariance regularization technique for self-supervised learning. The loss function used in the paper consists of three terms: the invariance term, encouraging samples with different views to have similar embeddings; the variance term, which is a hinge loss on the variance of the embedded variables (this is the main contribution of the paper, and the authors claim that it helps to avoid variance collapse); and a covariance term which borrows from the previous work Barlow Twins. The proposed method has greater flexibility for Siamese architecture design, such as not requiring batch-normalization and weight-sharing, which the authors claim opens the door for multi-modal signal embedding. Experiments and an ablation study have been conducted to demonstrate the performance of the proposed components. Strengths: + The authors did a very good job in explaining the background and presenting the paper. The main idea is conveyed very clearly. + The idea of adding a variance term to the total loss to avoid representation collapse is interesting, intuitive and novel. + A great number of experiments compared with prior methods, with a detailed setup, have been conducted. + Ablation analysis has also been conducted, showcasing the effects of different components. + A study on multi-modal signal representation learning is presented, demonstrating the importance of not requiring architecture or weight sharing in the two branches. Weaknesses: - It seems that the main contribution, which is the variance term, plays a somewhat insignificant role in Table 1 and Table 2. In fact, compared to Barlow Twins, which does not have the variance term, the proposed method in many cases actually underperforms. - Not requiring shared weights between different branches is a feature of Barlow Twins as well. Can the authors provide an explanation of the inferior performance of Barlow Twins in Table 3 and Table 5? - The authors mentioned that using the standard deviation instead of the variance in the hinge loss is important. Can a toy numerical example be provided to showcase the presence of representation collapse when the variance is used? The paper is easy to understand, and has its contribution and novelty. Many experiments have been conducted, but theory is a bit lacking. I am willing to increase my rating if the authors can respond to my comments. <doc-sep>This paper combines three objective functions for self-supervised visual pre-training on ImageNet. (1) The alignment between two different views of an identical image, which is very common for existing methods; (2) the covariance term to bring the off-diagonal coefficients of the features' covariance matrix to zero, which is modified from Barlow Twins; (3) the variance term that defines a hinge function on the standard deviation of embeddings along the batch dimension for every specific dimension of the feature projections. To the best of the reviewer's knowledge, such an objective function is first applied to visual pre-training in this paper; the same measure has been used to analyze the model collapse problem (e.g., in the SimSiam paper), but was not designed as a specific pre-training loss function. Strengths: 1. The paper is well-written and easy to follow; 2. The method is simple and achieves comparable performance for both linear evaluation and downstream transferring; 3. The authors provide a clear and detailed discussion to compare this work with previous methods. Weaknesses: 1.
The reviewer does not feel very excited about the work. In fact, the three loss functions are not very novel. As the reviewer mentioned in the summary, the covariance term is just directly modified from Barlow Twins. The same measure as the variance term has been used in some previous works (e.g., SimSiam) to analyze the model collapse problem, though it was not designed as a pre-training loss function. 2. In Table 1, the comparison with previous methods might not be very fair. In particular, some compared methods such as MoCo v1/v2, SimSiam and InfoMin are pre-trained for only 800 epochs, while the proposed model is pre-trained for 1000 epochs. Besides, some of the previous methods do not use the LARS optimizer and warmup strategy that are applied in this work. 3. While the proposed method is simple, the computation time of the covariance matrix is quadratic in the feature dimension, which slows the pre-training significantly. 4. Although the authors have provided detailed discussions to illustrate the differences between this work and previous works in terms of design details, can the authors elaborate theoretically on the advantages of the variance and covariance terms over the whitening operation in W-MSE? 5. Besides ResNet-50, it would be more beneficial to the community if the authors could compare the proposed method with MoCo v3, by showing the performance with a Transformer backbone. Overall, the reviewer tends to vote for acceptance of this work since the proposed method is simple and the paper conducts thoughtful experiments to demonstrate its effectiveness. The reviewer encourages the authors to speed up the proposed method, make the comparison with previous methods fairer, and try to test the method on different architectures. <doc-sep>The paper proposes a new self-supervised method. A new loss is designed to explicitly avoid collapsed solutions. Advantages: 1. The authors give an explicit loss function to deal with the collapsed-solution problem, which is understandable and explainable compared with BYOL and SimSiam. And the design of constraining the standard deviation of each dimension is insightful. 2. The application of the variance and covariance terms to other methods, especially SimSiam, is interesting, which can help people understand the mechanism of how negative-free methods work. 3. Well-written and easy to follow. Comments: 1. The invariance term and covariance term seem like a decoupled version of Barlow Twins. So the main difference appears to be the variance term. However, from the results, it seems that VICReg does not bring extra improvements compared with Barlow Twins. It is not clear what kind of problem the authors aim to solve. If the variance term is the key, it would be better to show the std of Barlow Twins features and give more analysis of why the combination of variance-invariance-covariance is advantageous. 2. The authors emphasize that one of the advantages of VICReg is that it does not require weight sharing. It is true that VICReg can work without a Siamese network design, but this property may not be an exclusive advantage of VICReg. According to my understanding, SimCLR and Barlow Twins can also work with two different architectures. I think the authors should also compare with these methods in the setting of non-shared architectures. 3. About the ESC-50 experiments: it is not clear why VICReg performs much better than Barlow Twins in this experiment. And I cannot find details in the paper on whether Barlow Twins also uses the multi-modal data.
Because I believe that Barlow Twins can also work with different architectures, it is important to figure out why VICReg performs better. 4. Table 4 shows the effect of the variance term and covariance term on different methods, but Barlow Twins is missing. I believe the effect of the variance term on Barlow Twins is a key experiment to compare. The variance-invariance-covariance framework is insightful, but the experiments are not so convincing. <doc-sep>The paper proposes a novel objective function for self-supervised representation learning. The objective function consists of three terms: the invariance, the variance, and the covariance terms. The invariance term drives representations to be invariant to input transforms, the variance term ensures each dimension of the representation has enough variability, and the covariance term inhibits co-adaptation of dimensions. The proposed objective function shows performance competitive with existing self-supervised learning techniques. # Strengths - The overall exposition of the paper is clear and easy to follow. - The proposed method is simpler than the previously proposed self-supervised learning techniques. It is agreeable that the variance and covariance terms prevent the collapse of representations. - The ability to handle heterogeneous encoding networks seems to be a meaningful improvement. - The proposed method requires a moderately sized batch of 2048. # Weaknesses - It is unclear whether the collapse of representations, the main problem tackled by the paper, is the major bottleneck in self-supervised learning. The experimental results presented in Table 1 and Table 2 are okay, but not pushing the boundary of self-supervised learning. - While Table 3 and Table 5 showed that VICReg is more suitable for using heterogeneous encoders, the necessity of heterogeneous encoders is not demonstrated very clearly, because the setting is not practical. The performances reported in Table 3 are far from the state of the art, and in Table 5, the shared-weight setting performs best. A more natural setting, such as representation learning for multi-modal data as in VSE [1], should be investigated. - The contributions of the variance term and the covariance term are not well analyzed. Table 4 is supposed to show the contributions, but it lacks a CovReg column, so the conclusion from the table is somewhat vague. Additional effort to illustrate the effect of the variance and the covariance terms would make the paper more persuasive. - The difference from Barlow Twins needs to be elaborated in detail. Otherwise, the proposed method is perceived as a minor improvement over Barlow Twins. I found that the definition of the covariance term is meaningfully different from that of Barlow Twins, but it is not emphasized. [1] Faghri, Fartash, et al. "VSE++: Improving visual-semantic embeddings with hard negatives." arXiv preprint arXiv:1707.05612 (2017). I vote to reject because the contributions of the paper are not well demonstrated.
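To make the three-term objective discussed across these reviews concrete, here is a minimal sketch of an invariance + variance-hinge + covariance loss as I understand it from the descriptions above; the margin of 1.0 and the term weights are assumptions rather than values checked against the paper.

```python
# Minimal sketch of the three-term objective (not the authors' code).
import torch
import torch.nn.functional as F

def vicreg_style_loss(z1, z2, sim_w=25.0, var_w=25.0, cov_w=1.0, eps=1e-4):
    n, d = z1.shape
    # Invariance: embeddings of two views of the same image should match.
    inv = F.mse_loss(z1, z2)

    # Variance: hinge on the per-dimension standard deviation, keeping each
    # dimension's std above a margin so the embedding cannot collapse.
    std1 = torch.sqrt(z1.var(dim=0) + eps)
    std2 = torch.sqrt(z2.var(dim=0) + eps)
    var = torch.relu(1.0 - std1).mean() + torch.relu(1.0 - std2).mean()

    # Covariance: push off-diagonal entries of the covariance matrix to zero
    # so that dimensions do not co-adapt.
    def off_diag_cov(z):
        zc = z - z.mean(dim=0)
        cov = (zc.T @ zc) / (n - 1)
        off = cov - torch.diag(torch.diag(cov))
        return (off ** 2).sum() / d

    return sim_w * inv + var_w * var + cov_w * (off_diag_cov(z1) + off_diag_cov(z2))

# Toy usage with random embeddings standing in for projections of two views.
z1, z2 = torch.randn(256, 128), torch.randn(256, 128)
print(vicreg_style_loss(z1, z2).item())
```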
This paper presents a self-supervised learning method for the multi-modal setting where each modality has its own feature extraction mapping, and i) the extracted features shall be close for paired data, ii) in the feature space each view has close-to-diagonal covariance, while iii) the scale of each feature dimension is constrained away from zero to avoid trivial features. The presentation is clear and the reviewers do not have major confusion about the methodology. There have been some discussions between the authors and reviewers, and most questions on the empirical study have been addressed by the authors with additional experiments. The remaining concern is on the novelty (difference from prior SSL methods, especially Barlow Twins) and significance. While it is relatively straightforward to extend methods like Barlow Twins to the multi-modal setting, I do see the value of empirically demonstrating the effectiveness of an alternative loss to the currently pervasive contrastive learning paradigm, and hence the paper is worth discussing in my opinion. In the end, the method resembles classical multi-modal methods like canonical correlation analysis, in terms of the objective (matching paired data in latent space) and constraints (uncorrelated features in each view, and a unit-scale constraint for each feature dimension); such connections should be discussed.
This paper proposes a method to scalably compute Wasserstein-2 barycenters given samples from input measures. In general, the authors also allow for continuous measure settings. Inspired by Li et al. (2020), the paper uses a potential-based approach and recovers the barycenter by using gradients of the potentials as pushforward maps. In general, I feel this paper is well-written and provides a fast solution to a meaningful problem, thereby supporting the claim of novelty. The theoretical developments in the paper are reasonable and the experiments carried out are quite decent, both in simulation and real-data settings. The only point that bothers me is the approximation used. It would be great if the authors could give an extensive and detailed understanding of settings where the upper bound in Eq.(10) in the main text is small, thereby leading to a good approximation. <doc-sep>This work introduces a new Wasserstein-2 barycenter computation method. The authors first derive the dual formulation of the Wasserstein-2 barycenter problem, and then parametrize the convex potentials by ICNNs. The congruent and conjugacy conditions are enforced by respective regularization terms. They then show that the algorithm can find a good barycenter if the objective function is properly minimized. Pros: 1. The algorithm does not introduce bias. 2. The algorithm does not require minimax, which is efficient. 3. The empirical performance is much better than existing methods, probably due to the above two reasons. Areas to improve: 1. It is good that the empirical analysis includes how the performance changes w.r.t. D. It would be better if there were a similar analysis for N. Furthermore, since 2N ICNNs need to be trained, it would be better if the training time were also reported, so that we can have a more comprehensive understanding of the method. Is there a setting in which a discrete method can be faster than the proposed method while achieving comparable approximation error (say, large N for 3D applications)? 2. Since the congruent and conjugacy conditions are enforced by regularizations, they are not guaranteed to be satisfied. Therefore, it would be better if there were an experiment showing how well the conditions are satisfied. 3. The first section of related work should also briefly include https://arxiv.org/abs/1605.08527 and https://arxiv.org/abs/1905.00158. After rebuttal: The additional experimental results provided in the rebuttal stage suggest the efficiency of the proposed method, as well as that the congruent and conjugacy conditions are approximately satisfied. I therefore believe this paper should be accepted.<doc-sep>Summary: The paper considers the Wasserstein barycenter problem in the continuous setting. In particular, the authors propose an algorithm to compute the Wasserstein-2 barycenter when only samples from the marginals are accessible. Some theoretical analysis of this method is presented. Several numerical examples are carried out to compare this method with two other recently proposed methods. Reasons for score: The proposed algorithm utilizes an interesting regularization of the dual formulation of the Wasserstein-2 barycenter, resulting in a single minimization problem instead of a min-max problem. This algorithm is properly justified by theoretical results as well as numerical experiments. Pros: 1. The paper provides theoretical results on the consistency of the proposed algorithm. 2. The experiments are overall good and clear. 3. The paper is well-written and easy to follow. Cons: 1.
High-dimensional examples other than the simple Gaussian setting are missing. 2. There is no analysis of the computational complexity of the proposed algorithm. Also, the training expense is not reported. 3. The double gradient in the second regularization term could be expensive to evaluate. Questions: 1. In both Theorems 4.1 and 4.2, the smoothness of the potentials is crucial. If the smoothness B is too large, then the bound presented is essentially useless. Please comment on this.<doc-sep>**Summary** The paper derives the barycenter mapping problem as an optimization over *congruent* convex functions---each convex potential corresponding to a component distribution. Congruency is a property of the set of optimal potential functions that ties them together. However, this optimization is quite challenging and so the paper derives a principled objective function that includes two regularization terms. The first regularization term encourages congruency of the set of convex functions and can be seen as a variational bound on an ideal congruency regularization. The second regularization term encourages the pairs of convex functions to be conjugate. The paper proves that the optimal solution of this objective is the true potentials and thus no bias is introduced. The proposed approach is demonstrated on the tasks of generative modeling (2-256 dimensions), posterior inference, and color palette barycenters (3D). **Strengths:** - Nice problem formulation and setup with respect to prior methods. - The derivation of the final objective function is clearly laid out and well-motivated. Each problem that is encountered is explained and then a solution or approximation is introduced. - The theoretical results give appropriate grounding for the approach. - The empirical results outperform prior potential-based methods for barycenters. **Weaknesses:** - It is unclear if this method can scale in terms of samples and dimensions. What is the computational cost of estimating these input convex neural networks? Can you provide (approximate) wall-clock times for the various methods and dimensionalities? What are the key computational bottlenecks, either memory-wise or computation-wise? - The experiments seem small-scale, with a max dimension of 256. Barycenters for high-dimensional real-world data (e.g., even MNIST (784D)) or some other high-dimensional real-world dataset would improve the paper. - The paper lacks comparison to methods that do not recover 2N potential functions. What are the closest methods for barycenters that do not use potential functions? For example, could the algorithms be compared to discretized barycenter algorithms to show the breakdown in higher dimensions? **Other comments or questions** - Is $D$ in equation 5 supposed to be $N$? - Some typos above Eqn. 12. **Update after author response** I appreciated the authors' response to the scalability and raw computation times. Thank you also for the additional comparison to a non-potential-function method. This will be a good comparison. My main concerns were answered, and I still think this is a good paper.
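For readers unfamiliar with the potential-based construction discussed in these reviews, the sketch below shows a minimal (untrained) input-convex network whose input gradient serves as the pushforward map: barycenter samples would be obtained by mapping samples from each input measure through the gradient of the corresponding potential. The architecture details are generic ICNN choices rather than the authors' exact design, and the congruence/conjugacy regularizers are omitted.

```python
# Minimal untrained ICNN whose input gradient acts as a (Monge) pushforward map.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICNN(nn.Module):
    def __init__(self, dim=2, hid=64):
        super().__init__()
        self.Wx0 = nn.Linear(dim, hid)
        self.Wx1 = nn.Linear(dim, hid)
        self.Wz1 = nn.Parameter(torch.rand(hid, hid) * 0.1)   # clamped nonnegative below
        self.out_z = nn.Parameter(torch.rand(hid) * 0.1)
        self.out_x = nn.Linear(dim, 1)

    def forward(self, x):
        # convexity in x: nonnegative combinations of convex, nondecreasing activations
        z = F.softplus(self.Wx0(x))
        z = F.softplus(self.Wx1(x) + z @ self.Wz1.clamp(min=0.0).T)
        return (z @ self.out_z.clamp(min=0.0))[:, None] + self.out_x(x)

def pushforward(potential, x):
    x = x.clone().requires_grad_(True)
    (grad,) = torch.autograd.grad(potential(x).sum(), x)
    return grad                         # T(x) = grad f(x)

f_k = ICNN()
samples = torch.randn(128, 2)           # samples from one input measure
barycenter_samples = pushforward(f_k, samples)
print(barycenter_samples.shape)
```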
The authors study the 2-Wasserstein barycenter problem between measures. The authors propose a novel formulation that leverages a condition (congruence) that the optimal transport (Monge) maps, here parameterized as potentials, must obey at optimality. They introduce various regularizers to encourage that property. The idea is demonstrated on convincing synthetic experiments and on a simple color transfer problem. Although the experiments are a bit limited, I do believe, and follow here the opinion of all reviewers, that there is novelty in this approach, and that this paper is a worthy addition to the recent line of work trying to leverage ICNNs/Brenier's theorem to solve OT problems.
This paper explores the task of finding discrete adversarial examples for (current) dialog models in a post hoc manner (i.e., once models are trained). In particular, the authors propose an optimization procedure for crafting inputs (utterances) that trigger trained dialog models to respond in an egregious manner. This line of research is interesting as it relates to real-world problems that our models face before they can be safely deployed. The paper is easy to read, nicely written, and the proposed optimization method seems reasonable. The study also seems clear and the results are fairly robust across three datasets. It was also interesting to study datasets which, a priori, seem like they would not contain much egregious content (e.g., Ubuntu "help desk" conversations). My main question is that after reading the paper, I'm not sure that one has an answer to the question that the authors set out to answer. In particular, are our current seq2seq models for dialogs prone to generating egregious responses? On one hand, it seems like models can assign higher-than-average probability to egregious responses. On the other, it is unclear what this means. For example, it seems like the probability that such a model outputs such an answer in a conversation might still be very small. Quantifying this would be worthwhile. Further, one would imagine that a complete dialog system pipeline would contain a collection of different models including a seq2seq model but also others. In that context, is it clear that it's the role of the seq2seq model to limit egregious responses? A related aspect is that it would have been interesting to explore a bit more the reasons that cause the generation of such egregious responses. It is unclear how representative the detailed example ("I will kill you" in Section 5.3) is. Are other examples using words in other contexts? Also, it seems reasonable that if one wants to avoid such answers, countermeasures (e.g., in designing the loss or in adding common-sense knowledge) have to be considered. Other comments: - I am not sure of the value of Section 3. In particular, it seems like the presentation of the paper would be as effective if this section was summarized in a short paragraph (and perhaps detailed in an appendix). - Section 3.1, "continuous relaxation of the input embedding": what does that mean, since the embedding already lives in continuous space? - I understand that your study only considers (when optimizing for egregious responses) dialogs that are 1-turn long. I wonder if you could increase hit rates by crafting multiple inputs at once. - In Section 4.3, you fix G (size of the word search space) to 100. Have you tried different values? Do you know if larger Gs could have an impact on the reported hit metrics? - In Table 3, results from the first column (normal, o-greedy) seem interesting. Wouldn't one expect that the model can actually generate (almost) all normal responses? Your results indicate that for Ubuntu, models can only generate between 65% and 82% of actual (test) responses. Do you know what in the Ubuntu corpus leads to such a result? - In Section 5.3, you seem to say that the lack of diversity of greedy-decoded sentences is related to the low performance of the "o-greedy" metric. Could this result simply be explained by the model being unlikely to generate sentences that it has never seen before?
You could try changing the temperature of the decoding distribution; that should improve diversity, and you could then check whether or not that also increases the hit rate of the o-greedy metric. - Perhaps tailoring the mal lists to each specific dataset would make sense (I understand that there are already some differences between the mal lists of the different datasets, but perhaps building the lists with a particular dataset in mind would yield "better" results). <doc-sep>Main contribution: devising and evaluating an algorithm to find inputs that trigger arbitrary "egregious" outputs ("I will kill you") in vanilla sequence-to-sequence models, as a white-box attack on NLG models. Clarity: The paper is overall clear. I found some of the appendices (esp. B and C) to be important for understanding the paper and believe these should be in the main paper. Moving parts of Appendix A into the main text would also add to the clarity. Originality: The work looks original. It is an extension of previous attacks on seq2seq models, such as the targeted-keyword attack from Cheng et al. (2018), in which the model is made to produce a keyword chosen by the attacker. Significance of contribution: The lack of control over the outputs of seq2seq models is a major roadblock towards their broader adoption. The authors propose two algorithms for trying to find inputs creating given outputs: a simple one relying on continuous optimization, which is shown not to work (breaking when projecting back into words), and another relying on discrete optimization. The authors found that the task is hard when using greedy decoding, but often doable using sampled decoding (note that in this case, the model will generate a different output every time). My take-aways are that the task is hard and the results highlight that vanilla seq2seq models are pretty hard to manipulate; however, it is interesting to see that with sampling, models may sometimes be tricked into producing really bad outputs. This white-box attack is applicable to any chatbot. As the authors noted, an egregious output for one application ("go to hell" for customer service) may not be egregious for another one ("go to hell" in MT). Overall, the authors ask an interesting question: how easy is it to craft an input for a seq2seq model that will make it produce a "very bad" output. The work is novel, several algorithms are introduced to try to solve the problem, and a comprehensive analysis of the results is presented. The attack is still of limited practicality, but this paper feels like a nice step towards more natural adversarial attacks in NLG. One last thing: the title seems a bit misleading; the work is not about "detecting" egregious outputs.<doc-sep># Positive aspects of this submission - This submission explores a very interesting problem that is often overlooked in sequence-to-sequence model research. - The methodology in Sections 4 and 5 is very thorough and useful. - Good comparison of last-h with attention representations, which gives good insight about the robustness of each architecture against adversarial attacks. # Criticism - In Section 3, even if the "l1 + projection" experiments seem to show that generating egregious outputs with greedy decoding is very unlikely, they don't definitively prove so. It could be that your discrete optimization algorithm is suboptimal, especially given that other works on adversarial attacks for seq2seq models use different methods such as gradient regularization (Cheng et al. 2018).
Similarly, the brute-force results on a simplified task in Appendix B are useful, but it's hard to tell whether the conclusions of this experiment can be extrapolated to the original dialog task. Given that you also study "o-greedy-hit" in more detail with a different algorithm in Sections 4 and 5, I would consider removing Section 3 or moving it to the Appendix for consistency.
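For concreteness, the attack setting debated in these reviews can be illustrated with a generic hill-climbing search over word substitutions that maximizes the model's score for a target "egregious" response. This is not the paper's exact algorithm; the scoring interface and the dummy model below are hypothetical stand-ins.

```python
# Generic word-substitution hill climbing against a hypothetical scorer.
import random

def greedy_attack(model, vocab, target, input_len=10, candidates_per_slot=100, iters=5):
    tokens = [random.choice(vocab) for _ in range(input_len)]
    best = model.target_logprob(tokens, target)
    for _ in range(iters):
        for pos in range(input_len):
            # try a restricted candidate set (analogous to the "G" search space)
            for word in random.sample(vocab, min(candidates_per_slot, len(vocab))):
                trial = tokens[:pos] + [word] + tokens[pos + 1:]
                score = model.target_logprob(trial, target)
                if score > best:
                    tokens, best = trial, score
    return tokens, best

class DummyScorer:
    """Stand-in for a seq2seq model: here it just counts target words in the input."""
    def target_logprob(self, tokens, target):
        return sum(t in target for t in tokens)

vocab = ["hello", "help", "kill", "you", "i", "will", "thanks", "bye"]
print(greedy_attack(DummyScorer(), vocab, target=["i", "will", "kill", "you"]))
```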
This work examines how to craft adversarial examples that will lead trained seq2seq models to generate undesired outputs (here defined as assigning higher-than-average probability to undesired outputs). Making a model safe for deployment is an important unsolved problem and this work is looking at it from an interesting angle, and all reviewers agree that the paper is clear, well-presented, and offering useful observations. While the paper does not provide ways to fix the problem of egregious outputs being probable, as pointed out by reviewers, it is still a valuable study of the behavior of trained models and an interesting way to "probe" them, which would likely be of high interest to many people at ICLR.
This paper proposes a new machine learning method for classification called Fuzzy Learning Machine. The paper draws on concepts from cognitive science to derive a method based on fuzzy similarity relations of examples on the input space. The training method learns a similarity function and selects a set of exemplars from each category; during prediction, the similarity of a new example to the exemplars of each category is computed, and the example is assigned to the category with the most similar exemplars. Strengths: The method proposed is interesting and brings up a number of novelty elements. The method seems to improve significantly over existing classification methods on a large number of data sets. Weaknesses: The paper makes a lot of assertions about human cognition that are questionable. For instance: - "In essence, the process of classification is the process of concept cognition" - "Concept contains our knowledge about the world, and we use concept to understand and organize the world. Without it, there will be no human intelligence at all." - "Similarity (...) plays a crucial role in the process of human classification" - "Concept is represented based on similarity for children, which is also a basic choice for adults" Also, sometimes it is difficult to understand if the paper makes assertions about its own definitions or about human cognition, as in "the intrinsic property of concept is just the fuzziness rather than the randomness". I do not see a problem in using assumptions based on cognitive science for building models. In fact, most models in AI do that somehow. However, care should be taken not to state these assumptions in the paper as settled truths. I would rather see the paper provide in advance a list of the theories, hypotheses, and assumptions considered, along with references for them, and then describe the proposed model using them as a basis. Finally, without details on how well the other methods used for comparison were tuned, it is hard to know if the comparison is fair. I do not see any limitations or potential negative social impact of this work. <doc-sep>In the paper "Fuzzy Learning Machine" the authors propose an approach to learn a classifier via a neural network forming a fuzzy equivalence relation. Deriving the approach from fuzzy set theory, the authors find their approach to perform particularly well across a number of datasets when comparing it to various other classifiers. ## Weaknesses The idea of employing fuzzy set theory for classification tasks is not new at all, and I am wondering what the methodological novelty of the approach actually is. In general, the idea of comparing instances / data points according to their similarity is the basic idea behind learners using kernel functions, where the shape of a concept is specified via the respective kernel. However, there is also a relatively large corpus of literature on classifiers leveraging fuzzy set theory, even working exactly with neural networks and the idea of fuzzy equivalence relations. Still, this related work is neither discussed nor cited in the paper. See for example the following references: Acharya, U. Rajendra, et al. "Classification of heart rate data using artificial neural network and fuzzy equivalence relation." Pattern recognition 36.1 (2003): 61-68. Moser, Bernhard. "On Representing and Generating Kernels by Fuzzy Equivalence Relations." Journal of machine learning research 7.12 (2006). Meier, Andreas, and Nicolas Werro. "A fuzzy classification model for online customers."
Informatica 31.2 (2007). Senge, Robin, and Eyke Hüllermeier. "Top-down induction of fuzzy pattern trees." IEEE Transactions on Fuzzy Systems 19.2 (2010): 241-252. Kuncheva, Ludmila. Fuzzy classifier design. Vol. 49. Springer Science & Business Media, 2000. Sun, C-T., and J-S. Jang. "A neuro-fuzzy classifier and its applications." [Proceedings 1993] Second IEEE International Conference on Fuzzy Systems. IEEE, 1993. Uebele, Volkmar, Shigeo Abe, and Ming-Shong Lan. "A neural-network-based fuzzy classifier." IEEE Transactions on Systems, Man, and Cybernetics 25.2 (1995): 353-361. It is unclear to me why this part of the literature is widely ignored by the authors, when they seem to come from that area. Overall, the paper has a good structure but could benefit from proofreading. Especially, a vs. an is a frequent problem in the text, e.g., "a input space", "a output space", "a FER". Then, "classifier", "concept" and "classification process" are used without an article. Some parts also seem overly complicated to me. For example, consider the proof that a non-linear model is needed to tackle the derived problem where the instances are concatenated. I do not know whether yet another proof of the fact that an XOR problem cannot be tackled via a linear model is really needed. This could have been simplified. Furthermore, I find that the example given in Figure 1 is not very well chosen. The concepts cat and dog have crisp biological borders, and a human not being able to distinguish the two categories is due to epistemic uncertainty rather than fuzziness of the concept borders. Personally, I would also argue that none of the three cats is more or less representative of the category or concept "cat". A claim that was made by the authors is that their approach indeed learns "concepts" instead of just assignments. However, there was no proof given in the paper that this is really the case. In particular, there is no presentation or demonstration of any particular concepts that were induced by fitting their model. I would even argue that from Figure 3 it rather becomes clear that it is not really learning any concepts, as the FSR matrix shows more or less the same color for every cell off the main diagonal. If it were learning real concepts, I would also expect that a 0 would receive a lower membership score for the concept 1 than a 7, for example. Better overall performance is no proof of the claim that the method learns concepts. Another branch of classification literature also tries to capture concepts for classification purposes: analogy learning. Bayoudh, Sabri, Laurent Miclet, and Arnaud Delhay. "Learning by Analogy: A Classification Rule for Binary and Nominal Data." IJCAI. 2007. ## Strengths Since most people in the machine learning community will not be that familiar with fuzzy set theory, I liked it very much that all fundamental definitions were provided by the authors in the paper or supplementary material to make it self-sufficient. According to the experiments, the proposed method seems to perform very strongly compared to a set of almost 200 classifiers. However, the way the rankings were calculated is a little odd. Why do 65 learners sharing rank 1 with 100% accuracy receive a rank of 65? This will most likely also affect the average rank statistics compared for the ten classifiers later on. I would rather expect that performances with a tie receive the same higher rank, leaving the next n-1 spots in the ranking free.
Limitations, except for the runtime complexity of computing the FER matrix, are not really discussed. When does the approach fail, and why does it fail? <doc-sep>This paper proposes a new learning machine for the general classification problem, which is one of the most important problems in ML/AI. The new learning machine is based on concept cognition theory in cognitive science and fuzzy set theory in mathematics. So its working mechanism is highly explainable and has a solid theoretical guarantee. Meanwhile, a large number of systematic experimental results demonstrate the superiority of the proposed method. The manuscript focuses on the classification problem, which is one of the most important problems in ML/AI. The manuscript re-examines classification from the perspective of concept cognition and reveals the essence of classification. The manuscript also provides a new view for interpreting the structure of the classification problem by establishing the equivalence between the binary classification problem and the general classification problem, employing the equivalence relation from set theory. Furthermore, the manuscript realizes that the fuzziness of concepts is the main source of uncertainty in classification and then employs fuzzy set theory to model this kind of uncertainty. Based on the above conclusions, the classification problem is modeled as a fuzzy equivalence relation problem, which well preserves the nature and intrinsic fuzziness of the classification problem. What's more, the manuscript designs a clever model and loss function to approximate the fuzzy equivalence relation effectively and efficiently. Therefore, in this manuscript, the main proposals have a theoretical basis in cognitive science, and the key conclusions are proved mathematically. Extensive experiments (comparisons with 179 methods on 121 data sets) verify the rationality and superiority of the proposed method. Overall, the manuscript is clearly written and well organized with good clarity. To enhance the readability and completeness, it is suggested that some content in the appendix be moved to the corresponding part of the main manuscript. For example, the analysis of the working mechanism of the existing classifiers should be moved to the Introduction of the main manuscript. However, in the current manuscript, this content is placed in Appendix A.2. N/A
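For concreteness, the prediction rule described in the first review (a learned pairwise similarity evaluated against per-category exemplars) can be sketched roughly as follows; the encoder architecture, the number of exemplars, and the mean-similarity aggregation are my assumptions, not details taken from the paper.

```python
# Sketch of exemplar-based classification with a learned fuzzy similarity score.
import torch
import torch.nn as nn

class SimilarityNet(nn.Module):
    def __init__(self, in_dim=16, hid=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU(), nn.Linear(hid, hid))
        self.head = nn.Sequential(nn.Linear(2 * hid, hid), nn.ReLU(), nn.Linear(hid, 1))

    def forward(self, a, b):
        ha, hb = self.encoder(a), self.encoder(b)
        # symmetrized score squashed into [0, 1], read as a fuzzy similarity degree
        s = self.head(torch.cat([ha, hb], -1)) + self.head(torch.cat([hb, ha], -1))
        return torch.sigmoid(s).squeeze(-1)

def predict(model, query, exemplars):
    # exemplars: dict {class_label: tensor of shape (n_exemplars, in_dim)}
    scores = {}
    for label, ex in exemplars.items():
        q = query.expand(ex.shape[0], -1)
        scores[label] = model(q, ex).mean().item()   # average similarity to the class exemplars
    return max(scores, key=scores.get)

model = SimilarityNet()
exemplars = {0: torch.randn(5, 16), 1: torch.randn(5, 16) + 2.0}
print(predict(model, torch.randn(1, 16), exemplars))
```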
The paper proposes an approach for the design of neural networks for classification based on fuzzy theory, and a specific implementation is presented and experimentally assessed. Arguments from cognition to justify the proposed approach are also used, although at the level of inspiration. The lack of reference to fuzzy-systems-based neural network models in the relevant literature in the initial version of the paper has been addressed in the revised version, and the authors' rebuttal seems to have clarified most of the issues raised by reviewers. The experimental assessment seems to be robust. Personally, I find the jargon used in the paper a bit unfit for NeurIPS standards; however, I do not think this is a valid reason for rejecting a paper for which no serious drawback has emerged. In any case, I think it is good for NeurIPS to diversify the range of approaches and methodologies covered by the scientific program.
This paper shows that significant speed-up gains can be achieved by using KL-regularization with information asymmetry in sparse-reward settings. Different from previous works, the policy and default policy are learned simultaneously. Furthermore, it demonstrates that the default policy can be used to perform transfer learning. Pros: - Overall the paper is well-written and the organization is easy to follow. The approach is novel and most relevant works are compared and contrasted. The intuitions provided nicely complement the concepts, and the experiments are thorough. Cons: - The idea of separating the policy and default policy seems similar to having high- and low-level controllers (HLC and LLC) in hierarchical control, where the LLC takes proprioceptive observations as input and the HLC handles task-specific goals. In contrast, one advantage of the proposed method in this work is that the training is end-to-end. I would have liked to see a comparison between the proposed method and hierarchical control. - As mentioned, the proposed method does not offer significant speed-up in dense-reward settings. Considering that most of the tasks experimented with in the paper can leverage dense shaping to achieve speed-up over sparse rewards, it'd be nice to have experiments showing that for some environments the proposed method can outperform baseline methods even in dense-reward settings. <doc-sep>This is a very interesting piece of work. We know from the cognitive science literature that there are two distinct modes of decision making: habit-based and top-down (goal-directed) control. The paper proposes to use this intuition via an information-theoretic objective such that the agent follows a "default" policy on average and gets penalized for deviating from its "default" behaviour; the idea is to minimize this cost on average across states. The paper is very well written. I think this paper could have a good impact in coming up with new learning algorithms which are inspired by the cognitive science literature as well as mathematically grounded. But I don't think the paper in its current form is suitable for publication. There are several reasons, but most important: 1) Most of the experiments in this paper use on the order of 10^9 or even 10^10 steps. It is practically impossible for anyone in academia to have such compute. Now, that said, I do think this paper is pretty interesting. Hence, is it possible to construct a toy problem which has similar characteristics, and then show similar results using, say, 10^6 or 10^7 steps? I think it would be easy to construct a 2D POMDP maze navigation environment and test for similar results. This would improve the paper and could provide a baseline that people can compare to in the future. 2) It becomes more important to compare to stronger baselines like maximum entropy RL (e.g., Soft Actor-Critic), and to spend a good amount of time getting these baselines right on these new environments. <doc-sep> -- Originality -- This paper studies how to use KL-regularization with information asymmetry to speed up and improve reinforcement learning (RL). Compared with existing work, the major novelty in the proposed algorithm is that it uses a default policy learned from data, rather than a fixed default policy. Moreover, the proposed algorithm also limits the amount of information the default policy receives, i.e., there is an "information asymmetry" between the agent policy and the default policy.
In many applications, the default policy is purposely chosen to be "goal agnostic" and hence enables transfer learning. To the best of my knowledge, this "informationally asymmetric" KL-regularization approach is novel. -- Clarity -- The paper is well written in general and is easy to follow. -- Significance -- I think the idea of regularizing RL via an informationally asymmetric default policy is interesting. It might be an efficient way to do transfer learning (generalization) in some RL applications. The paper also presents extensive and rigorous experiments. Some experimental results are thought-provoking. -- Pros and Cons Pros: 1) The idea of regularizing RL via an informationally asymmetric default policy is interesting. To the best of my knowledge, this "informationally asymmetric" KL-regularization approach is novel. 2) The experimental results are extensive, rigorous, and thought-provoking. Cons: 1) My understanding is that this "informationally asymmetric" KL-regularization approach is a general approach and can be combined with many policy learning algorithms. It is not completely clear to me why the authors chose to combine it with an actor-critic approach (see Algorithm 1). Why not combine it with other policy learning algorithms? Please explain. 2) This paper does not have any theoretical results. I fully understand that it is highly non-trivial or even impossible to analyze the proposed algorithm in the general case. However, I recommend that the authors analyze (possibly a variant of) the proposed algorithm in a simplified setting (e.g., a network with only one layer, or even a linear one) to further strengthen the results. 3) The experimental results of this paper are interesting, but I think the authors can do a better job of intuitively explaining them. For instance, the experimental results show that when the reward is "dense shaping", the proposed method and the baseline perform similarly. Might the authors provide an intuitive explanation for this observation? I recommend that the authors try to provide intuitive explanations for all such interesting observations in the paper.
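As a concrete illustration of the regularized objective described in these reviews: the agent policy sees the full observation (including the goal), the default policy sees only a goal-agnostic part, and the policy loss is augmented with an alpha-weighted KL term to the default policy, through which the default policy is also trained. The network shapes, the value of alpha, and the plain policy-gradient surrogate below are my assumptions, not the authors' implementation.

```python
# Toy sketch of an information-asymmetric KL-regularized policy objective.
import torch
import torch.nn as nn
import torch.nn.functional as F

n_actions, obs_dim, goal_dim, alpha = 4, 8, 3, 0.1
policy = nn.Linear(obs_dim + goal_dim, n_actions)       # sees observation + goal
default_policy = nn.Linear(obs_dim, n_actions)          # information asymmetry: no goal input

def kl_regularized_loss(obs, goal, advantages, actions):
    logp = F.log_softmax(policy(torch.cat([obs, goal], dim=-1)), dim=-1)
    default_logp = F.log_softmax(default_policy(obs), dim=-1)
    # KL(pi || pi_0) per state: penalizes the agent for deviating from the
    # default policy, while gradients through pi_0 distill average behaviour.
    kl = (logp.exp() * (logp - default_logp)).sum(-1)
    pg = -(advantages * logp.gather(-1, actions[:, None]).squeeze(-1)).mean()
    return pg + alpha * kl.mean()

obs, goal = torch.randn(32, obs_dim), torch.randn(32, goal_dim)
adv, acts = torch.randn(32), torch.randint(0, n_actions, (32,))
kl_regularized_loss(obs, goal, adv, acts).backward()
```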
Strengths The paper introduces a promising and novel idea, i.e., regularizing RL via an informationally asymmetric default policy. The paper is well written. It has solid and extensive experimental results. Weaknesses There is a lack of benefit on dense-reward problems, which the authors acknowledge as a limitation. There are also some similarities to HRL approaches. A lack of theoretical results is also suggested. To be fair, the paper makes a number of connections with various bits of theory, although it perhaps does not directly result in any new theoretical analysis. One reviewer's concerns are the need for extensive compute and the lack of comparisons to stronger (maxent) baselines. The authors provide a convincing reply on these issues. Points of Contention While the scores are non-uniform (7,7,5), the most critical review, R1(5), is in fact quite positive on many aspects of the paper, i.e., "this paper would have good impact in coming up with new learning algorithms which are inspired from cognitive science literature as well as mathematically grounded." The specific critiques of R1 were covered in detail by the authors. Overall The paper presents a novel and fairly intuitive idea, with very solid experimental results. While the method has theoretical grounding, the results themselves are more experimental than theoretical. The reviewers are largely enthused about the paper. The AC recommends acceptance as a poster.
This paper analyzes the global minima of deep linear networks with weight decay. Under the assumption of a linear architecture, $l_2$ regularization and population risk, the paper takes advantage of the symmetry and invariance in the network and derives analytical expressions for the minimum points. Depending on the regularization strength, for two-layer networks, zero is either the global minimum or a saddle point; for deeper networks, zero is always a local minimum and can be global (see Figure 1). The paper also tries to connect these theoretical results with some phenomena in deep nonlinear networks. Strengths: 1. The paper proposes a set of assumptions under which the analytical expression of the minimum points can be derived, and the corresponding properties can be analyzed. 2. I think the most interesting contribution of this paper is to point out that weight decay, i.e., $L_2$ regularization, may introduce a local minimum at zero for deep networks. It is not surprising since the regularization term in Eqn (1) is quadratic while $L_0$ has higher order. It is good to formulate the phenomenon as a rigorous theory under the assumptions. 3. The presentation of the paper is pretty good. The settings are clearly stated and the results are supported by rigorous proofs. There are also detailed comments and discussions about the meaning and possible implications of the theoretical results. Weaknesses: My main concern is that the assumptions in this paper may oversimplify the problem. The proofs are straightforward (though I believe Theorem 2 is not trivial) and heavily depend on the symmetry (thanks to the assumptions), which does not hold in general. Under weaker assumptions, I guess we may still be able to prove that zero is a local minimum, since the regularization is quadratic and the square loss is of higher order, but the quantitative results in this paper may not extend. Since the implications in Section 5 are all under weaker assumptions, the theoretical results may not support the discussions here strongly enough. In addition, I guess a ResNet architecture may avoid the local minimum at zero since the square loss is then no longer of higher order. Please see the "Weaknesses" Section above. <doc-sep>This work studies the population loss landscape of stochastic (e.g., in the sense of dropout) deep linear neural networks under 2-norm weight regularization. The key contribution of this work is the derivation of analytical expressions for the global minima of the aforementioned loss landscape, at least up to a scalar quantity. The implications of this result for training both linear and non-linear neural networks are discussed. In particular, this result illustrates how weight decay and depth can lead to more challenging optimization problems, as well as the importance of the role of network initialization in avoiding basins of attraction around bad minima. Originality: this paper continues the line of work analyzing linear neural networks. I am not a specialist concerning the study of linear networks, but the results appear, at least to the best of my knowledge, novel and interesting. Quality and clarity: on the whole I think the paper is well organized, well written and clear. A few very minor suggestions in regard to the paper's presentation are as follows. - I think upfront you could state your network architecture / forward-pass function more clearly, in terms of matrix-vector products, and give the dimensions of each of your parameter matrices.
- Notation-wise, both scalars and vectors use lower-case characters, which can be a bit confusing; perhaps using bold lower-case characters for vectors might help. - It might be helpful for the reader to restate the statements of lemmas and theorems in the supplementary material so they don't have to flick back and forth. - Line 475 in the supplementary: "...we see that the left hand side \\textit{is} larger..." Significance: I think the extent to which understanding linear networks is important for understanding nonlinear networks is not entirely clear. I still think it is important that we understand deep linear networks regardless, however, and this work seems like a useful contribution. I think the authors are reasonably upfront about the limitations of their work, although I think they could perhaps add some suggested avenues for future work. I can't envisage how this work might have a negative societal impact. <doc-sep>This paper provides a closed-form solution (up to some constant) for the global minima of linear neural networks when trained using the square loss and strictly positive weight decay. This result can be extended to the case when the neurons are stochastic and independent. The formulas for the global minima are directly or potentially related to the weight decay, depth, stochasticity of neurons, and signal strength from the training data. The authors also used the characterizations of the global minima to explain multiple phenomena happening in real neural network training, e.g., deeper networks are harder to optimize. This paper also provided variance analysis in the asymptotic limits of network hyperparameters and did small-scale experiments on synthetic data to validate their theoretical results in non-linear networks. Strengths: - This paper theoretically gives an analytical formula (up to some constant) for the global minima of linear neural networks trained with weight decay, and this formula works for deep linear networks and could be generalized to independent stochastic neurons. These analytical expressions provide opportunities to study the properties of these global minima of deep linear networks in detail. - It is an interesting idea to connect the formula for the global minima of neural networks to various common phenomena in this field, e.g., the collapses in deep learning. - This paper is generally well-written and well-structured. The notations used in this paper are mostly well-defined, and the intuitions and implications of the theoretical results are provided in the main text. These make this paper easy to understand. - The theoretical proofs in this paper appear to be correct, and the related works are adequately cited. Weaknesses: - The theoretical results in this paper are all about linear networks, and the relationship between linear networks and non-linear ones seems somewhat unclear, so most of the conclusions in this paper might not translate directly to non-linear settings. This is my major concern about this paper. The authors claimed on Line 16 that the landscapes of linear networks are believed to well approximate those of non-linear ones, but this claim might be vague and need further explanation. The authors also did experiments with a small two-layer non-linear network on synthetic data, but the scale of this experiment is small, so it is unclear whether this result still holds for more general settings. It would be better if the authors could provide more theoretical or empirical evidence connecting the loss landscape of linear neural networks and those of non-linear ones.
- The characterizations of global minima might not be enough to characterize the training of neural networks, which depends on the properties of the entire loss landscape. For instance, it is possible that the weights of neural networks could diverge and it never reaches a minimum, and it is also possible that the weights converge to a bad local minimum or saddle point. It might be better if the authors could (theoretically or empirically) eliminate these possibilities and show that the network weights will always converge to the points that they characterized, i.e., either the global minima or the bad local minimum at 0. - The proof techniques used in this paper seem to heavily rely on the existence of weight decay at all layers, making it hard to be generalized to other settings. Without weight decay, relationships like equation (13) will break and the characterizations of the local and global minima could become much more complicated. - Some arguments made in this paper might be somewhat vague. For example, in Line 268, it might be unclear what the authors mean by "cannot learn the data". Minor Comments: - The details of the experiments the authors did to produce Figure 2 are missing. These details (e.g., how the data are generated, and what the hyperparameters are) could be important for interpreting the experimental results. - The notation "$v$" on the left-hand side of equation (8) seems undefined. Should it be defined as some term in equation (5)? Typos: - Line 243, "are global and cannot generalize" -> "are global cannot generalize" - Line 269, "two-layer net, and the existence" -> "two-layer net, the existence" ------------------------------------------------- Update after author response: I have read all other reviews and the authors' responses, and I decided to increase my score by 2. There are two main reasons why I increase my score: 1. The authors have added empirical evidence (e.g., ResNet on CIFAR) to further relate the loss landscape of linear neural networks to non-linear ones. 2. The proof framework in this paper can be extended to more general settings with similar results, and the authors have provided theoretical results in more general settings, especially when there is no weight decay. The authors stated the assumptions for their theoretical results in the paper and had many discussions about the implications. It might be better if the authors could discuss more explicitly the limitations of this paper in the implication and conclusion sections. This paper is mostly theoretical and focuses on a fundamental problem in general neural network training, and I do not see any immediate negative societal impact of this work. <doc-sep>This paper studies deep linear neural networks with weight decay and stochastic neurons. The authors show that the analytical global minima of square loss can be found for shallow neural networks (thm1) and deep neural networks (thm2). The analysis has some implications on the role of weight decay and the depth of neural networks. Strengths: I like the setting in this paper, which is clean and simple but can manifest interesting properties of neural networks. The results are very interesting, especially the part where bad minima emerge with weight decay. Weakness: 1) I understand the difficulty of analyzing global minima hence some assumptions are needed, e.g., diagonal A_0 for the exact form of b^* for the shallow neural networks., and single data. But some results are not easy to interpret, e.g., Thm2, Prop 3. 
Maybe the authors can provide more intuition. 2) I have a minor concern that, in the main contributions, point 4 seems irrelevant to this paper. -- What is $v$ in Eq. (8)? Yes.
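As an aside for readers, the point made across these reviews, that weight decay can make the origin a local minimum once depth exceeds two, while at depth two the origin is a saddle or a minimum depending on the regularization strength, can be checked in a toy scalar model (my own illustrative construction, not the paper's setting):

```python
# Toy check (illustrative, not the paper's setting): Hessian at the origin of
#   L(w) = 0.5 * (y - prod_i w_i)**2 + 0.5 * lam * sum_i w_i**2
# for a depth-d chain of scalar weights fitting a single scalar target y.
import numpy as np

def hessian_at_zero(d, y, lam, eps=1e-3):
    def L(w):
        return 0.5 * (y - np.prod(w)) ** 2 + 0.5 * lam * np.sum(w ** 2)
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            def L_shift(da, db):
                w = np.zeros(d)
                w[i] += da
                w[j] += db
                return L(w)
            # Central finite-difference approximation of the (i, j) second derivative.
            H[i, j] = (L_shift(eps, eps) - L_shift(eps, -eps)
                       - L_shift(-eps, eps) + L_shift(-eps, -eps)) / (4 * eps ** 2)
    return H

y = 1.0
for d, lam in [(2, 0.5), (2, 1.5), (3, 0.5)]:
    eigs = np.linalg.eigvalsh(hessian_at_zero(d, y, lam))
    print(f"depth={d}, lam={lam}: Hessian eigenvalues at 0 = {np.round(eigs, 3)}")
# Depth 2: eigenvalues are lam - y and lam + y, so the origin is a saddle when
# lam < |y| and a local minimum when lam > |y|.  Depth >= 3: the data term
# vanishes to second order, so the Hessian is lam * I and the origin is always
# a strict local minimum for any lam > 0.
```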
There is a clear consensus amongst the reviewers that the manuscript advances the theory for linear deep networks to a degree warranting acceptance at NeurIPS. The authors responded well to the issues raised by the reviewers, which resulted in increased support from the reviewers for acceptance. The inclusion of weight decay, stochasticity, and architectures beyond feed-forward networks makes this a valuable addition to the theory of linear deep networks.
This paper presents new methods for inference and sampling for Archimax copulas. Archimax copulas are a family of copulas defined through an Archimedean generator and a stable tail dependence function (stdf). In order to discuss inference and sampling for Archimax copulas, the authors first propose inferential and sampling methods for the Archimedean generator and the stdf. Then, combining these methods, the methods for inference and sampling for Archimax copulas are established. In experiments, it is seen that the proposed inferential methods for the Archimedean generator and the stdf show satisfactory performance. Other experiments are given to illustrate that Archimax copulas outperform, or work as well as, some existing models for a couple of real datasets. *** STRENGTHS *** (a) [Originality] The presented inferential and sampling methods for Archimax copulas seem new. These methods are derived mainly by combining existing methods for Archimedean generators and stdfs. (b) [Quality] The paper seems technically sound. Comprehensive experiments are given to assess the performance of the proposed inferential methods and compare submodels of Archimax copulas with some existing models. (c) [Clarity] The paper is clearly written in general. Section 2, providing the background of the presented theory, would be helpful for readers who are not familiar with copulas. (d) [Significance] Archimax copulas are flexible models which include Archimedean copulas and extreme-value copulas as special cases. Therefore the proposed inferential and sampling methods for Archimax copulas could be useful in practice when flexible modelling is required. *** WEAKNESSES *** (e) [Quality] Apart from Archimax copulas, there exist other flexible families of copulas such as skew-$t$ copulas (see, e.g., Joe [49], Section 3.17.2). The paper does not sufficiently compare the Archimax copulas with those existing copulas. (f) [Significance] I am not sure about the popularity of Archimax copulas and the importance of the related theory this paper presents. (Nonetheless I appreciate the results of experiments which suggest the usefulness of the Archimax copulas.) The authors have addressed the limitations and potential negative societal impacts of their work in Section 5. Depending on the authors' response to my questions (g) and (i), I might claim that the usefulness of Archimax copulas is limited. <doc-sep>The authors propose scalable estimation and sampling procedures for Archimax copulas. On simulated and real data, they demonstrate that Archimax copulas fit using their procedure can model complex data with dependencies between extreme values accurately, in comparison to existing deep generative methods. The paper bridges two important areas of research: copulas and deep generative modeling. It is highly original (the first method of its kind) and technically excellent. It is also potentially very significant in its impact: modeling rare events is critical to managing risk in real-world applications, and relying naively on modern deep generative approaches can potentially be very problematic. The paper is overall quite clear, though there are two areas where I struggled: first, an intuitive explanation as to why Archimax copulas are good for modeling dependencies between extreme events would be helpful to the reader; second, I’d appreciate a concise statement of the complete model up front, explaining how the deep generative model determines the stdf. The authors clearly explain some of the method's important limitations.
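For readers less familiar with the construction referenced in these reviews, a minimal numerical sketch of the Archimax form with textbook choices of generator and stdf may help (the parametric choices below are mine for illustration, not the learned components of the paper):

```python
# Illustrative Archimax copula with textbook components (my choices, not the
# paper's learned generator/stdf): C(u1, u2) = psi( l( psi_inv(u1), psi_inv(u2) ) )
# with a Clayton-type Archimedean generator psi and the Gumbel/logistic stdf l.
import numpy as np

def psi(t, theta=1.0):             # Clayton generator, theta > 0
    return (1.0 + t) ** (-1.0 / theta)

def psi_inv(u, theta=1.0):         # its inverse on (0, 1]
    return u ** (-theta) - 1.0

def stdf_logistic(x1, x2, r=2.0):  # Gumbel/logistic stable tail dependence function, r >= 1
    return (x1 ** r + x2 ** r) ** (1.0 / r)

def archimax_cdf(u1, u2, theta=1.0, r=2.0):
    return psi(stdf_logistic(psi_inv(u1, theta), psi_inv(u2, theta), r), theta)

# Basic sanity checks for these choices:
u = np.linspace(0.05, 0.95, 5)
print(np.allclose(archimax_cdf(u, np.ones_like(u)), u))  # uniform margins: C(u, 1) = u
print(archimax_cdf(0.7, 0.7, r=1.0))                     # r = 1 makes the stdf additive,
                                                         # recovering the plain Clayton copula
```

The flexibility the reviews point to is visible even in this toy form: the stdf shapes the tail dependence while the Archimedean generator shapes the bulk, and the two special cases mentioned in review (a)-(f) are recovered by taking r = 1 (plain Archimedean copula) or psi(t) = exp(-t) (extreme-value copula with stdf l).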
<doc-sep>This paper proposes novel procedures for both inference and sampling of Archimax copulas. This family of copulas is important due to their ability to represent the bulk and tail of distributions simultaneously which can be suitable for healthcare and hydrology real-world data. The authors propose a 'hybrid' approach mixing copulas and neural networks to allow for more flexible estimation. In experiments, the proposed method is compared to SOTA density modeling techniques and the results suggest that their method can extrapolate to tails and scale to higher dimensions. Strengths: - Originality - to the best of my knowledge this is the first work to address flexible density estimation of bulk and tail distribution with Archimax copulas. - Quality - the authors put a lot of effort into including a high level of technical details and experiments in the paper and appendix - Clarity - since copulas are not a straightforward tool in the machine learning community, I appreciate the background overview and related work mentioned throughout the paper and supplementary. - Significance - considering the tails not only bulk of the data is overlooked problem and can be very significant in many real-world applications Weaknesses: - Originality - although the presented methodology is novel, it does still build on existing work regarding Archimax copulas - Quality - I would expect more challenging/motivating experimental results to support the claims of significance and contribution from the introduction - Clarity - A running example or one real data motivating example could improve/clarify why Archimax copulas are an appropriate/necessary tool in critical scenarios. This can help bring the paper closer to the ML community, make it more relevant for readers. - Significance - since copulas are still not widespread at the ML conferences, I feel like additional motivation for such papers is needed, either to showcase superiority on large-scale real-world datasets or find some new tasks where SOTA models fail. I also appreciate the code submission and effort of implementing everything in Python (rare for statistics methods). The authors have addressed the limitations and potential societal negative impact of their work. <doc-sep>The authors propose an efficient inference and sampling schemes for archimax copulae based on learning of the generator and stable tail dependence functions through deep learning techniques. *Originality*. The work is original. To my knowledge, the authors propose a new method to infer and sample from archimax copulas extending the previous work in this area. *Clarity*. The authors go a long way to make sure the paper is accessible by a larger machine learning community. They provide necessary background info on copulae and their use in machine learning. They also provide extensive derivations in the appendix. That said, the material itself is quite dense. I have not found typos. One minor thing is that Figure 1 does not seem to be referenced anywhere in the text. *Quality*. The authors build upon a previously developed theory and methods to derive a scheme to infer archimax copulae via the means of deep learning models which also allows for an easy sampling from. The authors provide an analysis of performance of the proposed method comparing it to other state-of-the-art methods. The details of the experiment setups are given in detail in the supplementary material. The authors provide extensive background/related work review in both the main text and the appendix. 
*Significance*. Inference of multivariate distributions from data is a core statistical/machine learning problem. The authors propose a way to infer multivariate dependencies via Archimax copulae, representations of which are learned via deep learning models. The method also allows for sampling from the learned distribution. The method serves both the bulk and the tails of a distribution. With that said, the proposed method is of major significance for the field. The authors faithfully address limitations and possible negative impact of the proposed method.
The paper proposes a new method for inference and sampling in Archimax copulas. All the reviewers praised the soundness and clarity of the paper, the novelty of the ideas, and the experimental results. Copulas might not be one of the core topics of the NeurIPS community, but the reviewers pointed out that: 1) the authors did a great job of explaining copulas, a valuable tool for modeling extreme events, to the ML community; 2) the method builds a connection between copulas and deep generative modeling, and hence opens new research directions. Hence, they all enthusiastically recommend accepting the paper, and I agree with them. Some of the reviewers [HCqW, 5Yjr] also supported the idea of highlighting the paper (oral or spotlight presentation).
This paper studies square loss in a realizable time-series framework; the main result shows that whenever a trajectory hypercontractivity condition holds, the risk of the least-squares estimator on dependent data matches the iid rate order-wise after a burn-in time. The paper formulates a phenomenon called learning with little mixing and presents several examples where such a phenomenon occurs. This paper gives solid theoretical results on learning with dependent data. It shows that, on a broad class of examples, the LSE applied to a time-series model behaves as if all samples were independent, given enough data. Although I am not familiar with the background of this problem, the results look insightful. On the other hand, I'd also be curious to see if the theory can be tested empirically on simple regression problems. There is no negative societal impact. <doc-sep>The authors study the problem of learning from dependent data over time, with the aim of obtaining empirical risk minimization bounds that do not depend on the mixing time of the process. They consider a time-series framework with martingale difference noise, and prove a general result: under an assumption they introduce called *trajectory hypercontractivity* (and sublinear growth in the dependency matrix), the risk of the least-squares estimator matches the iid rate after a burn-in time (note the burn-in time can depend on the mixing). This is in contrast to naive bounds where the effective sample size is deflated by a factor of the mixing time. The proof relies on using the hypercontractivity to control the lower tail of sums involving the dependent random variables. The authors specialize the result both to non-parametric function classes and those with logarithmic metric entropy. They give several examples where their conditions are satisfied (and which recover or generalize previous results): finite-state Markov chains, bounded function classes for which $L^2,L^{2+\\epsilon}$ norms are equivalent, and infinite-dimensional function classes based on subsets of $\\ell^2(\\mathbb N)$ ellipsoids (e.g., functions of bounded norm in an RKHS). The strength of the paper lies in the fact that it gives a very general result that unites previous results under a general framework, e.g., results on learning linear dynamical systems and finite Markov chains. There is a large degree of quantitative flexibility in the assumption that the authors introduce (trajectory hypercontractivity, which interpolates between boundedness and small-ball behavior). The proofs in the appendix are easy to follow. However, the main body of the paper is technically dense and not easy to digest; the examples are fairly abstract. It would help the exposition significantly to expand on concrete instantiations of the theorem (moving more technical commentary to the appendix as necessary), for example, writing out the theorem for linear dynamical systems as obtained from the general theorem. There are also some limitations to the theorem (see below). Certain known results are not covered by the framework, in particular, learning linear dynamical systems that are marginally stable or which have unbounded noise. Additionally, as the max eigenvalue approaches 1, the necessary burn-in time given by the theorem blows up, whereas known results do not have this dependence. (This stems from reliance on the rate of growth of the dependency matrix: while the asymptotic rates do not depend on the mixing, the burn-in time does.) The authors discuss this in Section 4.3.
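On the earlier suggestion that the theory could be checked empirically on simple regression problems, the following is the kind of toy experiment one might run (my own sketch, not taken from the paper): least squares for a realizable one-parameter model, with covariates coming either from a single slowly mixing AR(1) trajectory or from an iid design with matched variance.

```python
# Toy experiment (my own sketch, not from the paper): least squares for the
# realizable model y_t = a* x_t + noise, with covariates x_t either from a
# single slowly mixing AR(1) trajectory or from an iid design of matched variance.
import numpy as np

rng = np.random.default_rng(0)
a_star, rho, sigma = 0.8, 0.9, 0.1   # true coefficient, AR(1) mixing level, noise scale

def sq_param_error(n, dependent):
    if dependent:
        x = np.zeros(n)
        for t in range(1, n):                           # dependent covariates
            x[t] = rho * x[t - 1] + rng.normal()
    else:
        x = rng.normal(size=n) / np.sqrt(1 - rho ** 2)  # iid, same stationary variance
    y = a_star * x + sigma * rng.normal(size=n)         # realizable model, independent noise
    a_hat = (x @ y) / (x @ x)                           # least-squares estimator
    return (a_hat - a_star) ** 2

for n in [100, 1000, 10000]:
    dep = np.mean([sq_param_error(n, True) for _ in range(200)])
    iid = np.mean([sq_param_error(n, False) for _ in range(200)])
    print(f"n={n}: dependent {dep:.2e}   iid {iid:.2e}")
```

With the covariate variances matched, both errors should decay at the same 1/n rate once the dependent trajectory is past a short burn-in, which is the qualitative behaviour the theory predicts in this easy regime.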
<doc-sep>The paper shows that for mixing systems under an easiness condition, the rate of convergence of the LSE for rather general hypothesis classes has i.i.d.-like performance. The paper proves excess risk bounds for the LSE with dependent data; however, the results aren't very surprising, and the contributions are a bit too incremental for me. NA <doc-sep>The authors investigate the square loss in a realizable time-series framework with martingale difference noise, which is an interesting topic in machine learning with non-i.i.d. data. Their main result is a fast-rate excess risk bound which shows that whenever a trajectory hypercontractivity condition holds, the risk of the least-squares estimator on dependent data matches the iid rate order-wise after a burn-in time. Moreover, the authors give some examples of when the condition holds. I find the main text easy to follow. Strengths: 1. This paper is technical. It is clearly written and well organized. 2. The result in this paper is significant. Weaknesses: 1. This paper requires a more detailed discussion and comparison with the previous related work. 2. There are some confusing mistakes in the proofs of the main results. 1. This paper lacks a detailed discussion and comparison with the previous work. 2. This paper seemed not to give any new insight into this field.
This paper studies the problem of learning under dependent data. Existing bounds usually work by deflating the effective sample size by a factor that depends on the mixing time. Essentially, when the samples are far enough away from each other, depending on the mixing time, they can be treated as independent. This paper introduces a new condition that they call trajectory hypercontractivity which, combined with sublinear growth of the dependency matrix, yields rates matching the iid case. This is a flexible perspective, and the paper derives both general results and applies them in interesting settings. There are some weaknesses, e.g. they cannot recover results in the marginally stable case or in settings with unbounded noise. For example, as the maximum eigenvalue approaches one, the burn-in blows up. I think reviewer SNkB's perfunctory review should be ignored. The paper is somewhat borderline, but in my opinion it is technically stronger and more interesting than some of the other borderline papers in my batch. I recommend acceptance.
1. Strength: Targeting an important problem of FL: reducing the communication cost. 2. Weakness: This work simply applies the meta-learning method to the federated learning setting. I can’t see any technical contribution, from either the meta-learning or the federated learning perspective. The experimental results are not convincing because the data partition is not designed for federated learning. Reusing a data partition from a meta-learning context is unrealistic for a federated learning setting. The title is misleading or over-claimed. Only the adaptation phase costs a few rounds, but the communication cost of the meta-training phase is still high. The non-IID partition is unrealistic. The authors simply reuse the dataset partitions used in the meta-learning context, which is not a real federated setting. Or, in other words, the proposed method can only work when the distribution is similar to the meta-learning setting. Some meta-learning-related benefits are intertwined with reducing communication costs. For example, the authors claimed the proposed method has better generalization ability; however, this comes from the contribution of meta-learning. More importantly, this property can only be apparent when the data distribution across clients meets the assumptions made in the meta-learning context. The comparison is unfair to FedAvg. At least, we should let FedAvg use the same clients and dataset resources as those used in meta-training and few-round adaptation. “Episodic training” is a term from meta-learning. I suggest the authors introduce meta-learning and its advantages first in the Introduction. Few-shot FL-related works are not fully covered. Several recently published knowledge-distillation-based few-shot FL methods should be discussed. 3. Overall Rating I tend to clearly reject this paper because: 1) the proposed framework is a simple combination of meta-learning and federated learning, and I cannot see any technical contribution; 2) claiming that the few-round adaptation can reduce communication costs for federated learning is misleading, since the meta-training phase is also expensive; 3) the data partition is directly borrowed from meta-learning, which is unrealistic in federated learning. ---------after rebuttal-------- The rebuttal does not convince me with evidence, thus I keep my overall rating. I hope the authors can explicitly compare the total cost of the meta-learning phase plus the FL fine-tuning phase with other baselines. <doc-sep> This paper studied the combination of federated learning tasks in a meta-learning setting. In particular, with the assistance of the pre-trained meta-model, the new FL model's training can be completed within limited communication rounds. It was inspired by the meta-learning method used in the few-shot learning scenario. This paper proposed a few-round learning (FRL) algorithm and designed a global prototype-assisted learning (GPAL) scheme to assist training. It is an interesting topic to combine meta-learning with federated learning. The weaknesses of this paper are summarized below. 1. The proposed method updates the meta-model in each client. However, the meta-learning task consumes lots of computation resources and relies heavily on a large number of classes. These make it hard to train a meta-model on a local client in a federated system. Although the setting sounds useful, it is hard to realize in real-world applications. 2. This paper is relevant to two widely-known few-shot learning methods, MAML and the prototypical network.
So, it is better to consider MAML+FL and/or ProtoNet+FL as baselines to make the proposed methods more convincing and prove the efficacy of the proposed loss functions. 3. Given the complexity of the proposed algorithm and associated hyperparameters, the authors could anonymously release the source code in the reviewing stage. More details about the experimental platform used in this paper should be given. 4. As illustrated in the Experimental setup on page 6, the meta/pre-training phase needs a large number of communication rounds. Is it appropriate for the bandwidth-limited or time-sensitive applications? Will this be a distracter in few-round learning scenarios? 5. For the 5-way setup in Table 1, there are 5 classes are randomly sampled from the dataset in each episode, which means that all the clients contain all the training classes, 64 classes for miniImageNet and 351 classes for tieredImageNet, locally. This is impractical because most local clients only have limited information to share. 6. The representation of trainable parameters in Algorithm 1 is a little bit confusing. For example, \\theta and \\phi are actually the same parameters. The only difference between them is that \\theta is updated during local update using the support set, while \\phi is updated during local meta-update using query set. Since the algorithm is an important part of this paper, the definition and use of these parameters should be much clearer. If possible, the authors can add a detailed interpretation of these two parameters. <doc-sep>## Summary This paper proposes a new paradigm to train federated learning models. In particular, following the spirit of meta-learning for few-shot learning, the authors propose to meta-train an initial model so that starting from this point, only $R$ (eg, 3) rounds of FL are needed to produce a satisfying test accuracy. ## Pros 1. The authors made significant efforts in designing the meta-learning strategy for few-round FL. 2. The proposed algorithm has the potential to redefine FL training paradigm. But there should be more validations. My questions and concerns are stated in the next section. ## Cons The major concern I have is about the way they construct the dataset and evaluate the algorithm. The training task the authors selected is more like a meta-learning standard setting and is not common in federated learning. So I doubt its performance in realistic FL settings. It would be great if the authors can evaluate their algorithm in a standard FL dataset, otherwise it is not convincing. 1. When constructing the meta-learning datasets for each episode, the authors sample several classes from the whole dataset and then simulate 10 clients based on the selected samples. However, in FL setting, this is infeasible, as the server cannot access the whole dataset. The authors should describe how to construct the meta-learning procedure given hundreds or even thousands of clients without accessing their local data. For example, Shakespeare dataset has 715 train clients and 715 test clients. How to construct the meta-learning procedure from this decentralized data and how the algorithm performs are unclear. 2. The scale of FL is relatively small. At each episode, there is only 10 clients. However, in practical on-device FL, there can be thousands of clients for training and testing. For example, in [1], StackOverflow has 342,477 training clients and 204,088 test clients. Even EMNIST dataset has 3400 test clients. 
The performance of the proposed algorithm is unclear in these realistic large-scale FL problems. 3. The meta-training algorithm requires the computation of the full-batch loss at each round, which consumes more computational resources than vanilla FedAvg. The authors are supposed to discuss this additional overhead. ## Post-rebuttal comments Thanks to the authors for the response! I've read it and other reviewers' comments. I feel the authors didn't directly answer my questions and just reiterated what they have in the paper. Unfortunately, it is still unclear to me how to perform meta-training on standard FL training tasks, for example, Shakespeare in [1]. In this training task, there are 700+ clients in total. Does that mean that, in the meta-training phase, we need to sample 700+ clients for each episode? How do we construct this meta-training dataset from a standard federated dataset? [1] Reddi et al. Adaptive Federated Optimization. 2020 <doc-sep>The paper trains a meta-model on a small number of selected nodes in a federated learning environment, and then uses the meta-model to assist the federated learning in reducing the communication rounds. It is basically a federated version of a prototypical network. The proposed method relies on a strong assumption that there is a meta-training environment in federated learning. It is not a standard FL setting. Moreover, given the assistance of the meta-model, there is no guarantee that the federated learning environment will converge in a few rounds. The major technical contribution of the proposed method is how to meta-train a global model in a federated setting. In particular, it adapts the prototypical network to fit the federated setting. It is unclear how the proposed method provides any theoretical contribution beyond applied research. In the experiments, one dataset is not enough to support the effectiveness of the proposed method. More federated learning-related benchmark datasets should be discussed, e.g., FeMNIST, Shakespeare texts, CIFAR, and FeCelebA. In particular, the proposed two-stage procedure is equivalent to: learn a global model in a standard FL setting, and then conduct personalized deployment for each device or a specific group of devices. Therefore, in the experiment part, the authors need to add more baseline methods; for example, some personalized federated learning methods should be selected as baselines. THE MAJOR CONCERN: In Algorithm 1, lines 16 and 18 are a federated aggregation-based update, and line 24 is a prototypical-based meta-learner update. These two updating methods are inconsistent, as they optimize different objectives, and the authors should give an overall loss to unify the updating steps rather than force two kinds of updates into one framework. Typo: “Metra-training” in Figure 1.
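Since several reviews describe the method as essentially a federated variant of a prototypical network, a minimal sketch of the generic prototypical-network loss may help readers unfamiliar with it (this is the standard formulation; the paper's GPAL scheme, its federated aggregation, and its hyperparameters are not reproduced here):

```python
# Generic prototypical-network loss (Snell et al.), included only as background
# for the "federated version of a prototypical network" description above; the
# paper's GPAL scheme and federated aggregation are not reproduced here.
import numpy as np

def prototypical_loss(support_emb, support_y, query_emb, query_y):
    """support_emb: [Ns, D], support_y: [Ns]; query_emb: [Nq, D], query_y: [Nq]."""
    classes = np.unique(support_y)
    # One prototype per class: the mean embedding of that class's support set.
    protos = np.stack([support_emb[support_y == c].mean(axis=0) for c in classes])
    # Logits are negative squared Euclidean distances to each prototype.
    logits = -((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    m = logits.max(axis=1, keepdims=True)
    log_probs = logits - (np.log(np.exp(logits - m).sum(axis=1, keepdims=True)) + m)
    idx = np.searchsorted(classes, query_y)   # column index of each query's true class
    return -log_probs[np.arange(len(query_y)), idx].mean()

# Tiny usage example: 5-way, 1-shot support, 3 queries per class, random embeddings.
rng = np.random.default_rng(0)
s_emb, s_y = rng.normal(size=(5, 16)), np.arange(5)
q_emb, q_y = rng.normal(size=(15, 16)), np.repeat(np.arange(5), 3)
print(prototypical_loss(s_emb, s_y, q_emb, q_y))
```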
This paper proposes a meta-learning based few-shot federated learning approach to reduce the communication overhead incurred in aggregating model updates. The use of meta-learning also gives some generalization benefits. The reviewers think that the paper has the following main issues (see reviews for more details): * Limited technical novelty - the paper seems to simply combine meta-learning with federated learning * Not clear whether the communication overhead is actually reduced because the meta-learning phase can require significant communication and computation. * The experimental evaluation, in particular, the data distribution, could have been more realistic. I hope that the authors can use the reviewers' feedback to improve the paper and resubmit to a future venue.
The paper studies Goal-conditioned Hierarchical RL (GCHRL) and proposes a new algorithm called Hierarchical Exploration approach with Stable Subgoal representation learning (HESS) to improve the stability of subgoal representation learning and strengthen the exploration at the high level. HESS is built on the previous method LESSON. The instability of subgoal representation learning is alleviated by a representation regularization which encourages the representation to be stable for the states with relatively lower triplet losses (originating from LESSON). Further, this paper proposes an active exploration method for the high-level learning. The method is built on the definitions of Novelty and Potential of states, which correspond to the accumulated visit counts of the high-level state trajectory and a negative distance to the perturbed subgoals. Extensive experiments are conducted in a few MuJoCo environments with sparse reward, demonstrating the superiority of the proposed algorithm and the effectiveness of the different ingredients. Strengths: - I appreciate that this paper studies subgoal learning instability and high-level exploration, which are of importance to GCHRL research. - The regularization method for representation stability is reasonable, simple but empirically effective. Meanwhile, representation instability is a problem encountered in many other scenarios, and the proposed regularization method is general and has the potential to be leveraged in other representation learning problems. - This paper proposes effective active exploration for high-level exploration in GCHRL. To my knowledge, the subgoal perturbation along with the definition of potential is new in GCHRL. I appreciate the combination of novelty and potential, which properly takes novelty and reachability into consideration for effective exploration selection. - The experiments are extensive, evaluating and demonstrating the characteristics of HESS well across multiple perspectives. Weaknesses: - I think the methods proposed in this paper are relatively simple and somewhat incremental; however, thanks to the solid experiments, the effectiveness of these methods is demonstrated. At first glance, the representation regularization seems to be disconnected from the active exploration method. Later, I found that the learned stable representation is important to the effectiveness of the novelty calculation. I recommend the authors make the connection more obvious to better convey the story. - Although I think the methods are reasonable in an overall view, I have a few concerns on concrete implementations. I list my questions and concerns below. My first concern is the calculation of novelty (Equation 4). I have no question on the maintenance of $n(\\phi(s))$; but for the calculation of the accumulated visit count of the high-level state trajectory, I wonder, given a state $s_i$, how exactly the trajectory of the policy $\\pi_{hier}$ is obtained. Second, for Equation 5, since the potential is defined over the expectation of the high-level transition obtained by $\\pi_{hier}$ with the perturbed subgoal $g_e$, how are such transitions obtained? For both of the above concerns, one possible way is to simulate the rollouts with a world model, but this does not seem to be the way used in this paper. Alternatively, are these approximated with the trajectories in the replay buffer? If so, how should one account for the off-policyness and suboptimality? The third question is on the computational complexity.
The top $k\\%$ selection in the representation regularization and the calculation in Equation 6 (the selection of candidates according to the constraints, the calculation of novelty and potentials) seem computationally heavy. What are the practical implementations? Besides, I have a few questions on the experiments. - How should we understand that some baseline algorithms work better in the image-version environments? E.g., H-SR/H-ICM on Ant-Maze and LESSON on Ant Push. - Is the sentence ‘so the intrinsic rewards of H-ICM may vanish before the policies have been learned well’ checked in the experiments? - In Figure 6, are the same 5 trajectories used for the upper and lower panels at each time point? And what are the trajectories exactly, since at the beginning of learning, the agent fails to reach the final goals (according to the results in Figure 4)? Minor comments: - Can the authors explain more about the sentence ‘to keep the low-level reward function $r^l$ stationary while learning $\\phi$, we concatenate states and state embeddings together as the low-level states’ (above Equation 1)? I will raise my score if my questions and concerns mentioned above can be well addressed. ================================ Post-rebuttal comments: Some of my concerns and questions are well addressed. I raised my evaluation to a borderline accept, although my main concern about the relatively complex mechanisms involved in the implementation, e.g., hash tables, iterative sampling and filtering, table look-ups and so on (a few of these are sample-wise), remains. And I think these implementation details should be noted and described in detail in the revision. However, I recognize the authors' efforts in pushing the boundary of HRL. I vote for a borderline accept after discussing with the authors. I recognize the authors' efforts in pushing the boundary of HRL, although I still have some concerns about the complexity of the proposed methods and the practical computation cost. <doc-sep>1. This paper investigates learning stable subgoals within a deep hierarchical reinforcement learning setup. 2. Two controllers are learned from the same experience replay buffer. The high-level controller serves as a meta controller and the low-level controller serves as a goal-achieving agent. The high-level controller communicates abstract goals to the low-level controllers. 3. The high-level controller is optimized using an extrinsically specified reward function. The low-level controller optimizes the intrinsic goal communicated by the high-level controller. The subgoals are changed after a deterministic time length (known option termination). 4. The subgoals are designed with the key insight that "desirable novel subgoals should be reachable and effectively guide the agent to unexplored areas." Typically, count-based, prediction-based, or successor-feature-based rewards have been used as novelty measures. However, these fall short in terms of reasoning about the reachability of states. To handle this, a potential measure for subgoals is proposed which regularizes the novelty measure. 5. To go to unexplored states, a directional goal is synthesized/imagined using the current state and a directional vector. The potential function makes sure that this is approximately reachable by formulating the reward as the expected negative distance between the ending state and the imagined goal. This is similar to Feudal networks (Vezhnevets et al.) but goes beyond it to handle diversity and reachability. 6.
The approach is validated on a set of hard to explore continuous environments with reasonably strong and relevant baselines. 1. I think this paper is interesting and explores a novel set of ideas. The baselines also seem reasonable. The closest baseline in terms of using directional goal vectors is Feudal Networks. I would have expected to see a head to head comparison with this approach, even though this proposed method goes beyond it. However, the core idea of having a meta controller output goal vectors and then sub-controllers learning to execute them was explored in Feudal networks. 2. What are the effects of changing the option termination condition? Currently it is hard coded to be c. What are the implications of this? Do the authors observe any deviations or improvements if this hyper parameter varies. It seems like the potential function, novelty measure and option termination are deeply interlinked. It would have been good to more clearly understand the relationship between these measures. 3. Figure 4 is the main quantitative figure. It seems important to test the effects of stability regularization. This is highlighted qualitatively in Figure 6 but not shown in Figure 4. 4. The qualitative analysis on the effects of the interaction between potential and novelty measure is quite sparse. It is not clear how it fails and where it works. Figure 6 is helpful but it needs improvements in terms of clarity and scope (other environments) 5. Figure 5 is truncated at 5 million steps. How does the asymptotic performance look like for this method? Does it plateau sooner than baselines? What is the maximum achievable reward for these tasks? This paper presents an interesting and novel idea at the intersection of deep HRL, novelty based exploration and reachability. The experiments are sound but could require further clarification and expansion of scope. The clarity of the paper can also be improved to more directly address the need and important of stability regularization. <doc-sep>The authors propose a hierarchical RL algorithm which augments an existing contrastive learning-based subgoal representation objective with heuristics for exploration. The proposed algorithm seeks to reduce representation drift over the course of learning by penalizing the learner for modifying $\\phi(s)$ for states $s$ with low contrastive loss. Furthermore, the authors propose exploration heuristics that encourages the learner to explore in promising areas of latent space by combining count-based novelty and potential measures. The proposed algorithm is demonstrated to have the desirable properties, and outperforms existing methods. The analysis is complemented by an ablative analysis that disentangles the effects of each proposed mechanism. **Pros** 1. Comparisons between the proposed method and other hierarchical methods demonstrate that the algorithm results in better performance. 2. The authors performed thorough ablations demonstrating the impact of each proposed component of their algorithm. **Cons** 1. The authors do not explain how the counts and potential measures are estimated from data. In particular: 1. How are the cumulative counts $N(s)$ in (4) estimated, given $\\pi_{hier}$ is changing over the course of training? 2. How is $U(g_t)$ estimated from buffer data, given that the expectation is calculated with $g_e$ being set as a subgoal for the policy, and thus would not have been observed in the actual environment rollouts? 2. Why is prioritized sampling used in Equation 3? 
The motivation on this point was not really explained in detail. 3. For the ablative analysis, it seems like it would be better to evaluate reactive exploration using cumulative counts instead of immediate counts to better isolate the impact of reactive exploration versus learning a policy to maximize the same intrinsic rewards. **Clarification Questions** 1. Why does choosing $\\lambda(s)$ as a continuous function of the representation loss impose heavy computational demands? It seems like the losses are already being computed in the process of obtaining the triplets with minimal representation losses. 2. How is the latent space partitioned into cells if there are no knowns bounds on $\\lVert \\phi(s) \\rVert$ a priori? 3. In motivating the potential measure, the authors claimed that the “novelty measure is a mixture of counts in the past and current representation spaces”, but it is unclear why this is the case if one can easily recompute $n$ when $\\phi$ changes. 4. How is the low-level policy training done? Is hindsight experience replay used? Overall, I vote for a weak accept. The ideas in the paper are interesting, and the experimental evaluation is thorough and demonstrates the benefits of the proposed algorithm. However, the work could benefit from a more detailed description of how the relevant measures are estimated, as well minor changes to the experimental procedure. <doc-sep>This paper proposes a new algorithm for goal-conditioned hierarchical reinforcement learning that is able to succeed in tasks with sparse rewards (differently from most other methods in the field). It does so through two innovations: (1) a representation learning procedure that is more stable, and (2) a exploration strategy that takes into consideration not only novelty but also reachability. Specifically, the representation learning procedure is based on what is now a standard a contrastive loss, but it is augmented by a regularization term that make the learning procedure stable where the representation is already satisfactory, allowing goal sampling to be more effective. Figure 6 is a particularly nice visualization of the impact of this regularization term. The exploration strategy to sample goals to be visited is also novel. Instead of using goal visitation counts, this paper proposes the idea of using expected sum of state visitation counts from that state onwards, capturing some notion of long term novelty. Moreover, the exploration bonus also has a potential term that captures how promising each goal state is in terms of how far from the goal state the agent is expected to end up. Quantitative impact is reported in Figure 7, but I particularly liked the intuitions/visualizations provided in Figure 8. This paper is really well executed. It builds on top of an already complicated architecture adding more than one new component to that architecture, but it does so while providing proper intuitions for each one of these new components and, more importantly, actually doing ablation studies that quantify the impact of each component. To me, Section 5.4 is the highlight of the paper. I also appreciated Section 5.5, which shows how the paper is also concerned with stability over different parameters introduced by the proposed metric. I think the paper would benefit from further clarifying some parts of the text, but otherwise this is a good paper. 
Specifically: * In the Introduction, for example, it is said that methods based on visit count are "_reactive_ exploration methods that need to learn how to maximize the intrinsic rewards before performing exploratory behaviors". I don't necessarily disagree with that, although the whole idea of visit counts is to incentivize these exploratory behaviors. My question though, is: isn't this exactly the same with the proposed idea? It does use counts and not only that, but also expected state visitation counts for the trajectory, which is even more demanding in terms of having to visit the state first. * In Section 2, when defining $U(\\phi(s_t))$, the distance is defined to be between $g_t$ and $\\phi(s_t)$. For the proposed algorithm, should it be $g_e$ instead of $g_t$? Still on Section 2, it is said "we concatenate states and state embeddings together as the low-level states". What does this actually mean? What are the states here? For images, for example, would it literally be all pixels on the screen? * In Section 3.1, I don't think $\\lambda_0$. * In Section 3.2, it is said "low-dimensional continuous latent space into discrete cells, i.e., the state embeddings are mapped into cells containing them.". What are these cells? How were they defined? I can imagine this is somewhat straightforward to do if you assume you have access to x,y positions, but how is this done in higher dimensional settings? How are these cells defined for Images, for example? * In Section 3.2, when discussing the potential measure, it is said that Figure 3 demonstrates that "with online representation learning, the novelty measure is a mixture of counts in the past and current representation spaces, so it might mislead the exploration". How is that? I couldn't understand what I should be looking at in Figure 3 to reach this conclusion. * In Section 3.3, it is said that the active exploration strategy "avoids the non-stationary issue". How? Aren't these reward signals changing constantly based on counts and the representation being learned? How does the active exploration strategy actually avoids the non-stationarity issue? * In Section 4, it is said that "Bottom-up HRL works learn a set of diverse skills or options in a self-supervised manner, and use those semantically meaningful low-level skills to explore in downstream tasks", but "those methods may produce some redundant and useless skills". This claim is not backed up by any reference or experiment. Why is this true when some of these methods explicitly ask for diverse skills that are not supposed to overlap to each other? * In Figure 4, how were the confidence intervals computed if only 5 samples were available? * In Section 5.2 it is said "the successor representation estimates the expected future state occupancy starting from a given state (Kulkarni et al., 2016b), but not the visitation number of the given state, which is less helpful to promote exploration." However, isn't this exactly what H-SR shows? That the $\\ell_1$ norm of the SR captures the visitation number of a given state? Moreover, the reference to the SR should be "Peter Dayan: Improving Generalization for Temporal Difference Learning: The Successor Representation. Neural Comput. 5(4): 613-624 (1993)". * No details were given on how Figure 6 was generated. I don't know how to reproduce it. * *Importantly, in the ablations, were the parameters of the ablated methods tuned?* This paper is really well executed. 
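On the recurring question in these reviews of how the latent space is partitioned into cells and how visit counts are maintained, a generic implementation of a count-based bonus over a learned embedding looks like the following (an illustration of the common pattern only; the paper's novelty measure additionally accumulates counts along high-level trajectories, which is not reproduced here):

```python
# Generic count-based novelty bonus over a learned latent space: discretize
# phi(s) onto a grid of fixed cell size and keep a hash table of visit counts.
# Illustrative only; not necessarily the exact mechanism used in the paper.
import numpy as np
from collections import defaultdict

class LatentCountBonus:
    def __init__(self, cell_size=0.5):
        self.cell_size = cell_size
        self.counts = defaultdict(int)          # hash table over grid cells

    def _cell(self, z):
        # The "cell" is just the tuple of floored grid coordinates of z.
        return tuple(np.floor(np.asarray(z) / self.cell_size).astype(int))

    def update(self, z):
        self.counts[self._cell(z)] += 1

    def novelty(self, z):
        # Typical choice: bonus decays as 1/sqrt(visit count) of the cell.
        return 1.0 / np.sqrt(self.counts[self._cell(z)] + 1)

bonus = LatentCountBonus(cell_size=0.5)
for z in np.random.default_rng(0).normal(size=(1000, 2)):
    bonus.update(z)
print(bonus.novelty([0.0, 0.0]), bonus.novelty([10.0, 10.0]))  # frequent vs. unseen cell
```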
The paper proposes a new goal-conditioned hierarchical RL method aimed at improving performance on sparse reward tasks. Compared to prior work the novelty lies in a new way of improving the stability of goal representation learning and in an improved exploration strategy for proposing goals while taking reachability into account. The paper does a good job of motivating the main ideas around stability and combining novelty with reachability. Reviewers found the quantitative evaluation and the choice of baselines to be good with the exception of not including Feudal Networks which the authors explained was due to poor performance on the hard exploration tasks (something that has been observed in prior work). Reviewers also found the thoroughness of the ablations and insightful visualizations to be highlights. Overall, reviewers were unanimous in recommending acceptance, which I support.
- Summary This paper presents a framework for performing both differentiable physics simulations and differentiable rendering. This fully differentiable simulation and rendering pipeline is then employed to perform system identification tasks directly from video frames, being able to match or outperform both visual-based and state-based baselines. Moreover, the potential of this framework to be applied to visuomotor control is also demonstrated. - Pros This method unifies advances in the differentiation of both physics simulation and rendering. The experimental results demonstrate a good ability to perform system identification for diverse parameters and control directly from videos. The ability to identify parameters or perform control tasks directly from images is useful, since it reduces the need for direct supervision/annotation in the form of state information. The presented simulator supports a variety of "domains", such as rigid- and deformable-body dynamics and cloth simulation, and these are efficient enough to be run faster than real time (at least for simple tasks). - Cons Overall, the proposed method is mostly a unification of pre-existing techniques from different fields, such as differentiable rigid- and deformable-body dynamics and differentiable rendering. The paper itself admits that a limitation of this method is that it currently "has limited capability to handle contact-rich motion that introduces a large number of discontinuities", which limits its applicability to real-world scenes. It also cannot currently handle joints. All of these would be important for possible robotic applications, for example. The tasks demonstrated in the experiments are simple, and issues from model mismatch do not seem to have been thoroughly evaluated (see comments below for more). - Reasons for score [Edit: Score updated, see discussion below] Overall, given the "pros" described above, notably the interesting results achieved for system identification and control directly from video frames by combining differentiable physics and rendering into a single framework, I recommend this paper for acceptance. Given some of the concerns raised in the "cons" and in more detail in the comments below, I for now will score this paper as a little above the acceptance threshold. - Additional comments The scenarios used for the system identification and control tasks are fairly simple, with usually only a single object and few contact points. Was the ground truth for the scenarios in the system identification tasks generated using gradsim itself? If so, isn't it unfair that it is compared to other models (e.g., pybullet), for which there would be model mismatch (while no mismatch would be present for gradsim)? Along the same direction, the experiments present a section on "Impact of imperfect dynamics and rendering models". It would also be interesting to see a quantification of the impact of model mismatch (possibly both while using the same renderer, i.e. only dynamics mismatch, or also with different renderers). In the experiments section, it is said that "Inference ... is done by picking an initial guess of the mass (at random)". From what distribution is this random initial guess picked? What are these starting guesses in relation to the true parameters? The section on "Impact of shading and texture cues" seems a little too short, which renders it hard to understand in detail what is going on.
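To make the setup discussed above concrete, the identification procedure is essentially analysis-by-synthesis: simulate with the current parameter guess, render, compare to the target video, and backpropagate through both stages. A hedged sketch follows, written against hypothetical `simulate` and `render` callables rather than the paper's actual API:

```python
# Sketch of the analysis-by-synthesis loop described in the review, written
# against hypothetical differentiable `simulate` and `render` callables; these
# placeholders stand in for the paper's simulator and renderer, whose actual
# API is not reproduced here.
import torch

def identify_mass(target_video, simulate, render, init_log_mass=0.0, steps=500, lr=1e-2):
    # Optimize in log-space so the estimated mass stays positive (one common choice).
    log_mass = torch.tensor(float(init_log_mass), requires_grad=True)
    opt = torch.optim.Adam([log_mass], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        states = simulate(mass=log_mass.exp())        # differentiable physics rollout
        frames = render(states)                       # differentiable rendering
        loss = ((frames - target_video) ** 2).mean()  # pixel-wise MSE against the video
        loss.backward()                               # gradients flow through both stages
        opt.step()
    return log_mass.exp().detach()
```

Optimizing in log-space is just one common way to keep a parameter such as mass positive; whether the paper uses this particular parameterization is not stated in the reviews.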
<doc-sep>This work presents a fully differentiable physics simulation coupled with neural rendering, such that input video can be used to estimate object properties or find control policies to move those objects by trying to generate the same video at the output. The paper is well motivated by presenting a natural progression of ideas from this literature, and it does a thorough job discussing related work. The paper is light on details in Section 3, and it is necessary to refer to the appendix to get a complete picture. Overall, the technical contribution is solid and thus worth accepting the paper, even if the validation is with relatively simple experiments, since they are sufficient to motivate this direction to be further researched. Below are a few comments to aid in improving the current work: - All experiments use what I am guessing are input (desired) videos from the same pipeline, with some parameters later hidden (to be learned). While this is a good validation, the learning done here is still 'in distribution'. It would be useful to see if video (even simplistic) from a different simulator, or simplified from a real-world video, could be used. To what extent is this possible, and are there any fundamental limitations that prevent this at the moment? - Analysis is mostly with one object in an empty scene. Are there technical limitations to handling realistic scenes where there are multiple objects and those objects interact with each other as well as the environment? How does this affect performance wrt forward and backward pass timings? With such experiments, it would be helpful to understand if the released code can be easily extended to such (more complex) settings or if someone would need to start a new implementation from scratch. - The scale on the loss landscape is quite small, '0.4 pixelwise mse'. How good does the initial guess need to be to stay in this range, and do the curves in Fig. 3 continue the trend beyond these values for larger errors? - Reality gap: while this is discussed in reference to visual appearance, since the current experiments deal with synthetic scenes, the more relevant topic to discuss is the reality gap wrt physics and object motions. Experiments designed to study this would boost confidence in this approach. Other comments: - How much does the performance depend on a good initial guess? - Currently a single impulse is used to set things in motion; can this be extended to handle more continuous actions? - Presenting qualitative results for baselines would be helpful. - Some baselines are not clearly explained: average, random, ConvLSTM. - How does performance scale with the length of the video? <doc-sep>This work focuses on the problem of estimating object physical properties from video sequences. The proposed framework combines differentiable physical simulations and differentiable rendering to map physical parameters into images differentiably. This paradigm is then used to recover physical parameters from image sequences by means of gradient-based optimisation. Validation of the proposed method is carried out through two main synthetic applications: parameter identification and visuomotor control. Although the proposed approach still requires 3D ground-truth information to yield reliable estimates, it is an encouraging step towards unsupervised physics understanding from image/video data. Positive: - Crucially, and differently from previous attempts, the proposed approach does not require 3D supervision, except for the geometry and appearance of the static scene (i.e.
at t=0). - The approach is clever, simple and yields an interpretable representation. - First step towards physics understanding from videos. Negative: - I would improve the quality of the visualisations and plots in the paper (e.g. I found Figure 6 impossible to read). - How to differentiate through the physical simulator was not obvious to me. I would have appreciated a more detailed explanation of how that is done in practice for one of the physical problems studied in the paper to be included in the main manuscript, in an effort to make the paper more readable.
This paper presents a framework for joint differentiable simulation of physics and image formation for inverse problems. It brings together ideas from differentiable physics and differentiable rendering in a compelling framework.
The authors present a new concentration of measure inequality for sums of independent bounded random variables, namely the split-kl inequality. They derive this new inequality by combining kl inequalities (1 and 2) in a clever way. They provide an empirical comparison of this new inequality with existing concentration inequalities such as the kl inequality, the Empirical Bernstein inequality and the Unexpected Bernstein inequality. They show that their new inequality is tighter than all of these inequalities in some regimes. They further extend their contribution to the PAC-Bayes setting and derive the PAC-Bayes-split-kl inequality. Again, they empirically (on synthetic and real-world data) identify regimes where their inequality performs better than other existing inequalities such as the PAC-Bayes-kl, PAC-Bayes Empirical Bernstein, PAC-Bayes Unexpected Bernstein, and PAC-Bayes Empirical Bennett inequalities. Strengths: The paper is easy to follow and the claims stem from logical arguments. The experiments are extensive and support the claims made by the authors. Theoretically, the idea is simple but, interestingly, it leads to good empirical results. Weaknesses: It is difficult to understand how this new inequality is fundamentally different from the kl inequality. Without a careful choice of $\\mu$, I am not sure if this new inequality would always be tighter than the kl inequality in all regimes. My observation comes from the following argument: consider $Z \\in [a, b]$. Take $\\mu = a$, then $Z^+ = Z-a$ and $Z^- = 0$. Similarly, take $\\mu = b$, then $Z^+ = 0$ and $Z^- = b - Z$. In both these cases, we are just translating $Z$, and both the kl inequality and the split-kl inequality should behave similarly for these choices of $\\mu$. Of course, there might be a clever choice of $\\mu$ which makes one perform better than the other, but I am not sure how to make that choice. The limitations are discussed adequately. <doc-sep>The authors introduced a new approach to a concentration inequality for random variables over a bounded interval called the "split-kl inequality", which first decomposes the original random variable into three terms and then applies an existing bound, the "kl inequality", to the decomposed terms. Then the authors proposed to use the split-kl inequality for PAC-Bayes bounds on the generalisation error of learning algorithms, as well as to combine it with existing approaches of excess loss and informed priors. The derived PAC-Bayes generalisation error bounds were compared and examined in a few different experiments. The reviewer is personally very much fond of the authors' writing in this paper, which explains important matters of this work / other existing works in an intuitive and comprehensive manner. For example, the motivation of this work is nicely presented at a proper technical level for a wide audience in the introduction. In addition, the advantage of the split-kl inequality has been made clear in Figure 1. The comprehensive presentation and the simplicity of the idea are clear strengths of this work. My main concern is the significance / impact when we combine this idea with PAC-Bayes bounds. The new generalisation bounds shown in Figures 2 and 3 seemed similar to the other existing bounds at first glance, and it was unclear how to interpret the level of improvement.
For the first experiment, for example, since the authors combined their idea of the split kl inequality with existing approaches of "informed priors", some might get the impression from these figures that the "informed prior" part has already done the majority of the work to lower each bound, and they may wonder how critical the improvement from the split kl part is. There should be no concern about potential negative societal impact. To me personally, the current limitation is that it is difficult to interpret from the experiments or equations whether the proposed idea of PAC-Bayes-split-kl inequalities has improved the generalisation bounds to a fair degree or not. For example, would the difference in the numbers in the figures be significant in the context of PAC-Bayes? The reviewer's position on this paper is neutral and the reviewer is happy to increase the score if the technical or practical impact is well justified. <doc-sep>The paper introduces a new concentration inequality for the sum of iid bounded random variables. The paper uses a technique of splitting the samples with a threshold and then using a kl-inequality on each part. This splitting allows using both the lower and upper bound kl-inequalities. The resulting bound enjoys both the tightness of the kl-inequality and the ability to exploit the lower variance of r.v.s that take values within a segment. The empirical comparison clearly shows the tightness of the new split-kl bound in different regimes, compared to the empirical Bernstein and the standard kl inequalities. The paper then derives a PAC-Bayes-Split-kl inequality and applies it to the excess loss of a binary classification problem. The new bound exploits the lowered variance of the excess losses compared to the binary losses, and therefore, the overall split-kl-PB bound can be competitive with the standard kl-PB bound, as demonstrated on synthetic and real-world data. ### Strengths 1. I believe the work is original and well-motivated. 2. The use of the splitting technique is clever and novel, as far as I know. 3. The paper is well-written and clear. 4. The authors provide an adequate survey of related work. 5. The empirical evaluation of the split-kl inequality clearly shows its merits. ### Weaknesses 1. The empirical evaluation of the split-kl-PAC-Bayes bound does not seem to give definitive conclusions, besides the looseness of PAC-Bayes-Empirical-Bennett on certain datasets. I suggest adding more controlled synthetic experiments, as was done in Fig. 1 for the concentration bounds, since this can give good intuition for when certain bounds are preferable. No additional limitations <doc-sep>The authors address the question of providing PAC-Bayes bounds for losses when the (empirical) variance is low, as previously addressed by e.g. [1, 2]. A special case of this is finding bounds for ternary losses in {-1,0,1}, which arises in two important ways: 1. bounds on the excess misclassification loss, which can also be used as per [1] to tighten PAC-Bayes bounds on the non-excess loss 2. in conjunction with the Cantelli-Chebyshev relaxation given by [3] to provide bounds on the (non-randomized) weighted majority vote via PAC-Bayes. For losses in {0, 1} the small-kl PAC-Bayes bound [e.g. 4] is usually the tightest, even when the variance is low, but not for losses in [-1, 1] (after rescaling the bound).
In order to leverage this, the authors translate each random variable in the sum and decompose it into positive and negative parts, $$Z_i = \\mu + Z_i^+ - Z_i^- = \\mu + \\max(0, Z_i-\\mu) - \\max(0, \\mu - Z_i),$$ before applying the small-kl bound to the sums of $Z_i^+$ and $Z_i^-$ separately (which are both {0, 1} valued in the ternary untranslated case). This is called the *split-kl* (PAC-Bayes) bound. This is used to prove new concentration and PAC-Bayes bounds. These are further combined with the excess risk and informed prior ideas from [1], or the Cantelli-Chebyshev relaxation from [3], and evaluated in experimental setups taken from the above. ----- [1] Zakaria Mhammedi, Peter Grünwald, and Benjamin Guedj. PAC-Bayes un-expected Bernstein inequality. [2] Ilya Tolstikhin and Yevgeny Seldin. PAC-Bayes-Empirical-Bernstein inequality. [3] Yi-Shan Wu, Andres Masegosa, Stephan Lorenzen, Christian Igel, and Yevgeny Seldin. Chebyshev-Cantelli PAC-Bayes-Bennett inequality for the weighted majority vote. [4] John Langford. Tutorial on practical prediction theory for classification. ---- UPDATE: Overall I am not satisfied with the quite limited evaluation of this bound, which does not show clear improvements over previous results. This weakens the motivation for the paper too, because of the limited number of new technical ideas. Therefore I find myself much more on the borderline than in my original review and I do agree with some of the criticisms of reviewer nL9t. However, given that related work has previously appeared at NeurIPS with similarly negligible empirical improvements, I will keep my "weak accept" score. ### Strengths **Clarity and motivation**: the paper is very well written and was a pleasure to read. The relationships to previous works [1, 2] were very well explained and the incorporation of ideas from [1] was well motivated. The alternative form of the main result from [1] is an improvement in clarity over how it is stated therein, and the situation of this work within its wider context was reasonably clear. My only minor criticism is that the experiments in section 4.2 do not sufficiently explain the use of the Chebyshev-Cantelli bound and majority votes as used there. This is a shame as I think the use of the split-kl bound for majority votes is a good use case. **Relevance**: I think that the paper makes a contribution to an important and highly-active area of machine learning, improving PAC-Bayes bounds, which are among the most useful in contemporary learning theory. They bring some ideas from [1] to a wider application, which is a valuable contribution. ### Weaknesses **Technical contribution and originality**: here I think the paper falls down a bit. The main technical result is simply a decomposition of a random variable into positive and negative parts, combined with an application of the small-kl PAC-Bayes inequality. This is combined with the excess loss idea from [1] and the experimental setup therein, or the Cantelli-Chebyshev bound from [3] and their experimental setup, all of which is straightforward. Such simple ideas can be very valuable when they lead to breakthroughs, but that does not seem to be the case here, and most of the ideas used in the paper and discussed at length originated with [1]. **Experimental results**: in the more important PAC-Bayes setting the new results are quite weak, with the new bound giving very similar results to those of [1]. The bound is not shown to be any improvement as an optimization objective either.
The simpler concentration inequality setting is not particularly interesting except as a motivation, and for the ternary r.v.s used an even better bound would be obtained by applying the test set bound (Th. 8) to the decomposition $Z = Z^+ - Z^-$ (i.e. a "split-Binomial" bound). N/A the results are primarily of a theoretical nature.
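As a sanity check on my reading of the construction above, here is a rough numerical sketch of a split-kl-style bound for ternary variables. This is my own toy code, not taken from the paper: the choice $\\mu = 0$, the $\\ln(2\\sqrt{n}/\\delta)/n$ confidence terms and the naive $\\delta/2$ union bound are only indicative, so the exact numbers should not be read as the paper's bounds.

```python
import numpy as np

def kl_bernoulli(p, q):
    # KL divergence between Bernoulli(p) and Bernoulli(q)
    eps = 1e-12
    p, q = np.clip(p, eps, 1 - eps), np.clip(q, eps, 1 - eps)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def kl_inv_upper(p_hat, eps):
    # largest q with kl(p_hat || q) <= eps, found by bisection
    lo, hi = p_hat, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if kl_bernoulli(p_hat, mid) <= eps else (lo, mid)
    return lo

def kl_inv_lower(p_hat, eps):
    # smallest q with kl(p_hat || q) <= eps, found by bisection
    lo, hi = 0.0, p_hat
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if kl_bernoulli(p_hat, mid) <= eps else (mid, hi)
    return hi

rng = np.random.default_rng(0)
n, delta = 1000, 0.05
# ternary "excess losses" in {-1, 0, 1}, mostly 0 (the low-variance regime of interest)
Z = rng.choice([-1, 0, 1], size=n, p=[0.05, 0.9, 0.05])

# plain kl bound: rescale Z from [-1, 1] to [0, 1] first
eps = np.log(2 * np.sqrt(n) / delta) / n            # indicative confidence term
plain = 2 * kl_inv_upper(((Z + 1) / 2).mean(), eps) - 1

# split-kl with mu = 0: Z = Z^+ - Z^-, both {0,1}-valued; union bound over the two parts
eps2 = np.log(2 * np.sqrt(n) / (delta / 2)) / n
split = kl_inv_upper(np.maximum(Z, 0).mean(), eps2) - kl_inv_lower(np.maximum(-Z, 0).mean(), eps2)

print(f"empirical mean {Z.mean():+.3f}  plain kl bound {plain:+.3f}  split-kl bound {split:+.3f}")
```

In this mostly-zero (low-variance) regime the split construction comes out visibly tighter than the rescaled plain kl bound, while pushing $\\mu$ to an endpoint of $[-1, 1]$ makes one of the two parts vanish and the sketch essentially collapses back to the plain kl bound.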
This meta review is based on the reviews, the authors' rebuttal and the discussion with the reviewers, and ultimately my own judgement on the paper. There was a consensus that the paper contributes an interesting new concentration of measure inequality and derives a useful PAC-Bayes inequality. I feel this work deserves to be featured at NeurIPS and will attract interest from the community. I would like to personally invite the authors to carefully revise their manuscript to take into account the remarks and suggestions made by the reviewers. Congratulations!
This work claims to propose a general methodology for approximating offline algorithms in online settings, in contrast to previous methods that only work for particular cases. To achieve this, the authors proposed a multi-task-based method to learn from datasets created by the offline algorithms. Experiments are conducted to verify the idea. *Strengths*: 1. The motivation of bridging the gap between offline algorithms and their online counterparts is clear and practical. Real-world examples are discussed in the introduction and conclusion, and help to further understand the motivation. 2. The proposed approach is novel to my knowledge. I admire the idea of capturing the behavior structure with a multi-task learning model, and it is interesting to create datasets using the offline algorithm for training the online counterpart. 3. The design is clearly presented. Figures 1 and 2 are helpful for understanding the high-level framework. *Weakness*: 1. Why are no baselines presented in the experiment part? I am not an expert in this field, so I am not entirely sure whether it needs a comparison with other benchmarks. 2. Are there any theoretical guarantees or insights behind the design? 3. I personally think that the paper writing can be further enhanced. For example: 1) The sections and subsections do not follow a traditional structure, e.g., the experiment and experimental results are not in one section; the ethics is a subsection of the conclusion. 2) Although the authors claim that the proposed method outperforms the SOTA, the performance of the SOTA model is not presented in the table. Minor: ``We review this limitation more thoroughly in Section ??'' on page 6 —> Section ??
I admire the motivation, idea, and possible impact of this paper. However, I am not entirely convinced that the experimental results are strong enough. I would like to update the score after interacting with the authors and other reviewers. <doc-sep>This paper makes use of **offline algorithms** (i.e. algorithms that can view the entire time-series) to produce outputs which are used to train an **online algorithm** (i.e. an algorithm that can only view past values of a time-series). The online algorithms are not trained to produce the outputs of the offline algorithm directly; instead, windows of the outputs are mapped to class labels using a hand-crafted mapping specific to the domain. The online algorithms are then trained to predict the class labels of the current window and progressively forward-looking windows (i.e. a multi-task prediction problem) given a window of the time-series. They apply this method to synthetic and real-world time-series data (historical stock market data), and report the classification accuracy on each of the multi-task prediction problems. They also mention this can be used to predict the direction the price of a stock will move, and state that their method is competitive with state-of-the-art ML methods on this task. **Strengths** + The paper describes their method, experimental setup, and results very clearly. + The paper presents an interesting research direction, using knowledge from offline algorithms to improve the performance of online algorithms via learning. + The paper highlights that leveraging these methods could be impactful for many domains. --- **Weaknesses** + The primary weakness is that the main claim seems incorrect. The authors claim to develop a general framework for **approximating offline algorithms using online algorithms**. But the online algorithms trained in this paper do not directly attempt to approximate the offline algorithms. The online algorithms do not even produce the same type of outputs as the offline algorithms. The offline algorithms take a time-series X={x_1, ..., x_T} as input and produce outputs in the form of decision points A(X) = {(x_i, a_{x_i}), ... (x_j, a_{x_j})}. An online algorithm that approximates this could take as input a partial time-series X_t = {x_t-d+1, ..., x_t} and decide whether or not to produce a decision point at time t. Instead, the online algorithms in this work predict class labels which are a lossy mapping of sequences of decision points. As a result, it is not clear to me in what sense these online algorithms approximate the offline algorithms. Can you clarify this for me? In what sense is it approximating the offline algorithm? If classification accuracy is 100%, can we make any statements about how good the approximation is? + Because the primary claim is not clear, it's not clear how to evaluate the proposed method, or what baselines to compare to. + The related work section is short and only mentions offline-to-online conversion, and explanatory vs predictive models. Since the paper also mentions time-series forecasting, it would be good to mention related work in that field too. **Suggestion** + May I suggest the following claim: the paper develops a method which leverages offline algorithms to perform better online time-series prediction. + Then the main evaluation metric should be time-series prediction. And baselines would include a range of methods for time-series prediction, and ablations which use offline outputs in different ways but have similar architecture.
+ It would be good to report the performance of comparable ML time-series prediction algorithms trained on the same data, and with similar architectures. Currently the authors mention another paper but do not report numbers for it. + Related: In Introduction paragraph 2, you compare your method to time-series forecasting techniques and mention 3 benefits of your technique, which focuses on behavior, vs techniques that directly predict time-series trajectories. It would be good to see this demonstrated experimentally. While the ideas presented in this work could be very impactful, as the paper is currently written, its main claim seems incorrect, which is grounds for rejection. The paper claims to develop a general framework for approximating offline algorithms using online algorithms. But to me, it seems the online algorithms do not approximate the offline algorithms. I think the paper could be made substantially better in one of two ways: 1) The authors clarify in what sense the online algorithms approximate the offline algorithms. 2) The authors modify the claims to more accurately reflect the method, and add additional experiments to support those claims. <doc-sep>The paper presents a novel method for approximating offline time-series algorithms in an online setting. The method achieves this by assigning each window of the time-series data to a set of discrete classes based on the behavioral structure in that window, where the behavioral structures are encodings of the relative placements of the decision points in that window as determined by an offline time-series algorithm. These classes then provide the targets for a series of connected classification problems. An approximate online algorithm is obtained by training a multi-task classification neural network to solve these. Results on one-dimensional synthetic and stock-market data show that the predictive behavior of this method matches our intuitions, where it is most accurate when explaining the data and least accurate when predicting into the future. **Novelty and significance.** I am not an expert in this domain but to my knowledge the proposed approach is novel and presents an interesting method for using offline algorithms to create datasets for training machine learning models to approximate the outputs of the offline algorithms. I think the idea could be of interest to the community. That said, the paper does not provide any way to evaluate the significance of the proposed result, as there are no empirical (or theoretical) comparisons to any other methods. Thus, it is impossible to situate the proposed method either relatively or absolutely to determine whether the method will be of any benefit to the community. The paper presents two datasets (a synthetic toy dataset and a constructed dataset of historical stock market data), neither of which seems to have been used in the literature before, and trains the proposed method on these datasets but compares to no other methods. The results show that the method has higher accuracy for the easier classification tasks and lower accuracy for the harder prediction tasks, and that the method seems to get above-chance accuracy on most problems, but this does not tell the reader anything about the overall performance and behavior of the algorithm. In future revisions of the paper, the authors should compare to other algorithms in this same space. A reasonable place to start is with the works discussed in the related work section.
You can show generality by taking the proposed algorithm and comparing it with multiple different existing approaches on the different tasks that each of those existing approaches works on. If the scores of the proposed approach are reasonable, then we will have some evidence that it works as claimed. I urge the authors to also perform ablations on the method. What effects do changes in model architecture have? Or how does the choice of offline algorithm affect the method? How do variations of the synthetic dataset affect the proposed approach as opposed to other methods (i.e., is it more robust or more accurate in particular regimes, such as different values of n, |S|, \\gamma, and d)? Note that two of the three proposed values of $\\gamma = {0, 0.5, 1}$ are trivial and thus do not provide much information. I encourage the authors to also include $\\gamma=0.25$ and $0.75$ to better show trends, and to plot these values instead of just putting the numbers in a table. Further, showing top-$k$ for $k=5,2,1$ seems unnecessary. 5 and 1 would be sufficient. Regarding the claim of meeting or exceeding performance on ML-based stock prediction systems, there is no evidence given in this paper for this claim so it is unsubstantiated. As I understand it, the cited paper (Rezaei et al., 2021) uses an entirely different dataset, so comparisons of accuracy are meaningless. **Clarity.** Overall the method is fairly clearly explained, and the remainder of the paper is clear. I think the paper would benefit from providing a summary of the method at the beginning of section 2, and from some changes to notation to simplify the presentation and to fix some issues with the notation. The precise method to generate the synthetic dataset and create the stock market data should be detailed in the paper as well, without requiring readers to go to the (not yet provided) code. **Detailed questions and comments.** - Preprocessing both the train and test splits together is wrong as it allows information to bleed from test to train, both in the form of the normalization and the set of structures trained on. All pre-processing should be performed only on the train data, the statistics retained, and then these used on the test data. - The fact that the number of unique structures $|S|$ changes for different values of $\\gamma$ makes it difficult to compare trends across values of \\gamma. Instead, I would suggest the authors change the dataset generation process to first specify an alphabet of structures $S$ and then generate (noised) trajectories from this alphabet. - It appears that $\\lambda$ is used both as an index and a count, but the count value of $\\lambda$ always equals $n$, so why not just use $n$? - Defining $|S| = k$ is confusing as $k$ is already (and typically) used as an index variable and it is nonstandard and unclear to use it as a count. - Please define a domain for the class labels and use that directly to simplify notation. - The definition of a window seems to assume that decision points are uniformly spaced, but this is not made explicit anywhere. - The definition of the estimator $f$ in eq (2) does not match the text, as it should be mapping onto the simplex of the class label domain based on the corresponding text. - Please explain the method for computing the decision points (l1TF) in more detail. Overall, this paper lacks an evaluation for the proposed method, and thus cannot be accepted. 
The proposed approach seems interesting, and I encourage the authors to resubmit after incorporating a proper evaluation by comparing to other methods on established datasets and addressing some of the other comments above (in particular the dataset issues). <doc-sep>This paper considers the problem of an offline algorithm that operates on a time-series X to obtain a sequence of decisions in an online setting. That is, it tries to approximate the behavior of this offline algorithm in a setting where at time t the algorithm only has access to the input until t (whereas the offline algorithm can look ahead and optimize). They pose this as a multi-task learning problem, where they slice the input into windows of size d, and the goal is to map each d-dimensional window to one of the k possible structures in the dataset. They propose an MTL algorithm and use simulations and real-world stock market data to study the effects of their approach. Strengths: + A novel formulation and research topic. The idea of trying to predict the behavior of an offline algorithm in an online setting using multi-task learning is a new approach. The exact formulation and the way to pose this as MTL are non-trivial. The bulk of the contribution of this work is this modelling approach. Once this is figured out, the proposed algorithm itself is standard multi-task learning. + This paper contributes to the now-growing line of work on bridging classical algorithms with machine learning. In that line of work, the considered approach is novel. It gives a new perspective. The typical direction has been to use the ML model as hints to improve the online/offline algorithm. On the other hand, here the offline algorithm is bridged to its online counterpart via a machine learning task. + For the most part the paper is clear and well-written. Weakness - The first main weakness I find in this paper is that it does not motivate the problem sufficiently well. In particular, the online problem and its formulation as an MTL problem seem very abstract to the reader. It is not clear how to use the outcome of this modelling in an actionable form. In particular, how does one interpret the class prediction for a window? What happens if the number of classes is unknown/evolving? Maybe elaborating on this with a toy/standard offline algorithm before making it abstract would help the reader a great deal. - Related to the above, the formulation makes it seem like this applies to any offline algorithm. But it really only applies to offline algorithms that work on time-segmented data. So it comes off as over-selling the main contributions of the paper. Please correct me if I am wrong; if not, I would reword the introduction to make this aspect very clear. I like some of the ideas of this paper, but overall I think that it falls just below the bar because of the reasons I stated in the weaknesses. Please correct me if my understanding is incorrect.
In general, I like the idea of approximating the behavior of offline algorithms through the lens of multiple progressively-forward looking tasks (which essentially predicts the trajectory of future actions of the offline algorithm), since this allows us to predict further into the future (as opposed to predicting one-step ahead in standard ML methodologies). To the best of my knowledge, the idea of encoding the behavior of offline algorithms in graph structures, and then predicting the occurrence of such structures for multiple actions ahead via a multi-task learning framework is novel. The following are some questions/concerns: 1. From my understanding, the ultimate goal of the whole paper is to approximate the behavior of offline algorithms in real time, as opposed to directly predicting the ground truth evolution of the time series. This seems to me that the proposed framework’s performance is primarily driven by how well the offline algorithm can fit the historical data. That being said, if the offline algorithm significantly overfits the offline data (e.g. some complex deep neural network), does this mean the offline-to-online framework can also perform arbitrarily well under certain conditions? If so, I find this hard to believe. I might be misunderstanding something here, and it would be great if the authors can provide some more explanations and insights in the paper (e.g. what are some key drivers for the proposed framework’s performance, and how does the proposed framework’s performance relate to that of the offline algorithm). 2. From a practical perspective, it seems to me that the algorithm is very "data hungry" as the number of structures may grow exponentially in the number of decision points in each structure. Hence, I believe there is this inherent trade-off between the amount of data required for labeling structures and how far we can predict into the future. The paper seems to be lacking detailed discussions for this tradeoff, or, on a related note, for how one should choose the "optimal" number of decisions within a structure. 3. I am confused about the occurrence moments of predicted future actions (since the proposed algorithm is predicting X actions ahead, instead of X moment ahead). Consider the stock market example, where we have task 1 that predicts 1 action ahead of some offline algo, and task 2 that predicts 2 actions ahead. How do we know that the last predicted action in task 2 is further away in the future than the (single) prediction action in task 1? In other words, from my understanding the predicted structures are completely agnostic to actual occurrence moments, and hence we cannot compare prediction actions across tasks? I might have missed related discussions in the paper, and it would be great if the authors can add some more emphasis. 4. I find the discussions in Section 2 General Schema quite difficult to digest at first read, and not until I went through the entire paper did I better understand how the multi-task learning framework works. Perhaps instead of discussing pure concepts (e.g. structure, actions, etc.), introducing the methodological framework within the context of a simple concrete example (e.g. a simplified version of the stock market example with some dummy offline algorithm) would improve the overall clarity of this section. To the best of my knowledge, the proposed offline-to-online framework by predicting behavioral structures of the offline algo through a multi-task learning scheme is novel. 
For weaknesses, more explanations/discussions on the following aspects would improve the paper: 1. How the performance of the proposed framework relates to that of the offline algorithm; 2. Choice for number of decisions in a structure; 3. Comparing predictions across different tasks. The paper’s exposition in terms of explaining the key concepts can also be improved.
## A Brief Summary

This paper uses offline algorithms that can see the entire time-series to approximate online algorithms that can only view the past time-series. The way this is done is that the offline algorithm is used to provide discrete class targets to train the online algorithm. The paper presents results on synthetic and historical stock market data.

## Reviewer s1H9

**Strengths:**
- Practical problem.
- Novel approach.
- Clear presentation.

**Weaknesses:**
- No other baselines.
- No theoretical guarantees behind the approach.
- Writing could be improved.

## Reviewer EgW9

**Strengths:**
- Clear writing.
- Interesting research direction.

**Weaknesses:**
- The primary claim seems incorrect and unclear.
- Due to the unclarity about the primary claim of this paper, it is difficult to evaluate the paper.
- Lack of baselines.
- The lack of discussion of related works.

## Reviewer gii5

**Strengths:**
- Interesting and novel approach.

**Weaknesses:**
- Difficult to evaluate, with no empirical baselines or theoretical evidence.
- The datasets used in the paper have not been used in the literature before. The authors should provide experimental results on datasets from the literature as well.
- The paper needs to compare against the other baselines discussed in the related works.
- More ablations and analysis on the proposed algorithm are required.
- Unsubstantiated claims regarding being SOTA on the task, since the paper doesn't compare against any other baselines on these datasets.
- The paper can be restructured to improve the flow and clarity.

## Reviewer zoKR

**Strengths:**
- Novel and interesting research topic.
- Bridging classical algorithms and ML.
- Clearly written.

**Weaknesses:**
- Lack of motivation for the problem.
- The approach only works with offline algorithms that work on time-segmented data.

## Reviewer aaFn

**Strengths:**
- Novel algorithm.

**Weaknesses:**
- Potentially overfitting to the offline data.
- Data-hungry approach.
- Confusion related to the occurrence moments of predicted future actions.
- Section 2 is difficult to understand.

## Key Takeaways and Thoughts

Overall, I think the problem setup is very interesting. However, as pointed out by reviewers gii5 and EgW9, due to the lack of baselines it is tough to compare the proposed algorithm against other approaches, and this makes the paper's evaluation challenging. I would recommend the authors include more ablations and baselines in a future version of the paper and address the other issues pointed out above by the reviewers.
This paper investigates the internal workings of RNNs by mapping their hidden states to the nodes of the minimal DFAs that generated the training inputs, and to abstractions of those DFAs. The authors found that such a mapping in fact exists, and that a linear decoder suffices for the purpose. Inspecting some of the minimal DFAs that correspond to regular expressions, the induced state abstractions are intuitive and interpretable from the viewpoint of training RNNs on the training sequences. This paper is interesting, and the central idea of using formal languages to generate the inputs fed to the network is good (in fact, I am also doing different research that leverages a formal grammar with an RNN). Most of the paper is clear, so I have only a few minor comments: - In Figures 4 and 5, the most complex MDFA of 14 nodes does not have the lowest testing accuracies. In other words, testing accuracy is not generally proportional to the complexity of the MDFA. Why does this happen? - As noted in the footnote on page 5, state abstraction is driven by the idea of hierarchical grammars. Then, as briefly noted in the conclusion, why not use a simple CFG or PCFG to generate training sequences? In this case, state abstractions are clear by definition, and it would be interesting to see whether the RNN actually learns abstract states (such as NP and VP in natural language) through the mapping from hidden states to abstracted states. - Because this paper is exploratory, I would like to see more examples beyond only the two in Figure 6. Is it possible to generate a regular expression itself randomly to feed into the RNN? <doc-sep>This paper aims to show that an RNN trained to recognize regular languages effectively focuses on a more abstract representation of the FSA of the corresponding language. Understanding the type of information encoded in the hidden states of RNNs is an important research question. Recent results have shown connections between existing RNN architectures and both weighted (e.g., Chen et al., NAACL 2018, Peng et al., EMNLP 2018) and unweighted (Weiss et al., ACL 2018) FSAs. This paper asks a simple question: when trained to recognize regular languages, do RNNs converge on the same states as the corresponding FSA? While exploring solutions to this question is potentially interesting, there are significant clarity issues in this paper which make it hard to understand. Also, the main claim of the paper — that the RNN is focusing on a low-level abstraction of the FSA — is not backed up by the results. Comments: — The authors claim that the RNN states map to FSA states with *low* coarseness, but Figure 3b (which is never referred to in text…) shows that in most cases the ratio of coarseness is at least 1/3, and in some cases > 1/2. — Clarity: While the introduction is relatively clear, starting from the middle of section 3 there are multiple clarity issues in this paper. In the current state of affairs it is hard for me to evaluate the full contribution of the paper. - The definitions in section 3 were somewhat confusing. What is the conceptual difference between the two accuracy definitions? - When combining two states, does the new FSA accept most of the strings in the original FSAs? some of them? can you quantify that? Also, figure 6 (which kind of addresses this question) would be much more helpful if it used simple expressions, and demonstrated what the new FSA looks like after the merge. - section 4 leaves many important questions unanswered: 1. Which RNN was used? which model? which parameters? which training regime? etc. 2.
How were the expressions sampled? The authors mention that they were randomly sampled, so how come they talk about DATE and EMAIL expressions? 3. What is the basic accuracy of the RNN classifier (before decoding)? Is it able to learn to recognize the language? To what accuracy? - Many of the tables and figures are never referred to in text (Figure 3b, Figure 5) - In Figure 6, there is a mismatch between the regular expression (e.g., [0-9]{3}….) and the transitions on the FSA (a-d, @). - How come Figure 3a goes up to 1.1? Isn't it bounded by 1 (100%)? - The negative sampling procedure should be described in the main text, not the appendix. Also, it is not clear why shuffling the characters is considered an independent distribution. <doc-sep>Paper Summary - The authors trained RNNs to recognize formal languages defined by random regular expressions, then measured the accuracy of decoders that predict states of the minimal deterministic finite automata (MDFA) from the RNN hidden states. They then perform a greedy search over partitions of the set of MDFA states to find the groups of states which, when merged into a single decoder target, maximize prediction accuracy. For both the MDFA and the merged-classes prediction problems, linear decoders perform as well as non-linear decoders. Clarity - The paper is very clear, both in its prose and maths. Originality - I don't know of any prior work that approaches the relationship between RNNs and automata in quite this way. Quality/Significance - I have one major concern about the interpretation of the experiments in this paper. The paper seems to express the following logic: 1 - linear (and non-linear) decoders aren't so good at predicting MDFA states from RNN hidden states 2 - if we make an "abstract" finite automaton (FA) by merging states of the MDFA to optimize decoder performance, the linear (and non-linear) decoders are much better at predicting this new, smaller FA's states. 3 - thus, trained RNNs implement something like an abstract FA to recognize formal languages. However, a more appropriate interpretation of these experiments seems to be: 1 - (same) 2 - if we find the output classes the decoder is most often confused between, then merge them into one class, the decoder's performance increases -- trivially. In other words, you just removed the hardest parts of the classification problem, so performance increased (see the toy sketch after this review). Note: performance also increases because there are fewer classes in the merged-state FA prediction problem (e.g., chance accuracy is higher). 3 - thus, from these experiments it's hard to say much about the relationship between trained RNNs and finite automata. I see that the "accuracy" measurement for the merged-state FA prediction problem, \\rho, is somewhat more complicated than I would have expected; e.g., it takes into account \\delta and f(h_t) as well as f(h_{t+1}). Ultimately, this formulation still asks whether any state in the merged state-set that contains f(h) transitions under the MDFA to any state in the merged state-set that contains f(h_{t+1}). As a result, as far as I can tell the basic logic of the interpretation I laid out still applies. Perhaps I've missed something -- I'll look forward to the author response, which may alleviate my concern. Pros - very clearly written, understanding trained RNNs is an important topic Cons - the basic logic of the conclusion may be flawed (will await author response) Minor - The regular expression in Figure 6 (Top) is for phone numbers instead of emails.
"Average linear decoding accuracy as a function of M in the MDFA" -- I don't think "M" was ever defined. From contexts it looks like it's the number of nodes in the MDFA. "Average ratio of coarseness" -- It would be nice to be explicit about what the "ratio of coarseness" is. I'm guessing it's (number of nodes in MDFA)/(number of nodes in abstracted DFA). What are the integers and percentages inside the circles in Figure 6? Figures 4 and 5 are difficult to interpret because the same (or at least very similar) colors are used multiple times. I don't see "a" (as in a_t in the equations on page 3) defined anywhere. I think it's meant to indicate a symbol in the alphabet \\Sigma. Maybe I missed it.
This paper presents experiments showing that a linear mapping exists between the hidden states of RNNs trained to recognise (rather than model) formal languages and the states of the corresponding automata, in the hope of at least partially elucidating the sort of representations this class of network architectures learns. This is important and timely work, fitting into a research programme begun by CL Giles in 92. Despite its relatively low overall score, I am concurring with the assessment made by reviewer 1, whose expertise in the topic I am aware of and respect. But more importantly, I feel the review process has failed the authors here: reviewers 2 and 3 had as chief concern that there were issues with the clarity of some aspects of the paper. The authors made a substantial and bona fide attempt in their response to address the points of concern raised by these reviewers. This is precisely what the discussion period of ICLR is for, and one would expect that clarity issues can be successfully remedied during this period. I am disappointed to have seen little timely engagement from these reviewers, or willingness to explain why they stick by their assessment if not revisiting it. As far as I am concerned, the authors have done an appropriate job of addressing these concerns, and given reviewer 1's support for the paper, I am happy to add mine as well.
The paper proposes a modification of the saliency map/gradient approach to explain neural networks. # Method summary The approach is as follows: For each layer, the gradient w.r.t. its input layer is computed for multiple images concurrently. Then for conv layers, the activations are averaged per feature map (over space). As a result, for both fully connected and convolutional layers there is a 3D feature map. From these, at most b positive outliers are selected to be propagated further. What is a bit strange is that in the results section, guided backpropagation is mentioned and clearly used in the visualizations but not mentioned in the technical description. # Recommendation The current evaluation is definitely not sufficient for acceptance. The evaluation is done in a purely qualitative manner (even in section 4.1, Quantitive justification of outliers as relevant neurons). The results appear to be interesting, but no effort is made to confirm that the neurons considered to be relevant are truly relevant. On top of that, it is also evaluated only on a single network and no theoretical justification is provided. # Discussion w.r.t. the evaluation To improve section 4.1, the authors could for example drop out the most important neurons and re-evaluate the model to see whether the selected neurons have a larger impact than randomly selected neurons. Since the network is trained with dropout, it should be somewhat robust to this. This would not be a definitive test, but it would be more convincing than the current evaluation. Furthermore, high values do not imply importance. It might be possible that I misunderstood the experiment in Figure 2, so please correct me if this is the case in the reasoning below. In figure 2, FC2 is analyzed. This is the second-to-last layer. So I assume that only the back-propagation from the logits (I make this assumption since this is what is done commonly and it is not specified in the paper) to FC2 was used. Since we start at the same output neuron for a single class, all visualisations will use the same weight vector that is propagated back. The only difference between images comes from which ReLUs were active, but the amount of variability is probably small since the images were selected to be classified with high confidence. Hence, the outliers originate from a large weight to a specific neuron. The interpretation in the second paragraph of section 4.2.1 is not scientific at all. I looked at the German Shepherd images and there are no teeth visible. But again, this is a claim that can be falsified easily. Compare the results when German Shepherds with teeth visible are used and when they are not. The same holds for the hypothesis of the degree of danger w.r.t. the separation. Finally, there is no proof that the approach works better than using the magnitude of the neuron activations themselves, which would be an interesting baseline. Additional remarks --------------------------- The following is an odd formulation since it takes a 3D tensor out of a 5D one and mixes these in the explanation: "... the result of equation for is a 5D relevance tensor $\\omega^l_{n,i,..} \\in R^{H\\times W\\times K} ....." The quality of the figures is particularly poor. - Figure 1b did not help me to understand the concept. - Figure 2: The text on the figure is unreadable. - Figure 4a is not readable when printed. <doc-sep>Summary: This paper introduces step-wise sensitivity analysis (SSA), which is a modification of saliency maps (Baehrens et al. 2010, Simonyan et al.
2013) to a per-layer implementation. Instead of only measuring the importance of input nodes (e.g. pixels) to the classification, SSA measures the importance of all nodes at each layer. This allows for a way to find the important sub-nodes for each node in the tree given a particular sample. It is then straightforward to aggregate results across different input samples and output a dependency graph for nodes. Novelty: The technical contribution is a very simple extension of Simonyan et al. 2013. The main novelty lies within the dependency graph created from the node importance weights, but the usefulness of such a graph is unclear. In addition, the claim that this is the first method that aggregates results of an instance-specific method to gain model-centric results is a stretch, considering that other works have found important nodes or filters for a specific class by aggregating across instance-specific samples (Yosinski et al. 2015). Evaluation: The idea of producing an interpretable dependency graph for nodes is interesting, and the possible conclusions from such graphs seem promising. However, most of the interesting possible conclusions seem to be put off for future work. I don't believe the experiments are sufficient to show the significance of SSA. The main hypothesis is that dependency graphs allow for a way to interpret the model across samples, but the paper doesn't show any conclusive results about the data or models that weren't previously known. The results are mostly speculative, such as the fact that German shepherd and great white shark nodes are clustered together, possibly due to the fact that both of these classes share a PDR encoding sharp teeth, but that is never actually demonstrated. <doc-sep>Summary: The paper introduces a new approach for interpreting deep neural networks called step-wise sensitivity analysis. The approach is conceptually quite simple and involves some interesting ideas, but I have some serious concerns about whether the output produced by this method carries any meaning at all. If the authors were able to refute my concerns detailed below, I would raise my score substantially. Strengths: + Potentially interesting heuristic to identify groups of feature channels in DNNs that encode image features in a distributed way Weaknesses: - Using the magnitude of the gradient in intermediate layers of ReLU networks is not indicative of importance - No verification of the method on a simple toy example Details: Main issue: Magnitude of the gradient as a measure of importance. I have trouble with the use of the gradient to identify "outliers," which are deemed important. Comparing the magnitude of activations across features does not make sense in a convnet with ReLUs, because the scale of activations in each feature map is arbitrary and meaningless. Consider a feature map h^l[i,x,y,f] (l=layer, i=images, x/y=pixels, f=feature channels), convolution kernels w^l[x,y,k,f] (k=input channels, f=output channels) and biases b^l[f]: h^l[i,:,:,f] = ReLU(b^l[f] + \\sum_k h^(l-1)[i,:,:,k] * w^l[:,:,k,f]) Assume, without loss of generality, that the feature map h^l[:,:,:,f] has mean zero and unit variance, computed over all images (i) in the training set and all pixels (x,y). Let's multiply all "incoming" convolution kernels w^l[:,:,:,f] and biases b^l[f] by 10. As a result, this feature map will now have a variance of 100 (over images and pixels). Additionally, let's divide all "outgoing" kernels w^(l+1)[:,:,f,:] by 10.
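To make this concrete, here is a minimal PyTorch sketch of the manipulation on my own two-layer toy convnet (not the authors' VGG-16 setup; the channel index, layer sizes and the choice of the scalar 10 are arbitrary):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(2, 3, 16, 16)
conv1 = nn.Conv2d(3, 8, 3, padding=1)   # produces the feature map h^l
conv2 = nn.Conv2d(8, 4, 3, padding=1)   # consumes h^l, produces h^(l+1)

def run():
    xg = x.clone().requires_grad_(True)
    h = torch.relu(conv1(xg))                       # intermediate feature map h^l
    out = conv2(h)                                  # next layer (pre-activation is enough here)
    g, = torch.autograd.grad(h[:, 0].sum(), xg)     # gradient of feature map f=0 w.r.t. the layer's input
    return out.detach(), g

out_a, g_a = run()

# the manipulation: incoming kernel/bias of channel f=0 times 10, outgoing kernel slice divided by 10
with torch.no_grad():
    conv1.weight[0] *= 10.0
    conv1.bias[0] *= 10.0
    conv2.weight[:, 0] /= 10.0

out_b, g_b = run()
print(torch.allclose(out_a, out_b, atol=1e-4))      # True: the network function is unchanged
print((g_b.abs().sum() / g_a.abs().sum()).item())   # ~10: the gradient of that feature map scales with the rescaling
```

The network computes exactly the same function after the reparameterization, yet the gradient-based score for the rescaled feature map changes by an order of magnitude, so any outlier selection based on such magnitudes cannot be a reliable measure of importance.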
Simple linear algebra suffices to verify that the next layer's features h^(l+1) -- and therefore the entire network output -- are unaffected by this manipulation. However, the gradient of all units in this feature map is 10x as high as that of the original network. Of course the gradient in layer l-1 will be unaltered once we backpropagate through w^l, but because of the authors' selection of "outlier" units, their graph will look vastly different. In other words, it is unclear to me how any method based on gradients should be able to meaningfully assign "importance" to entire feature maps. One could potentially start with the assumption of equal importance when averaged over all images in the dataset and normalize the activations. For instance, ReLU networks with batch norm and without post-normalization scaling would satisfy this assumption. However, for the VGG-16 studied here, this is not the case. On a related note, the authors' observation in Fig. 4b that the same features are both strongly positive and strongly negative outliers for the same class suggests that this feature simply has a higher variance than the others in the same layer and is therefore picked most of the time. Similarly, the fact that vastly different classes such as shark and German Shepherd share the same subgraphs speaks to the same potential issue. Secondary issue: No verification of the method on a simple, understandable toy example. As shown by Kindermans et al. [1], gradient-based attribution methods fail to produce the correct result even for the simplest possible linear examples. The authors do not seem to be aware of this work (at least it's not cited), so I suggest they have a look and discuss the implications w.r.t. their own work. In addition, I think the authors should demonstrate on a simple, controlled (e.g. linear) toy example that their method works as expected before jumping to a deep neural network. I suppose the issue discussed above will also surface in purely linear multi-layer networks, where the intermediate layers (and their gradients) can be rescaled arbitrarily without changing the network's function. References: [1] Kindermans P-J, Schütt KT, Alber M, Müller K-R, Erhan D, Kim B, Dähne S (2017) Learning how to explain neural networks: PatternNet and PatternAttribution. arXiv:1705.05598. Available at: http://arxiv.org/abs/1705.05598
This work proposes a modification of gradient based saliency map methods that measure the importance of all nodes at each layer. The reviewers found the novelty is rather marginal and that the evaluation is not up to par (since it's mostly qualitative). The reviewers are in strong agreement that this work does not pass the bar for acceptance.
This paper studies the problem of predicting the segmentations and poses (position + yaw orientation) of multiple objects, given the image of a scene. The paper introduces a method that is trained without supervision for the segmentations, similar to several other recent object-centric models. In contrast to these existing models, the method proposed in this paper additionally estimates the 3D location of each object by predicting a depth map and classifies the yaw angle by representing the pose domain as equally-spaced bins. To do so, during training the method operates on a short clip of the scene recorded by a moving camera, and uses self-supervision by predicting the scene’s image at the next time step. At test time, the model is able to infer a representation of each object in the scene and segment them given a single image of the scene. # Strength 1. The paper tackles the difficult problem of learning to segment objects from an image using no supervision during training. 2. The problem setting and motivation for this task are explained clearly. A detailed description of the method, along with a pseudocode of the learning algorithm is provided in the paper. 3. The paper introduces a new synthetic dataset of images taken from scenes with multiple objects with varying shapes and textures (11 and 15, respectively). 4. Figures 3A-D are very helpful explaining the quantitative performance of the method in relation to the baselines. Figures 3E-G are also helpful showing the failure modes of the proposed method. # Weakness 1. I am not fully convinced that the comparison to the baselines is entirely fair. If I understand correctly, the rest of the methods were trained on single images without having access to previous and next frames. While I appreciate the method’s usage of consecutive frames as part of the supervision, I think this should be stated clearly when comparing with the baselines -- to avoid any overclaims. SynSin [1], for example, also predicts the next frame’s RGB and depth images without using additional supervision. Similar to the proposed method in this paper, SynSin synthesizes future images given their camera poses by warping the current frame using differentiable rendering. Combining an object-centric approach (like slot-attention) with such a method that performs future image prediction would make a fairer comparison, in my opinion. [1] Wiles, O., Gkioxari, G., Szeliski, R. and Johnson, J., 2020. Synsin: End-to-end view synthesis from a single image. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 7467-7477). 2. I think there are some key references missing in the paper too. For instance, a paper from last year also learns an object-centric representation in an unsupervised fashion to decompose the objects in a scene while estimating their poses in 3D [2]. How does [2] compare to the method presented in this paper? What are the main differences between them? [2] Henderson, P. and Lampert, C.H., 2020. Unsupervised object-centric video generation and decomposition in 3D. Advances in Neural Information Processing Systems, 33. 3. The results are reported using only the dataset introduced in the paper. I suggest including results from datasets like the Objects Room or CLEVR, where the existing methods (MONet, Slot-attention, Genesis, [2]) have already reported results on. 4. There are no ablation studies reported in the paper. Some parts of the loss function seem redundant judging from their current descriptions. 
I think it would be very helpful either presenting additional results by ablating the loss function / part of the model, or detail in the text why the method needs each of those components. # Recommended Decision I think in its current form the paper is not ready to be published. I strongly encourage the authors to clarify the positioning of this paper in relation to the state-of-the-art (see my comment on weaknesses). I vote for [reject, not good enough] for now, but I would be happy to increase my rating once the authors clear up my concerns. # Additional Feedback 1. It should be \\hat(m) instead of \\hat(t) in the L_spatial loss’ third term. 2. What is x_rand? If they are randomly sampled point positions then what is their underlying distribution? 3. Have you tested the method on scenes with more than 3 objects? Slot attention, for instance, is able to segment up to 9 objects. Since the paper introduces a new dataset, I would’ve hoped to see a more challenging benchmark with a wider variety of objects and number of instances. 4. Also, the dataset assumes that the camera moves with constant pose change. Have you tested the motion prediction (position, orientation and their time-derivatives) in the presence of different camera velocities? If not, are there any limitations of the method that prevents it? 5. I suggest adding the equivalents of Figure 3G for each baseline in the appendix. 6. What is the number of bins for the yaw angle prediction, b? Have you tried using a continuous representation for the rotation? 7. What does the predicted image (I’) for the next time-step look like? I suggest including more qualitative results to the paper for evaluating the warping function. 8. I think it is a good idea to introduce a more diverse benchmarking dataset for learning object-centric representations. However, I think the dataset proposed in the paper should be further expanded by including more daily life objects, instead of just geometric primitives like spheres, prisms and cones. I suggest taking a look at datasets like YCB or ShapeNet to include more realistic objects to your dataset. All in all, I think the paper attempts to tackle a very challenging problem. The method looks sound and the results might be interesting to the community in learning object-centric representations. However, there are some major concerns I have, mainly about the position and novelty of this paper with respect to a paper from last year, and the lack of results from datasets the state-of-the-art methods report on. <doc-sep>This paper presents a method to learn how to parse 3-frame videos into object-centric representations, which include segmentation masks and 3D positions and yaws for those objects frame-by-frame, as well as an image representation of the background, and an overall depth map. This is accomplished with a depth network, an object network with an LSTM at the bottleneck to iteratively pick out objects and their positions and yaws, and a decoder to provide segmentations, a warping/re-compositing operation that pastes the inferred objects at their estimated positions for the NEXT frame (i.e., with a constant-velocity assumption), and finally an "imagination" network which refines the estimated image. The model learns with a combination of 4 losses, which include image reprojection/prediction, depth consistency, a spatial term that includes consistency and randomness (though I have complaints about this), and finally a penalty term that discourages object probabilities from being zero. 
The paper also introduces a new synthetic dataset where prior methods do badly, and the proposed method does slightly better. The learned depth maps look good, but this is perhaps expected because camera poses are known. Overall, this paper is messily written, and proposes something that only works marginally better than prior methods, on a synthetic toy dataset. The performance in Table 1 illustrates this: the standard deviation of the segmentation IOU (0.34) is about even with the average IOU (0.35)! Looking at the qualitative results in Figure 2, it seems the model often misses objects completely, and produces segmentations that are fractional and have holes. I appreciate that the baselines are doing badly here also, but slot attention had very similar ARI-fg (0.42+-0.35 vs 0.46+-0.39). Statistically maybe these are equivalent. Is it possible at least to show that this method does better on the datasets where those previous papers managed to work (like CLEVR and those deepmind shapes datasets)? In general this area of work seems stuck in a setup where all methods work well on different toy data, but no methods work on real images or videos. It seems like the pose estimation is not working at all, judging by Figure 3-E. The discussion mentions this too: "object spatial location is inferred more easily than object pose (which we have not fully investigated), thus the predicted warping relies more on object translation than rotation". Maybe the object pose estimation can be removed from the paper entirely, to make things simpler. "our framework can easily generalize to 3D pose" I am sure the formulation can easily be extended to capture this, but I would prefer that the wording here be a bit more careful, to not suggest that the model is expected to work when pitch and roll are unknowns as well. (If you do expect it to work, please try it out and add the results to the paper.) "At time t, given the location of the camera o(t) ∈ R3 and its facing direction α(t)" Does this mean the camera intrinsics and extrinsics are assumed known? It would be great to say this directly. The discussion section supports this interpretation, saying "additional information required is that of observer’s self-motion, which is available both in the brain as efference copy and easily accessible in vehicles". I do not really buy the argument about the brain, or easy accessibility in vehicles. Self-driving vehicles usually register themselves to a known map; inferring pose from odometry alone causes drift. The LSTM reading objects from the full-frame encoding sounds like a very weak part of the model. Why not, for example, use a standard object detector, like MaskRCNN? I know you want to be self-supervised, but then why not self-supervise a well-known architecture that is proven to work, instead of inventing a new one? "The location prediction is restricted to the range of possible value within the virtual environment by logistic function and linear scaling" I don't know what this means. The imagination network is clumsily introduced. The first instance of it is already "the imagination network", and the motivation for it is only written in the caption of Figure1. I found a helpful description later on in page5. These things should be re-arranged. The description of the unprojection of a 2D coordinate into 3D space looks odd to me. Where is this coming from? Given that i,j are coordinates, what do |i| and |j| represent? 
Normally we start from x = fX/Z, and then invert this to X = Zx/f, where x is 2D, X is 3D, and f is the focal length. I am also not sure how the first term and second term here are able to multiply, since the first term is a 2-tuple (i, j) and the third term is a 3-tuple (i, d, j). I also got lost in the angular velocity equation. It seems the sum is over all \gamma_1, all \gamma_2, and all differences of the two within 2\pi of \omega. This is too many things to sum over! It seems like you won't end up with a valid probability distribution. I am probably misreading the notation here. In any case, why is this probability distribution useful? The object-based imagination network seems to require a scalar here for the rotational speed. So why not take the expectation of the first pose distribution, the expectation of the second, and then take the difference? It is interesting that you do not use depth supervision, but for toy settings like this I think it is OK to assume depth is known, and focus on other hard parts, like segmentation and tracking. I need some help understanding L_{spatial}. The second term is described as a contrastive loss, but it's a difference between known positions and random positions. Why is this a good idea, and why is it contrastive? Normally a contrastive loss compares two estimates and pushes them apart (rather than pulling every estimate toward random positions). I was surprised that the evaluation talked about a model called "OPPLE" ("Only OPPLE shows a bi-modal distribution."). Apparently this is the name of the proposed model, and the place to learn this is the caption of Figure 1! Please do not put critical information exclusively in figure captions. Typos: - probability mess -> probability mass - generated dataset -> generated a dataset - states-of-art -> state-of-the-art - network appear to -> network appears to - intersectin (in fig3) -> intersection - Figure 1 is never referred to in the text I think the paper is not quite ready for publication. The method does not work particularly well, a part of the method (object pose estimation) seems not to be working at all, and the evaluation is only on a new toy dataset and does not include evaluation on established datasets. Also, the text contains too much notation, and has some parts mixed up (with terms being used before they are introduced), but I think this can be fixed easily. <doc-sep>This paper presents an unsupervised object-centric scene representation technique that can decompose a scene into multiple objects (and segment the scene) and infer their 3D locations and pose. The overall setup is very similar to earlier models like MONet, but this model works on sequences of images, more precisely on 3 consecutive images. It uses the first two images to infer the 3D position and pose of objects and, combining this with known camera motion, tries to predict the last (third) image. The main contribution here is an optical-flow-based method to warp the image at time t using the predicted object location/pose/depth to predict (some of) the pixels in the image at time t+1. In more detail, the object extraction network outputs the location and pose of each object, and a separate depth perception network outputs the depth for each pixel in the image. The location and pose of objects are used to estimate the velocity of each object (e.g., by subtracting the position at t-1 from the position at t; note that this requires matching each object at time t-1 to an object at time t, which they do using a soft-matching approach).
These, along with the depth information, are then used to warp the image at t to predict pixels in the image at time t+1. This is possible only for a subset of the pixels, so for the rest they use a separate "imagination" network that takes in object information and predicts the color/depth and object masks at t+1. The predictions from the warping and the imagination network are then combined to form the final predicted color and depth images. To train the model, they require images and camera motion, and use a combination of losses: a reconstruction loss between the predicted and ground-truth images, and self-supervised losses on object location, pose, and depth. Overall, I found the paper quite interesting. I think the optical-flow-based warping to predict some subset of the pixels in the next timestep is in itself an important contribution, and this paper would be a good addition to the emerging literature on unsupervised object-centric models. I think the main concern with the paper is the limited experimental evaluation. The model is evaluated only on a single, rather simple dataset (that was generated by the authors). I know these models cannot usually handle complicated datasets, so it's fine to have a simple dataset, but I'd have liked to see the model evaluated on some of the datasets that other competing models were evaluated on. Some of those datasets might not have camera information, etc., but I'm sure there are other datasets that have the necessary information (e.g., see [1], [2] for some potential datasets). Also, the authors mention that their technique is the first to infer the 3D position of objects while segmenting images. If I'm not mistaken, [1] also does both and would be a great model to compare against. Other notes - Std deviations in Table 1 are too large. Is this a typo? If not, it looks like all models are doing equally well. - I don't have any specific recommendations here, but I found the model description a bit hard to follow. It might be a good idea to do another pass and see if it can be organized better. For example, the fact that warping and imagination are combined to get the final image can be mentioned earlier so the reader knows where/how the imagination network is used. - There are many inline equations; these make it difficult to parse the text visually. It would be nice to take these out of the text and split the long Section 2.3.1 into subsections and mark these. - Figure 1 is great but again hard to parse. Perhaps adding variable names (e.g., x, p, z) to the figure might make it easier to understand what goes in and out of each network. [1] Henderson, Paul, and Christoph H. Lampert. 2020. "Unsupervised Object-Centric Video Generation and Decomposition in 3D." arXiv:2007.06705. http://arxiv.org/abs/2007.06705 [2] Kabra et al., 2021. "SIMONe: View-Invariant, Temporally-Abstracted Object Representations via Unsupervised Video Decomposition." arXiv:2106.03849. https://arxiv.org/abs/2106.03849 Overall, I think the paper is quite interesting and would be of interest to the community. However, the empirical evaluation is very limited, and this makes it difficult to evaluate the full merit of the proposed approach. <doc-sep>Inspired by ideas about how humans learn about objects, the authors detail a system to train a neural network to perceive generic objects using image triplets where objects move and the viewer also can move. The viewer's motion is provided as an input.
Object perception by parts of the network operating on the first two time points is rewarded by predicting what is seen at the third time point (the training signal). Having been trained, the object perception part of the system, which is a relatively basic neural network, can segment objects from a single image. This new setup requires different data than what has been used in this space, and the authors contribute a synthetic dataset as well. Strengths I like the general thrust of this paper. While the basic idea has been around in cognitive psychology, which inspired the authors, I am not aware of any significant implementation of it. This is a nice first step. To make this work, the authors develop some algorithmic bits that might be useful for follow-up work. The results compare well to others in this space, showing that the training strategy has real promise. In some sense, the methods are not really comparable, as the other methods do not have a viable way to make use of the additional training data. (Also see the comment about the number of objects below.) Weaknesses The paper is harder to read than needed. I appreciate there is a lot of stuff going on, and I believe I was able to get most of it after a few iterations, so the lack of clarity is not extreme. It is not clear how differing numbers of objects are handled. It seems that the number of objects might simply be provided (K=3 in the dataset)? But if the number of objects is known, then the comparison to other work that infers it might not be fair. K is in the pseudo-code, but does not seem to be inferred. Should K be an input? A few more details on the LSTM would help. The paper could use some polishing. Figure 1, which is informative and key, could be tidied up. Also, I am guessing that the "Objects" box is the LSTM. The English could be improved in places, and there are a number of grammar errors (e.g., the first two uses of "pixel" in Section 2.3.1 should be "pixels", and 'This last two terms mean' on page 5). The authors do not say whether they will release their code. Comments A clear limitation of this work is that the data is synthetic and very simple (although perhaps more complex than in other work in this space). The authors acknowledge this in their discussion. While this might be the standard for this sub-area, real data from a robot or car should be relatively easy to get. Easy, block-world-like real data might be better for pushing the work forward than adding more texture and diverse lighting in synthetic data. Using just two time points is both a strength and a weakness. With just two, there is likely to be a lot of ambiguity between translation and rotation, especially if you generalize to more than one pose parameter. Looking at the effect of longer training sequences would be interesting. While the part of the system that is deployed at test time is a simple network, there are a number of hand-constructed components (e.g., the warping function) that make use of what we know about cameras; it would be more interesting to see those learned. This is a good first step in this direction, and should inspire follow-up work. The technical innovation is sensible. The results are good.
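To make the hand-constructed warping step and the two-time-point geometry concrete, here is a minimal toy sketch of constant-velocity, depth-based forward warping of a single pixel (my own illustration under standard pinhole assumptions; the function names, intrinsics, and the constant-velocity step are mine, not the authors', and camera-motion compensation is omitted for brevity):

```python
import numpy as np

def unproject(u, v, depth, f, cx, cy):
    """Lift pixel (u, v) with known depth to a 3D point in the camera frame."""
    return np.array([(u - cx) * depth / f, (v - cy) * depth / f, depth])

def project(p, f, cx, cy):
    """Project a 3D camera-frame point back to pixel coordinates."""
    return np.array([f * p[0] / p[2] + cx, f * p[1] / p[2] + cy])

# Toy example: the same object point observed at t-1 and t.
f, cx, cy = 300.0, 64.0, 64.0                    # assumed intrinsics
p_prev = unproject(70.0, 60.0, 2.2, f, cx, cy)   # 3D position at t-1
p_curr = unproject(74.0, 60.0, 2.0, f, cx, cy)   # 3D position at t
velocity = p_curr - p_prev                       # constant-velocity assumption
p_next = p_curr + velocity                       # extrapolated 3D position at t+1
print(project(p_next, f, cx, cy))                # pixel where it should appear at t+1
```

With only two observed time points, any rotation of the object about its own centre is invisible to this kind of extrapolation, which is the translation/rotation ambiguity mentioned above.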
This paper tackles the difficult problem of learning to segment objects from an image using no supervision during training. The paper is clearly written and a new synthetic dataset is made available. Unfortunately, the reviewers raised a number of issues with the submission (missing citations and comparisons to relevant related work / additional baselines + ablation studies / missing empirical evaluation of the proposed method on standard datasets beyond the toy dataset proposed by the authors). The paper received 1 reject, 2 marginal rejects, and 1 accept, but even the positive reviewer agreed that these were limitations. The authors also acknowledged these limitations and initiated experiments that are starting to address the reviewers' comments. At this time, the results of these experiments remain incomplete, and hence most reviewers agree that the paper should go through another round of reviews before it is publishable. I thus recommend this paper be rejected, in the hope that a subsequent revision will make it a much stronger contribution.
This paper studies input length extrapolation for Transformer language models; i.e., how Transformer LMs perform on test sequences that are longer than training sequences. The paper finds that how positions are encoded plays a crucial role in input length extrapolation. Models with sinusoidal and rotary position embeddings do not extrapolate well, while T5's position-dependent attention mechanism (dubbed T5 bias) enables better extrapolation. The paper then proposes ALiBi, another attention mechanism that also allows extrapolation while being computationally more efficient than T5 bias. These results are empirically confirmed on two datasets. Strengths: - To my knowledge, the paper is the first to study length extrapolation in Transformer language models. This is an important open problem for language modeling. - The proposed ALiBi mechanism is simple to implement and computationally efficient. - Experiments confirm that the proposed method enables length extrapolation for language modeling. - The paper is well-written and easy to follow. Weaknesses: - Experiments can be expanded. I am curious whether the findings also apply to other tasks, such as text classification, sequence labeling, and sequence-to-sequence generation. The proposed method is simple to implement, so I imagine it would not be hard to add a few more tasks. Missing related work: - Xu et al., 2021. How neural networks extrapolate: from feedforward to graph neural networks. This paper studies a similar kind of input length/size extrapolation for graph neural networks. The paper studies a novel problem, input length extrapolation in language modeling, and proposes a simple solution with good empirical results. The paper is also well-written. One way to further improve the paper is to add experiments on other tasks. Overall, I recommend acceptance of this paper. <doc-sep>The paper addresses the extrapolation problem, where a test sequence longer than the training sequences is given, and proposes Attention with Linear Biases (ALiBi), which adds a penalty, linear in the distance between a query and a key, to the attention scores. ALiBi shows remarkable input length extrapolation ability while remaining computationally efficient, with only marginal overhead compared to the standard transformer. Moreover, ALiBi does not introduce any additional parameters and generalizes well to a billion-scale language model. The method is simple and quite effective. The paper addresses an important research problem of input length extrapolation. ALiBi developed on Wikitext-103 generalizes to a 1.3B-parameter model. ALiBi's inductive bias also improves accuracy. Previous works did not rigorously evaluate the extrapolation of a transformer and simply assumed the possibility of extrapolation. In contrast, this paper carefully measured extrapolation in comparison with other works (Rotary and T5 Bias) and devised its own method to overcome the limitations of previous works. The method itself might look less novel or incremental because many of its parts are inspired by previous works. It would be much better to provide theoretical explanations, beyond the empirical evidence, of why ALiBi enables better extrapolation and higher final accuracy. ALiBi is only evaluated on language modeling in this paper. A transformer is a widely used neural architecture for many different tasks and domains. The authors also mention in the related work section that other works studied extrapolation on other tasks. I wonder what the authors think about whether ALiBi could be helpful for other tasks as well.
Of course, the importance of longer context and extrapolation ability may vary depending on the task. One minor question: the dot products of queries and keys are usually divided by the square root of the dimension, and this is perhaps omitted in the equation. I am curious whether this division is performed before or after adding the bias. Each head has a different slope for the linear bias, so I expect that heads learn different patterns. An analysis of that would be interesting. The authors argue that the method is robust to the slope choice, but they found that other alternatives, such as learning these slopes, underperform. Because many other design choices are possible, I am curious how they found the final solution and what they tested. ALiBi was tested on two different model sizes. According to the results, extrapolation with the billion-parameter language model (improving up to ~2x) is relatively inferior to that with the Wikitext-103-scale language model (improving up to ~6x). I worry whether extrapolation ability degrades as the model becomes bigger (or with more training data). The paper is well written and easy to follow. The contribution is concrete and practically useful, since a transformer is a building block of many machine learning models. More importantly, language models keep getting bigger, so their training cost is prohibitive. ALiBi improves the efficiency of language model (or transformers in general) training. <doc-sep>The submission proposes an effective approach to allow pre-trained transformer-based language models to extrapolate beyond the maximum length used in training, which potentially reduces the training time as extrapolation is empirically guaranteed. The proposed method adds fixed biases to the dot-product values between queries and keys that decay linearly w.r.t. the gap between the two positions. Empirically, the proposed method indeed successfully allows pre-trained models to be evaluated on sequences that are multiple times longer than the training ones without significant loss. At a very high level, I did enjoy the paper, as the method is simple and it indeed helps pretrained transformer-based models to extrapolate to much longer sequences. Some of my concerns were addressed in the authors' response, and the others do require extensive exploration. Therefore, I would like to see this submission at ICLR 2022. ====end of the update==== 1. When the dimension of a transformer module is roughly the same as or significantly larger than the number of tokens, the dimension becomes the main contributing factor to the time complexity, which explains why, with the linear bias, the model only achieved ~10% speedup. 2. I was wondering whether we could directly manipulate the probability after the softmax layer; it would probably achieve a similar effect. For example, one can multiply the probability map with a matrix with 1s in the diagonal terms and off-diagonal terms decaying linearly towards 0, which also effectively biases the model to learn from nearby tokens. My point here is that the submission could have been more generalised: as long as the bias terms are fixed before training and have an impact on the attention scores or the probability maps, the model will extrapolate to very long sequences. This would've been a stronger and more generalised message. 3.
The title and the intro gave me the impression that it was designed for transformers, but I was wondering whether it would hinder transformers' capability in modelling images or biological sequences, where tokens that are far from the current one can still play an important role. For images, the current approach to serialising an image is either at the pixel level or at the patch level, which means that tokens surrounding the current one in 2-dimensional space will be the context; the proposed approach would potentially worsen the situation. The submission proposes a simple yet effective method that helps pre-trained language models to extrapolate beyond the sequence length used in training, but I think the paper could've delivered a stronger message. I am open to discussions. <doc-sep>This paper investigates the extrapolation capability of transformer-based language models. The authors observed that existing positional encoding methods (e.g., sinusoidal embedding, relative positional embedding) fail to generalize to longer sequences in language modeling tasks. Therefore, they introduce a new positional encoding method called ALiBi, which adds a temporal bias to the multi-head attention to penalize attention scores in proportion to token distance. Experimental results show that ALiBi has significantly stronger extrapolation capability compared to other positional encoding methods. Pros: - Injecting a temporal bias into attention is a neat idea for the language model extrapolation problem. - This paper presents comprehensive experiments comparing the proposed method with existing positional encoding approaches. - The paper is well written and easy to understand. Cons: - It would be helpful to discuss potential applications of the proposed method beyond language modeling. - I am curious about comparing the transformer+ALiBi with an LSTM in extrapolation tasks. The idea of adding a temporal bias to attention is similar to the forget gate in LSTMs. Therefore, adding an LSTM as a reference will make the paper stronger. **Update:** The additional results are convincing. I raised my rating to acceptance. This paper proposes an interesting and novel idea for enhancing the extrapolation capability of transformer-based language models. A few additional experiments and discussions will make the paper stronger.
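To make the comparison with gating concrete, here is a minimal NumPy sketch of the linearly decaying attention bias as I understand it from the paper (my own reconstruction, not the authors' code; I add the bias after the 1/sqrt(d) scaling, though the paper may order these operations differently):

```python
import numpy as np

def alibi_bias(seq_len, num_heads):
    """Per-head bias: -slope * (query_pos - key_pos), zero on the diagonal."""
    # Geometric slope schedule: for 8 heads this gives 1/2, 1/4, ..., 1/256.
    slopes = 2.0 ** (-8.0 * np.arange(1, num_heads + 1) / num_heads)
    distance = np.arange(seq_len)[:, None] - np.arange(seq_len)[None, :]  # i - j
    distance = np.tril(distance)               # causal LM: only keys at or before the query
    return -slopes[:, None, None] * distance   # shape (num_heads, seq_len, seq_len)

def biased_attention_scores(q, k, bias):
    """Scaled dot-product scores with the ALiBi bias added before the softmax."""
    d = q.shape[-1]
    scores = q @ np.swapaxes(k, -1, -2) / np.sqrt(d) + bias
    causal_mask = np.triu(np.ones(scores.shape[-2:], dtype=bool), k=1)
    return np.where(causal_mask, -np.inf, scores)  # softmax over the last axis would follow

# Example: 4 heads, 6 positions, head dimension 8.
rng = np.random.default_rng(0)
q = rng.normal(size=(4, 6, 8))
k = rng.normal(size=(4, 6, 8))
print(biased_attention_scores(q, k, alibi_bias(6, 4)).shape)  # (4, 6, 6)
```

Note that the bias depends only on relative distance, which is why it can be applied to sequence lengths never seen during training.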
This submission proposes a simple, efficient, and effective position representation method for the Transformer architecture, called ALiBi. ALiBi enables better extrapolation as well as gains in efficiency and task performance. The submission also includes careful analysis and extensive experiments, and notably suggests that the gains of ALiBi may be less pronounced in more scaled-up settings. All reviewers agreed the paper should be accepted. I think it's reasonably likely that ALiBi will become a common choice in future Transformer models, or at the very least that this work will prompt further work on developing improved position representations for Transformer models. I therefore recommend acceptance.
This work automatically selects the best detection model while simultaneously controlling the false discovery rate. The experimental results show that the proposed method can control the false discovery rate (FDR) and the true discovery rate (TDR) simultaneously. This paper is very clearly written and easy to understand. I really enjoyed reading this paper, and it makes an interesting contribution. The key idea is estimating a more "stable" p-value for better FDR and then adding an extra step (i.e., model selection) for additional TDR control. It's not surprising to see in the experiments that this method has better TDR than methods that do not control TDR. This paper can be seen as a good extension of the work of Bates et al. [17]. The authors replace simple split conformal prediction with the Jackknife technique for more "accurate" estimated p-values, by fully exploiting the clean data and avoiding the randomness caused by data splitting. This idea is very straightforward. Another contribution is selecting the best model from a pool of detectors. "Best" here means the model that detected the most outliers in the new dataset, which is not a novel technique either. Overall, the novelty of this work is limited. The manuscript would benefit from adding an explanation of the novelty of this combination of two existing techniques, theoretically or practically. yes <doc-sep>This paper proposes a general AutoML framework for novelty detection and for controlling the error rate of the model. The framework consists of an automated model selection procedure with FDR control. A theoretical bound is provided for AutoMS. Extensive experiments are conducted to demonstrate its effectiveness. Strengths 1: The paper proposes a unified framework that can be combined with different base detectors. 2: The paper provides a theoretical bound on the FDR. 3: Experiments are conducted to evaluate the effectiveness of AutoMS on both synthetic and real-world data. Weaknesses 1. Only a few real-world datasets are selected in the experiments. As a comparison, the previous work MetaOD performed experiments on hundreds of datasets. The authors are encouraged to conduct a more thorough comparison with MetaOD. None <doc-sep>The authors propose a model selection method for novelty detection with false discovery rate (FDR) control. Given a detection model $M$, a detection threshold $L_M$ is selected based on the Benjamini-Hochberg (BH) procedure so that the FDR of $M$ is less than $\alpha$. To estimate the p-values in the BH procedure precisely, the authors propose to apply Jackknife estimation, which extends the existing work by Bates et al. After estimating $L_M$ for each model $M \in G$, the model that detects the most novelties with $L_M$ is selected as the best model $M^*$. The authors also give theoretical results showing that the FDP of $M^*$ is non-asymptotically bounded and the FDR of $M^*$ is asymptotically bounded by $\alpha$. Experiments using synthetic and real datasets demonstrate the advantage of the proposed method over the work by Bates et al. and METAOD. Strengths - Hyperparameter tuning or model selection is especially hard in unsupervised settings like novelty detection. This paper proposes a simple yet effective approach to this problem from the viewpoint of "$\max_{M \in G} \#\text{detection}(M)$, subject to $\text{FDR}(M) < \alpha$." - The control of the FDR of $M$ is mainly achieved by the existing framework of Bates et al., but a Jackknife extension of it is proposed.
Weaknesses - The computational overhead of applying the Jackknife procedure is not negligible, especially when the training set is large. - Experimental results, e.g., Fig. 3, suggest that the FDR control of $M$ gets slightly worse when applying the Jackknife compared to the original SRS (by Bates et al.). =====POST-REBUTTAL COMMENTS======== Thanks for the authors' response. The newly added experimental results and the authors' response addressed part of my concerns. I have raised my score. The computational overhead of applying the k-fold Jackknife relative to the original SRS should be assessed in the experiments.
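For reference, here is a minimal sketch of the BH-style threshold selection and model selection described in my summary above (my own illustration, not the authors' code; how the per-point p-values are estimated — split conformal vs. Jackknife — is precisely where the compared methods differ, and is not shown here):

```python
import numpy as np

def bh_rejections(p_values, alpha):
    """Benjamini-Hochberg: indices of test points declared novelties at FDR level alpha."""
    p = np.asarray(p_values)
    m = len(p)
    order = np.argsort(p)
    # Largest k (1-indexed) with p_(k) <= k * alpha / m; reject the k smallest p-values.
    below = np.nonzero(p[order] <= np.arange(1, m + 1) * alpha / m)[0]
    return order[: below[-1] + 1] if len(below) else np.array([], dtype=int)

def select_model(p_values_per_model, alpha):
    """Pick the detector that declares the most novelties while targeting FDR <= alpha."""
    counts = {name: len(bh_rejections(p, alpha)) for name, p in p_values_per_model.items()}
    return max(counts, key=counts.get), counts

# Toy usage with made-up p-values for two candidate detectors.
candidates = {"iforest": np.array([0.001, 0.02, 0.40, 0.75, 0.90]),
              "ocsvm":   np.array([0.03, 0.06, 0.50, 0.81, 0.95])}
print(select_model(candidates, alpha=0.1))
```

As I read it, the TDR gain comes from running this selection over the candidate pool, while the FDR guarantee hinges on the quality of the estimated p-values.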
The paper proposes a method for finding the best anomaly detector among a set of candidate methods that are all based on constructing a score function. The selection method is based on a leave-one-out estimate. Some theoretical results are presented and proven in the appendix, and in addition, some experiments are reported. Overall, this paper presents a novel and interesting method for an important problem, and the theoretical considerations are certainly a plus. The only major issue of the paper is that only 4 real-world datasets were considered, and despite the fact that this problem was raised by the reviewers, the authors did not include more during the rebuttal phase. From my perspective, a strongly theoretical paper does not require extensive experiments, but the paper under review does not fall into this category. For this reason, more experiments, on, say, another 15 datasets, would have been really helpful. In summary, this is an interesting paper with a sufficiently good theoretical part and some promising experiments. The latter could have been more extensive, but overall this paper should be accepted.
The paper proposes a novel neural representation of a given image, which decomposes the scene into K object instances and the background, enabling various tasks such as re-rendering and rearrangement. The learning process first samples K centers and represents each object as a learnable hidden variable z; a Gaussian-based, soft k-means-style clustering is then performed. The differences here are 1) for sampling the centers, there are learnt foreground and background priors, which can benefit the initial state of this learning process, and 2) the centers z are updated with a learnable GRU rather than simple mean pooling, which I believe gives more flexible representations for the clusters. Finally, the object clusters are discovered in this process. The authors evaluate on 3 self-created datasets and show several reasonable results on the mentioned tasks, such as 3D segmentation and rearrangement. Strengths: The overall direction is promising, and factorizing the scene representation is indeed an important issue to study. The proposed technique is sound overall, with a soft-k-means-like strategy to generate corresponding features in an unsupervised manner, though the GRU probably breaks the theoretical guarantee of convergence. Weaknesses: The overall results look more like a proof of concept; the objects in all test datasets are relatively simple, with uniform colors. I feel it can hardly work in real-scene scenarios, such as those shown in the GIRAFFE paper. Under these scenarios, the segmented and rearranged results lose many object details, yielding blurry or incorrect renderings. I feel there should be more improvement on these issues. The overall concept is fairly close to GIRAFFE, and the major difference could be the training scheme and whether inference uses a single image or multiple images. I would like to see additional technical improvements, especially higher-resolution representations or architectural improvements, in order to support better quality. Some questions: Does the algorithm always obtain a reasonable representation w.r.t. different initially sampled centers? This paper points out a good direction to dive into: unsupervised learning of compositional scene representations. However, the technical strength and novelty may need to be further improved. <doc-sep>This paper utilizes the powerful NeRF representation. The authors present a new approach to learn the scene arrangement in an unsupervised way. The training is performed on unlabeled datasets of similar objects in different arrangements. The inference requires only one RGB image as input, and can correctly deduce the arrangement and the 3D geometry of the objects. The authors show two supporting technical contributions of the system: (1) splitting background and foreground objects leads to better results, and (2) a coarse-to-fine training scheme alleviates the space and time needs. The authors show success on three synthetic datasets and various applications. The authors presented a novel idea. The system is engineered well, and the authors have shown success on three synthetic datasets and various applications. My concerns are as follows. 1. All experiments are conducted on synthetic datasets. Both training and testing use the same set of object shapes - only the arrangements of the objects are different. It is not clear how this approach can generalize to real-world scenes, since in the real world many objects will not have been seen in the training set. An ablation study that adds unseen shapes into the testing scenes could be very informative.
Additionally, a demonstration of the approach on real-world scenes would be a strong result to show in the paper (whether it is negative or positive). 2. How is the number of foreground objects decided? Does it have a strong impact on the results? Edit: The authors addressed my concerns in their revision. In particular, the authors showed additional results on a real-world image. As expected, the rendering is not as good as on synthetic data. However, I do not think this overshadows the contribution of this paper - instead, it shows the value as well as the limitations of the proposed method, and can inspire future work. The authors presented a novel idea. The system is engineered well and the authors have shown success on three synthetic datasets and various applications. However, all evaluation and experiments are done on synthetic datasets with the same set of objects and background. Thus, it is unclear how this approach can generalize to real-world problems. Edit: The authors addressed my concerns in their revision. I think it is a good paper and should be accepted. <doc-sep>The paper introduces an interesting new research direction of factorized, 3D-consistent neural representations. In particular, it proposes to combine slot-attention mechanisms with conditional neural radiance fields to segment and render novel views of a scene from a single input view. The authors also address one apparent shortcoming in the slot-attention paper: the background and foreground object latent codes are sampled from the same distribution, leading to breakdowns on scenes with complicated backgrounds. The paper proposes to learn two disjoint distributions, one for the background and one for the foreground, to alleviate this issue. ################################################################## # Pros 1. The authors address a significant new problem: modeling 3D scenes as a disjoint set of objects that can be combined and rendered for novel view synthesis. Their model can also be learned from only 2D data. 2. The proposed method uses state-of-the-art techniques to achieve its goal. Namely, it uses slot-attention mechanisms (NeurIPS 2020) and Neural Radiance Fields (ECCV 2020) and combines them to address a new problem. 3. Treating background and foreground latent vectors as being drawn from two separately learnable distributions addresses one of the significant drawbacks of slot attention. 4. I appreciated the authors mentioning that concurrent work by Stelzner et al. (2021) addresses the same issue and differentiates this paper appropriately. 5. The proposed method is overall technically sound, and code is provided, which will help with reproducibility. Also, the authors do an excellent job at mentioning all the hyper-parameters and model architecture details as far as I can see. 6. The paper provides comprehensive experimental evaluations, both quantitative and qualitative. #################################################################### # Negatives / Questions 1. It would have been great to compare this work to the concurrent work of Stelzner et al. (2021). I believe that this would help the community to put the two concurrent submissions into context. That said, I do recognize that the work of Stelzner et al. (2021) has not been published and that code/data for their approach is not publicly available, making comparisons extremely difficult; this should therefore not be a requirement for publication of this work. I hope that future work in this direction will pick up this issue. 2.
Section 3.3, Coarse to fine training. I agree that rendering images with the volumetric rendering framework proposed in NeRF requires many evaluations per ray, so reducing the number of rays sampled during training makes sense. One detail that is missing in this section is how many samples per ray are used (I include a generic compositing sketch after my questions to make this cost concrete). Moreover, do the authors still use the two networks for ray evaluation as in NeRF (a coarse and a fine one)? If not, why not? Second: In your approach, you sample random patches during the fine training stage and downsample the images during the coarse training stage. In the paper "GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis," the authors propose a different sampling strategy, which does not require downsampling. Would this strategy perform better? Worse? 3. Section 4.1, Segmentation Experiment Results. It would help to discuss the shortcomings of slot attention, namely, why that method fails on the segmentation task. My guess is that uORF works better due to the two disjoint latent spaces for foreground and background. I think this should be highlighted here. 4. Section 4.3, Scene Design and Editing Setup. I did not understand how the setup for modifying the foreground object pose/appearance works. I think you can switch the latent embeddings for the background, as you have a one-to-one mapping, but this is not the case for the foreground objects. Could you explain this process in more detail? 5. Appendix B2, Coordinate Space. Here you mention that you use a foreground box to encourage disentanglement of foreground and background slots. How does this foreground box work, and what is its influence on the final result? Mentioning this (maybe) crucial detail only in the appendix is not sufficient in my opinion, and it should be better explained in the main text. I vote to accept this paper for publication at ICLR 2022. I like the idea of modeling scenes as a combination of disjoint objects, which can be added, removed, modified, and recombined to form new scenes. I also think the paper is well written, well-motivated, and provides extensive experiments. In my opinion, the paper adds to the literature on neural scene representation/decomposition and is interesting to the community. I have some minor suggestions and questions (see above), which I hope the authors clarify during the rebuttal.
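As promised in point 2 above, here is the generic NeRF-style per-ray compositing I have in mind (a textbook sketch in NumPy, not the authors' code), which makes explicit why the per-ray sample count dominates rendering cost:

```python
import numpy as np

def composite_ray(sigmas, colors, t_vals):
    """Standard NeRF-style volume-rendering quadrature along a single ray.

    sigmas: (S,) predicted densities, colors: (S, 3) predicted RGB,
    t_vals: (S,) sample depths along the ray. Each of the S samples costs
    one network evaluation, so per-ray cost grows linearly in S.
    """
    deltas = np.diff(t_vals, append=t_vals[-1] + 1e10)             # spacing between samples
    alphas = 1.0 - np.exp(-sigmas * deltas)                         # per-segment opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas)))[:-1]  # accumulated transmittance
    weights = trans * alphas
    rgb = (weights[:, None] * colors).sum(axis=0)                   # composited pixel color
    depth = (weights * t_vals).sum()                                # expected termination depth
    return rgb, depth, weights

# Toy ray with 64 samples between depths 2 and 6.
S = 64
t_vals = np.linspace(2.0, 6.0, S)
sigmas = np.exp(-((t_vals - 4.0) ** 2) / 0.1)   # a density bump near depth 4
colors = np.tile([0.8, 0.2, 0.1], (S, 1))
print(composite_ray(sigmas, colors, t_vals)[:2])
```

Whatever the exact architecture, each of the S samples requires one network evaluation, so halving S (or the number of rays) roughly halves the rendering cost — hence my question about the concrete numbers used here.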
This paper develops a method for decomposing scenes into object-specific neural radiance fields. After the discussion phase, two reviewers support acceptance. Empirical results on multiple synthetic datasets and benchmarks appear convincing; the rebuttal also added an initial demonstration of generalization to real images.